
Developing AI at the edge

OCT 02, 2020 | Jim Morrish
 

Traditional approaches to AI are cloud-centric

There is a standard approach emerging for the adoption of AI at the edge: large volumes of data (‘data lakes’) reside in the cloud and are used to develop AI rules, which are then deployed to the edge. Approaching AI rule development in this way leverages the strengths of the cloud (availability of processing power, and access to large data sets) and of the edge (autonomy of operation, and speed of response), and it avoids the cost of transmitting large volumes of data to cloud infrastructure simply to apply those rules. The technique is used to develop a wide variety of rules today, ranging from AI rules that analyse CCTV feeds for particular patterns of activity through to rules that support the pre-emptive maintenance of industrial equipment. These rules are enhanced over time as more data is sent to the cloud, and as feedback on the efficacy of already-developed rules is received.
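As a rough illustration of that workflow, the sketch below trains a model against pooled ‘data lake’ telemetry and then applies it locally on a device. It is a minimal sketch only: the scikit-learn classifier, the synthetic data and the function names are hypothetical stand-ins, not a description of any particular vendor’s pipeline.

```python
# Minimal sketch of the cloud-train / edge-deploy pattern described above.
# Assumptions (hypothetical): scikit-learn is available in the cloud
# environment, and the edge runtime can load a serialised model.

import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# --- Cloud side: develop rules against the aggregated 'data lake' ---
def train_rules(features: np.ndarray, labels: np.ndarray) -> bytes:
    """Fit a classifier on pooled fleet data and serialise it for deployment."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(features, labels)
    return pickle.dumps(model)

# --- Edge side: apply the deployed rules locally, with no cloud round trip ---
def classify_locally(model_blob: bytes, sensor_reading: np.ndarray) -> int:
    """Run inference on the device itself for autonomy and speed of response."""
    model = pickle.loads(model_blob)
    return int(model.predict(sensor_reading.reshape(1, -1))[0])

if __name__ == "__main__":
    # Synthetic stand-in for a data lake of machine telemetry (hypothetical).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    deployed_model = train_rules(X, y)                            # happens in the cloud
    print(classify_locally(deployed_model, rng.normal(size=4)))   # happens at the edge
```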

Moving AI development to the edge

But there is a potential enhancement to this approach. I recently spoke with ONE Tech, an interesting company that is pioneering the development of AI rules at the edge. There are two main strands to the thinking here, and I’ll discuss each below.

Firstly, because edge components sit next to the source of the data, ingesting it is not constrained by the cost or availability of connectivity, so they are well placed to take in huge volumes of data. An edge component can therefore analyse a far greater proportion of the data that a machine generates than the more typical approach allows, in which raw information is filtered and only data assessed as potentially meaningful or significant is transmitted to the cloud. As a result, AI developed at the edge can draw on a much richer and more granular set of data than AI developed in the cloud, which potentially results in better and more effective rules.
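The sketch below illustrates the contrast in the crudest terms: the edge sees every raw sample, while the cloud only ever sees the subset that survives a significance filter before uplink. The threshold and the sample values are hypothetical.

```python
# Minimal sketch of the contrast described above: full-granularity data at the
# edge versus a pre-filtered stream in the cloud. Values are hypothetical.

from typing import Iterable, List

UPLINK_THRESHOLD = 5.0  # hypothetical: only readings above this are 'significant' enough to uplink

def samples_seen_at_edge(raw_samples: Iterable[float]) -> List[float]:
    """The edge can analyse the full-granularity stream."""
    return list(raw_samples)

def samples_seen_in_cloud(raw_samples: Iterable[float]) -> List[float]:
    """The cloud only receives the filtered subset, so rules developed there
    are trained on a coarser view of the machine's behaviour."""
    return [s for s in raw_samples if abs(s) > UPLINK_THRESHOLD]

if __name__ == "__main__":
    stream = [0.2, 1.3, 5.4, 0.1, 7.8, 0.3]  # hypothetical vibration readings
    print(len(samples_seen_at_edge(stream)), "samples available at the edge")
    print(len(samples_seen_in_cloud(stream)), "samples available in the cloud")
```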

There are downsides to this approach, of course. Analyses undertaken at the edge of a network cannot easily compare data streams across a range of similar devices in different locations, and the processing power available is generally more limited.

But these downsides are to some extent offset by a second technique, which relies on another AI engine deployed locally to assess whether a device is ‘working as normal’ or not. If a device is not working as normal, a message can be escalated to human operators, or to some supervising AI system, to highlight a potential problem that needs further review. Those supervisors (or supervising systems) can determine whether the abnormal behaviour is indicative of a potential problem, take appropriate remedial action, and update the AI rules running on the local device. In essence, this is a neat way to accelerate the development of AI rules, a task that is often hindered by the lack of information available to identify failure states (industrial systems fail quite rarely, so there is often a dearth of data on such events to use for AI training). Deploying a local AI engine to identify abnormal operating conditions effectively surfaces candidate data combinations that could be indicative of potential failure states, potentially before those failure states occur.
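As a loose illustration of that second strand, the sketch below uses a simple rolling z-score check as the local ‘working as normal’ test, escalates anything that breaches it, and retains the flagged readings as candidate training data for new rules. The window size, threshold and class names are hypothetical; ONE Tech’s actual engine will certainly be more sophisticated.

```python
# Minimal sketch of a local 'working as normal' check: flag readings that
# deviate sharply from recent behaviour, escalate them for review, and keep
# them as candidate data for developing failure-state rules. All parameters
# are hypothetical.

from collections import deque
from statistics import mean, stdev

class NormalityMonitor:
    def __init__(self, window: int = 200, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)   # recent 'normal' behaviour
        self.z_threshold = z_threshold
        self.candidate_events = []            # flagged data for rule development

    def ingest(self, reading: float) -> None:
        if len(self.history) >= 30 and stdev(self.history) > 0:
            z = abs(reading - mean(self.history)) / stdev(self.history)
            if z > self.z_threshold:
                self.escalate(reading, z)
                return  # keep the anomaly out of the 'normal' baseline
        self.history.append(reading)

    def escalate(self, reading: float, z: float) -> None:
        """Hand the anomaly to a human operator or supervising AI for review."""
        self.candidate_events.append(reading)
        print(f"Abnormal reading {reading:.2f} (z={z:.1f}) escalated for review")

if __name__ == "__main__":
    monitor = NormalityMonitor()
    for i in range(300):
        monitor.ingest(1.0 + 0.01 * (i % 7))   # hypothetical steady-state telemetry
    monitor.ingest(9.0)                         # a sudden excursion gets escalated
```

The flagged readings collected in this way are exactly the kind of scarce ‘abnormal’ examples that are otherwise hard to come by when training failure-detection rules.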

Accelerating the development of AI

All in all, the approach outlined above is not a new ‘standard’ way of developing AI rules, and it won’t displace current development techniques. Rather, it is an interesting enhancement that potentially improves the accuracy of AI rules and accelerates their development.
