
Edge Computing

 

Technology-enabled solutions are becoming ever more critical to the day-to-day operations of many enterprises. Among the most impactful technologies are AI (Artificial Intelligence), IoT (Internet of Things), cloud computing, and next-generation communication technologies such as 5G. Edge computing may garner fewer headlines, but it is a key enabler for many solutions that leverage these emerging technologies.

What is Edge Computing?

Fundamentally, edge computing makes processing and storage resources available in close proximity to edge devices or sensors, complementing centralised cloud resources and allowing for analytics close to those end devices. This results in a number of benefits that can be very relevant in an enterprise context, including:

  • (Near) real-time responsiveness. Analytics that may have previously been undertaken in offsite cloud locations can potentially be supported locally, avoiding the need for raw data to be transferred back and forth to a cloud location.
  • Improved device-to-device communications. Communications and the exchange of data between devices that are co-located can be routed more directly, without the need to transit cloud infrastructure.
  • Improved robustness, resilience and reliability. With more analytics undertaken locally to data sources, systems are not as susceptible to disruption in the case that a connection to a remote cloud location fails.
  • Improved security and data protection. With more data processed locally, many security and privacy issues associated with transmitting data to cloud locations can potentially be mitigated, and it can be easier for enterprises to demonstrate compliance with data privacy and data sovereignty requirements.
  • Regulatory compliance. Locally managed information potentially only needs to comply with local regulations, rather than multi-jurisdiction regulations that might apply in a cloud environment.
  • Reduced operating costs. Undertaking more analytics locally, supported by edge computing, can reduce the amount of data that needs to be sent to cloud locations for processing, so reducing communications costs associated with data carriage.

What kind of applications can Edge Computing support?

The simplest applications of edge technology are probably in building control and facilities management, including HVAC control, and security monitoring, access control and alarm systems. Such scenarios benefit not only from autonomous operation at the edge, but also from locating analytics at the (local) enterprise edge, which allows good oversight of all parameters associated with a particular facility. This kind of application is clearly applicable in almost any vertical sector.

Another potential application is production monitoring and control in an industrial context. For example, an edge-enabled system can be used to monitor a range of production lines and machinery in a manufacturing location to pre-emptively identify any maintenance that is required, and to dynamically re-schedule production given anticipated downtime, availability of parts, and the profile of orders that need to be fulfilled. Edge-enabled, AI-based quality control can be applied to video streams of manufactured products on a production line, to check for any defects. Information can also be streamed to Augmented Reality (AR) interfaces such as tablet computers, or video goggles, with minimal latency. Such applications are already a reality in production manufacturing environments, but may also find application in smart cities (for example, to control traffic) and in healthcare environments (for example, to manage patient flows).

The application of AI and Machine Learning (ML) to CCTV feeds is a key opportunity for edge computing, particularly given the proportion of data traffic that can be redacted by applying analytic rules at the edge. AI-enabled cameras can simply transmit an ‘alert’ that a certain decision rule has been triggered, avoiding the need to transmit a full moving-image feed to cloud infrastructure for analysis. Such solutions are commonly productised in the form of facial recognition cameras, or security cameras, and clearly have application in almost all vertical sectors.
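The decision-rule pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration: the frame scores, threshold and alert format are invented for the sketch, and a real camera would run an actual detection model rather than use pre-computed scores.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One video frame with a pre-computed detection score (0.0 to 1.0)."""
    frame_id: int
    person_score: float

def edge_alerts(frames, threshold=0.8):
    """Apply the decision rule locally and emit compact alerts only.

    The raw frames never leave the edge; only the small alert records
    would be transmitted onward to cloud infrastructure.
    """
    return [
        {"frame_id": f.frame_id, "rule": "person_detected"}
        for f in frames
        if f.person_score >= threshold
    ]

# Simulated feed: four frames, only one of which triggers the rule.
feed = [Frame(1, 0.10), Frame(2, 0.35), Frame(3, 0.92), Frame(4, 0.40)]
alerts = edge_alerts(feed)
print(alerts)  # one small alert record leaves the edge instead of four full frames
```

The data-volume saving is the point: an alert record of a few dozen bytes replaces a continuous video stream.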

Another application that is particularly suitable for edge computing is the management of an estate of devices, for instance at a solar or wind farm, in a range of agricultural and aquacultural applications, in transportation management (particularly rail and public transport), or in oil and gas extraction and mining operations.

In all, there are many, many contexts in digital transformation in which edge computing can be significantly beneficial.

The topography of Edge

There are many definitions of ‘Edge’ and some complexity around how it might work, so it’s worth unpicking some of that complexity.

Edge locations, data lakes and data streams

Thus far, we’ve referred to ‘the edge’ in quite general terms as being characterised by deploying compute power closer to edge devices or sensors. However, there are many different kinds of edge location as illustrated in the chart below.

[Figure: Edge locations]

The most local kind of edge computing is where compute power is installed on an actual end device (‘Device Edge’ in the figure above), for example an industrial robot. Next most local would be an Edge Gateway, located close to an end device; often this would be an industrial computer deployed to connect an OT (Operational Technology) asset to IT (Information Technology) systems.

The Enterprise Edge includes servers and compute power that are local to an enterprise, or on-site at an industrial facility, and sit at the interface between that local network and associated cloud infrastructure.

The Network Edge (specifically Multi-Access Edge Computing, or MEC, increasingly also termed Mobile Edge Computing) is an evolution of the enterprise edge scenario, where edge processing is provided at the ‘edge’ of a communications network. In practice, in most currently planned deployments of MEC, mobile network operators intend to deploy edge resources either at locations that could be better characterised as the ‘Data Centre Edge’ (see below) or, in the case of Mobile Private Networks and particularly in association with 5G, at the Enterprise Edge. In time, compute power can be expected to migrate ‘closer’ to the actual (radio access) network edge in public networks.

Beyond these quintessentially ‘local’ versions of edge is another definition of edge that has been adopted by providers of cloud infrastructure and which includes the provision of hosting capacity in secondary and tertiary locations: closer to the end devices, from the perspective of a cloud provider. We mention this for completeness only since, from the perspective of a solution designer, such locations are for most intents and purposes still ‘cloud’ locations. This is a rapidly developing area though, since the public cloud providers have been partnering with communications service providers to co-locate the cloud edge and network edge (MEC) together.

In terms of the processing of data, data lakes and large databases of information tend to reside ‘in the cloud’, whilst data streams can potentially be processed at any of the edge locations described above.

Dynamics of data at the edge

The next aspect of edge to consider is the dynamics of information (data) as it flows over edge assets.

Currently, organisations will often establish their analytic approaches and frameworks at central (cloud) locations. However, as described above, there can be significant benefits to undertaking certain analyses closer to the end device. The first dynamic associated with the advent of edge computing is therefore a trend to push decisions closer to the edge.

Conversely, raw information originating from a device can be redacted (i.e. filtered or aggregated) as it flows over edge infrastructure, ensuring that only necessary and meaningful information remains for onward transmission.
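As a minimal sketch of this kind of redaction, assuming fixed-size windows and invented temperature readings, raw values can be aggregated into summary records before onward transmission:

```python
def aggregate(readings, window=5):
    """Reduce a stream of raw readings to one summary record per window.

    Only the summaries are forwarded onward; the raw values stay at the edge.
    """
    summaries = []
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        summaries.append({
            "min": min(chunk),
            "max": max(chunk),
            "mean": sum(chunk) / len(chunk),
        })
    return summaries

# Ten raw temperature readings become two summary records.
raw = [20.1, 20.3, 20.2, 20.4, 20.0, 25.9, 26.1, 26.0, 25.8, 26.2]
print(aggregate(raw))
```

In a real deployment the windowing would typically be time-based and the aggregation rules application-specific, but the effect is the same: a large raw stream is reduced to a much smaller, more meaningful one.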

Lastly, information flowing from edge devices can be augmented as it flows across various edge assets by associating contextual and other information, often including information from other local devices.

In general then, volumes of data tend to reduce as they travel ‘away’ from an end asset, but the information that remains is generally richer and more meaningful.

These dynamics of information in the context of edge assets are illustrated in the figure below.

[Figure: Dynamics of edge information]

DataOps at the Edge

DataOps is a fast-emerging concept in the edge domain, allowing for the optimisation of data flows (and location of application deployment) within a local campus environment, including deployment to (and federated support of) devices with limited on-board processing. DataOps at the edge might include configurable application-specific synchronisation policies between cloud and edge (and within the on-campus edge), and data partitioning so that only the business data needed at a specific edge location is distributed and synchronised. This approach effectively renders all available compute-processing assets within a campus location as a single, managed, distributed computing environment and has the potential to more effectively support a range of edge-type applications.
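The data-partitioning element of such a policy can be illustrated with a small sketch (the record layout and site names below are invented for the purpose): each edge location receives only the subset of business data it actually needs.

```python
def partition_for_site(records, site):
    """Select only the records that a given edge location needs to hold.

    A real DataOps layer would apply this kind of rule continuously as
    part of its cloud-to-edge synchronisation policy, rather than as a
    one-off filter.
    """
    return [r for r in records if r["site"] == site]

orders = [
    {"order_id": 1, "site": "plant-a", "qty": 10},
    {"order_id": 2, "site": "plant-b", "qty": 4},
    {"order_id": 3, "site": "plant-a", "qty": 7},
]
# plant-b's data is never distributed to plant-a's edge assets.
print(partition_for_site(orders, "plant-a"))
```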

The general principle underlying this kind of distribution of applications within a campus edge environment is that application functionality that is mission-critical, time-critical, or reliant on high data volumes should be deployed as close to the relevant source(s) of data as possible, whilst recognising the potential to increase the scope of an application (in terms of the variety of data that it can ingest from different sources) as it moves further ‘away’ from an edge data source.
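The placement principle above can be expressed as a simple heuristic. The tiers, attribute names and rules in this sketch are invented for illustration; a real orchestrator would weigh many more factors.

```python
# Tiers ordered from nearest to farthest from the data source.
TIERS = ["device", "gateway", "enterprise", "cloud"]

def place(app):
    """Pick a deployment tier for an application.

    Mission-critical, time-critical or high-data-volume functionality is
    kept close to the data source; broader-scope analytics (ingesting
    many data sources) can sit further away.
    """
    if app["mission_critical"] or app["time_critical"] or app["high_data_volume"]:
        return "device" if app["fits_on_device"] else "gateway"
    # Wider scope (more data sources) favours a more central tier.
    return "enterprise" if app["data_sources"] <= 3 else "cloud"

quality_check = {"mission_critical": True, "time_critical": True,
                 "high_data_volume": True, "fits_on_device": False,
                 "data_sources": 1}
fleet_report = {"mission_critical": False, "time_critical": False,
                "high_data_volume": False, "fits_on_device": False,
                "data_sources": 12}
print(place(quality_check), place(fleet_report))
```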

Edge Computing Challenges

Whilst the advent of DataOps at the edge can bring significant benefits, there are also challenges inherent in deploying such a flexible system. One such challenge is edge orchestration: the automated configuration, coordination, and management of computer systems and software.

What is needed is a container orchestration system that automates software application deployment, scaling, and management, and that can provide a platform for deploying software containers (which function like individual computers from the perspective of the software programmes running in them) in an edge environment. Open-source Kubernetes is one of the main such solutions.
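As a brief illustration of how a containerised workload can be steered onto edge infrastructure, a Kubernetes Deployment can be pinned to labelled edge nodes using a nodeSelector. The names, labels and image in this sketch are invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vision-quality-check      # hypothetical edge inference workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vision-quality-check
  template:
    metadata:
      labels:
        app: vision-quality-check
    spec:
      nodeSelector:
        node-role.example.com/edge: "true"   # schedule only onto edge-labelled nodes
      containers:
        - name: inference
          image: registry.example.com/vision-qc:1.0   # hypothetical image
          resources:
            limits:
              cpu: "2"            # cap resource use so co-located workloads are protected
              memory: 1Gi
```

The resource limits hint at the multi-tenancy concern discussed below: capping what each container can consume is one basic way to stop one application starving another on shared edge hardware.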

But such application platforms are not the full story. Up to this point, this report has focussed on edge capabilities in an abstract sense. In a real-life deployment, it is likely that edge applications provided by different vendors, and intended to support different use cases, will need to be deployed alongside each other on the same infrastructure. A Kubernetes-type approach can help with the actual deployment of applications but may not ensure that one application cannot interfere with another. Clearly there is a need for some level of security in the edge environment, extending to ensuring trustworthy operation.

Application Stores for Edge Computing

Any industrial facility is likely to include machinery from a range of OEMs (Original Equipment Manufacturers). This immediately highlights two potentially conflicting dynamics: the desire of the facility owner to deploy edge applications that support their overall smart-factory vision, and the desire of OEMs to deploy applications to support ‘their’ machinery.

However, with flexible deployment of applications to edge assets, supported by a Kubernetes-based platform and enabled by appropriate security measures, these kinds of conflict can be dealt with, and both stakeholders (the facility operators and the OEM) can potentially deploy their own applications alongside each other on the same edge assets.

There’s no reason why the story should stop there though. Both the facility owners and OEM could maintain their own ecosystems of applications that can potentially be installed onto edge hardware. These ecosystems could extend to include third parties such as finance and insurance providers, and so on.
