AI & Machine Learning

Artificial Intelligence, together with all its sub-components, is one of the most intriguing and potentially transformative of the emerging technology areas that Transforma Insights tracks.

Defining and categorising Artificial Intelligence

There are dozens of ways to define and categorise AI. One of the most common is the Artificial Intelligence/Machine Learning/Deep Learning triumvirate, which reflects the broad approaches taken. Artificial Intelligence (AI) is generally accepted as the umbrella term for a range of activities, all aimed at mimicking human intelligence. The most commonly discussed subset is Machine Learning (ML), which applies complex algorithms and statistical techniques to existing data in order to make (or inform) decisions or predictions. An important subset of ML, where much of the latest thinking is focused, is Deep Learning, which combines very large data sets with Neural Networks that seek to imitate the behaviour of the human brain, for instance through the use of reinforcement learning.

[Figure: Artificial Intelligence, Machine Learning and Deep Learning]

Another categorisation looks at the type of intelligence being developed. Artificial General Intelligence (AGI), for instance, aims to create the capability to perform a wide range of tasks based on independent decision-making. Artificial Narrow Intelligence, in contrast, seeks to perform a specific task, often extraordinarily well. The concepts of ‘Strong’ and ‘Weak’ AI are somewhat analogous.

A third useful categorisation looks at the specific techniques used to apply AI: Supervised, Unsupervised and Reinforcement Machine Learning, with their associated algorithms, as well as the next progression, Deep Learning.

[Figure: Machine Learning algorithms]

Supervised ML

Most AI implementations until now have focused on Machine Learning, and specifically supervised ML: signposting for a machine the activity that needs to be performed and indicating the best ways that it might be achieved. It is the simplest approach to implement, the easiest to understand and the easiest to validate. It is unsurprising that most of the success stories in AI to date have focused on the automation of time-consuming yet relatively simple tasks, which promise the quickest return on investment. Good examples include legal document analysis and medical image analysis.
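
As a minimal sketch of the idea, for illustration only, the following Python example uses scikit-learn to train a classifier on a labelled medical data set, echoing the medical analysis example above. The 'supervision' is the set of known outcomes the model learns from.

```python
# A minimal supervised ML sketch: the model is shown labelled examples
# (inputs paired with known outcomes) and learns a mapping it can then
# apply to new, unseen data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # labelled diagnostic data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)           # learn from the labelled examples
predictions = model.predict(X_test)   # predict labels for unseen data
print(f"Accuracy: {accuracy_score(y_test, predictions):.2f}")
```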

Unsupervised ML

Unsupervised learning involves the analysis of unstructured and/or unlabelled data to create a framework for understanding that data. The machine is not instructed how to achieve its goal, and the goal itself may be no more specific than imposing structure on unstructured data. Instead, it is let loose, to a greater or lesser extent, on a data set. The key objective of unsupervised ML is to find structure where it may not have been seen before and to cluster data. An example project might involve customer segmentation, where the ML is presented with a set of data about customer buying habits and told to find trends or commonalities that allow those buyers to be segmented.
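
A minimal sketch of that customer-segmentation example, with data invented purely for illustration: k-means is given unlabelled buying-habit figures and asked only to find a given number of clusters.

```python
# A minimal unsupervised ML sketch: k-means receives unlabelled customer
# data and is told only how many clusters to find, not what they mean.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Invented buying-habit features: [annual spend, purchases per year]
customers = np.vstack([
    rng.normal([200, 2], [50, 1], size=(100, 2)),     # occasional buyers
    rng.normal([1500, 20], [200, 4], size=(100, 2)),  # frequent big spenders
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print(np.bincount(segments))  # size of each discovered segment
```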

Reinforcement ML

Reinforcement learning is characterised by trial and error. The system makes decisions based on previous experience, for instance trying a particular action and failing, often trying and failing millions of times, to work out the best approach to take in order to maximise the ‘rewards’ it receives from its feedback system. The objective of reinforcement systems is to create a policy for future action. A good example is gaming, where the AI trains itself by playing the game and progressively improving its performance.
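
As a toy illustration of that trial-and-error loop (a sketch only, with an invented reward setup), an epsilon-greedy agent learns which of three actions pays off best purely from the feedback it receives:

```python
# A minimal reinforcement learning sketch: an epsilon-greedy agent tries
# actions, observes rewards, and gradually builds a policy that favours
# the action with the highest observed pay-off.
import numpy as np

rng = np.random.default_rng(0)
true_rewards = [0.2, 0.5, 0.8]   # hidden pay-off rate of each action
estimates = np.zeros(3)          # the agent's learned value estimates
counts = np.zeros(3)

for _ in range(10_000):
    if rng.random() < 0.1:                    # explore: try a random action
        action = int(rng.integers(3))
    else:                                     # exploit: use the best estimate
        action = int(np.argmax(estimates))
    reward = float(rng.random() < true_rewards[action])  # noisy feedback
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates.round(2))  # converges towards the true pay-off rates
```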

Deep Learning: the cutting edge

The most interesting cutting-edge developments lie in Deep Learning (DL). The principle of DL is that the algorithms are presented with large volumes of data and then asked to make their own decisions about how to categorise or react to what they see, perhaps in order to achieve a particular goal. Probably the most prominent area of exploration for DL is in software research to enable autonomous vehicles: the parameters under which the AI must function are potentially very diverse, so training that imitates humans and makes use of large amounts of data is highly appropriate.
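
A minimal sketch of the deep learning principle, using PyTorch (one of the frameworks discussed later) on a deliberately tiny problem: the network is shown input-output examples of the XOR function and works out the mapping itself, with no hand-coded rules.

```python
# A minimal deep learning sketch: a small neural network learns a
# non-linear function (XOR) purely from examples.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimiser = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)   # how far predictions are from the labels
    loss.backward()               # compute gradients
    optimiser.step()              # adjust the network's weights

print(model(X).detach().round().flatten())  # approaches [0., 1., 1., 0.]
```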

Components of AI

There are many component parts to AI, including hardware, software, data sources and consultancy. The chart below illustrates the main categories:

[Figure: The AI landscape]

Training data

At the bottom of the stack, and particularly relevant for deep learning, is training data. Deep learning requires large data sets, and the bigger the better, so ownership of large volumes of data becomes a differentiator for a company’s AI. This naturally triggers competition for access to the biggest and best data sets. Large general data sets are typically the preserve of hyperscale companies such as Amazon, Google and Microsoft. Others have acquired data-rich companies, as IBM did with The Weather Company. Innovation in generic deep learning AI capabilities will be hard to come by for companies without access to huge data sets, although there are many niches with specialist data owners.

Hardware

AI requires relatively sophisticated processors to run optimally. To date, GPUs (Graphics Processing Units) have been adapted to facilitate deep learning, and a new class of ‘AI Accelerator’ has emerged: multicore processors with massively parallel functionality and greater computational power and efficiency. An interesting development is Google’s Tensor Processing Unit (TPU), which is designed for neural networks. It provides a high volume of low-precision compute, i.e. it can process data very fast but without the numerical precision of a GPU, which is fine for most AI. Other capabilities that can be harnessed for efficiently deploying AI include High Bandwidth Memory (HBM), on-chip memory (as in the TPU), new non-volatile memory, low-latency networking and MRAM (Magnetoresistive Random Access Memory).
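
To illustrate the low-precision trade-off mentioned above (a NumPy sketch, with float16 as a stand-in for accelerator-style reduced precision): halving the bit width halves memory and bandwidth needs at the cost of fine numerical detail, which most neural network workloads tolerate well.

```python
# Illustrating the 'high volume of low precision compute' trade-off:
# float16 uses half the memory of float32 but loses fine detail.
import numpy as np

x = np.float32(1.0001)
print(np.float16(x))   # 1.0     -- the small difference is rounded away
print(np.float32(x))   # 1.0001  -- retained at higher precision

print(np.zeros(1_000, dtype=np.float16).nbytes)  # 2000 bytes
print(np.zeros(1_000, dtype=np.float32).nbytes)  # 4000 bytes
```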

Algorithms

Algorithms are the engine of machine learning. The data scientist will select a particular type of algorithm depending on the task at hand. In the same way that the broad categories of supervised, unsupervised, reinforcement and deep learning suit different broad categories of use case, so too the specific algorithm chosen is dictated by the use case.
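
As an illustrative sketch of 'algorithm follows use case' (task names invented for illustration), different problem types map naturally to different scikit-learn estimator families:

```python
# A sketch of how the use case dictates the algorithm family chosen.
from sklearn.cluster import KMeans                    # clustering (unsupervised)
from sklearn.linear_model import LinearRegression     # continuous prediction
from sklearn.linear_model import LogisticRegression   # classification

task_to_algorithm = {
    "will this customer churn?": LogisticRegression(),       # yes/no label
    "what will next month's sales be?": LinearRegression(),  # numeric value
    "which customers behave alike?": KMeans(n_clusters=3, n_init=10),
}
for task, algorithm in task_to_algorithm.items():
    print(f"{task} -> {type(algorithm).__name__}")
```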

Frameworks and libraries

Frameworks exist to make it quicker and easier to build AI applications. They act as a template and guide for developing, training, validating, deploying and managing the various aspects of an AI application. Prominent examples include Keras, PyTorch and TensorFlow.
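
As a sketch of that 'template and guide' role (using Keras with invented toy data): the framework reduces defining, training and validating a model to a handful of declarative calls.

```python
# A minimal framework sketch: Keras wraps model definition, training
# and evaluation in a few standard calls.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(200, 4)
y = (X.sum(axis=1) > 2).astype(float)    # invented labelled data
model.fit(X, y, epochs=5, verbose=0)     # train
print(model.evaluate(X, y, verbose=0))   # validate: [loss, accuracy]
```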

Platforms

AI software platforms open up the use of AI to data scientists and other functions alike. They allow non-technical users to make use of pre-built capabilities in data processing, model training and evaluation in order to rapidly accelerate deployment. Platform functions include tools for selecting the appropriate ML algorithm(s) and, sometimes, access to expert knowledge from the platform provider. Examples include Amazon ML/SageMaker, Google AutoML, H2O.ai Q and OpenML.
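
A rough sketch of what platform-style automated model selection does behind the scenes, using scikit-learn's grid search as a stand-in for a platform's selection tooling: candidate configurations are tried systematically and the best performer is kept.

```python
# A sketch of automated model selection: try several configurations,
# cross-validate each, and keep the best, as AutoML-style platforms
# do at much larger scale.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```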

Key AI use cases

There are thousands of potential applications of AI/ML. In its recent Artificial Intelligence market forecast, Transforma Insights has tried to quantify the space. Below are some of the key use cases:

Robotic Process Automation

RPA takes IT-based tasks that were previously handled manually by a human, observes them being performed and replicates them through intelligent agents. Typically this is done by way of bots that track human actions and aim to replicate them. Today RPA is highly focused on brownfield replication, i.e. automating existing processes, while its future trajectory is towards greenfield automation of new processes. Transforma Insights publishes a Technology Insight report providing an introduction to the RPA market: ‘Robotic Process Automation 101’.

Natural Language Processing (NLP)

NLP is probably the single most widely deployed use of AI (not least in the context of consumer smart speakers). It deals with understanding human speech and writing so as to provide more accurate inputs into other applications (either AI or non-AI). This might be as simple as interpreting a command to a smart speaker, all the way through to analysing and interpreting sentiment towards products in customer reviews. The end goal is to create a bot able to pass the ‘Turing Test’, i.e. to appear to respond as a human would.
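
A minimal sketch of the review-sentiment example above, using the Hugging Face transformers library (which downloads a pretrained model on first run; the reviews are invented for illustration):

```python
# A minimal NLP sketch: off-the-shelf sentiment analysis applied to
# product reviews.
from transformers import pipeline

classify = pipeline("sentiment-analysis")  # downloads a default model
reviews = [
    "This speaker understands everything I say, brilliant.",
    "Stopped working after a week, very disappointed.",
]
for review, result in zip(reviews, classify(reviews)):
    print(f"{result['label']:>8}  {review}")
```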

Behavioural analysis

Much of AI relates to image recognition and processing, often in the form of simple exercises such as identifying pictures of cats or spotting cars parked in a prohibited location. Behavioural analysis is a step more sophisticated, involving the interpretation of images, usually video streams, to understand the behaviour of the (usually) people being observed. This can be used for detecting suspicious behaviour, tracking employees for safety purposes, or even for social credit scoring, as seen in China.
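
A minimal sketch of the detection step that behavioural analysis builds upon, using OpenCV's pre-trained person detector on a stand-in frame; a real system would run this on live video and then track and interpret the detections over time.

```python
# A minimal sketch of the detection step behind behavioural analysis:
# locate people in a frame; behaviour is then inferred from how the
# detections move across successive frames.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = np.zeros((384, 512, 3), dtype=np.uint8)  # stand-in for a video frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
print(f"people detected: {len(boxes)}")
```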

Optimisation

There are dozens of ways in which enterprise activities can be streamlined, with potentially dramatic cost savings. These include business processes, systems, workflows, logistics, transport, human resources and numerous other areas. The aim is to augment (and in some cases replace) human decision-making.
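
As a toy sketch of this kind of optimisation (figures invented): linear programming chooses the cheapest split of shipments across two delivery routes, the sort of decision that optimisation systems automate at far larger scale.

```python
# A toy optimisation sketch: minimise total delivery cost subject to
# demand and capacity constraints.
from scipy.optimize import linprog

# Cost per shipment: 4 on route A, 3 on route B.
result = linprog(
    c=[4, 3],
    A_ub=[
        [-1, -1],  # route_a + route_b >= 10 (total demand)
        [1, 0],    # route_a <= 6 (route A capacity)
    ],
    b_ub=[-10, 6],
    bounds=[(0, None), (0, None)],
)
print(result.x, result.fun)  # optimal split and minimum total cost
```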

Financial services

The financial services segment was one of the first to adopt AI, due to the existence of large, accurate and comprehensive data sets, the need for efficiency, and the potential return on investment. As well as the use of AI for relatively mundane activities such as credit scoring and document (e.g. credit card application) analysis, at the most extreme the use of AI in financial trading is widespread today, with the majority of financial trades executed by trading bots.

The growing adoption of AI

Transforma Insights publishes highly granular forecasts of AI instances as part of its TAM Forecast Database. The total number of deployed AI instances will grow from 1.8 million in 2021 to 21 billion in 2030, a CAGR of 45%. Some 9% of Artificial Intelligence instances will be deployed on IoT devices, particularly consumer-facing products such as audio-visual equipment (including smart speakers) and cars.

[Figure: AI instance forecast by device type, 2020-2030]

The single biggest use case is Natural Language Processing, followed by Chatbots & Digital Assistants and Customer Behaviour Analysis.

[Figure: AI instance forecast, split by use case, 2030]

North America accounts for 30% of global instances, China for 27% and Europe for 23%. The bigger economies generally over-index compared to labour-intensive markets. The majority of AI instances are in consumer devices and consumer applications, reflecting the dominance of AI in smart speakers, TVs, passenger vehicles and other consumer devices.

[Figure: AI instance forecast by region and enterprise/consumer split, 2030]

For more information on AI forecasts, see the Artificial Intelligence Market Forecast Report 2020-2030.

AI challenges

While AI offers great opportunities, there are also some big potential risks.

AI bias

The first and most pressing challenge relates to ‘AI bias’. There have been numerous headline-grabbing examples, including David Heinemeier Hansson’s campaign about the disparity between his and his wife’s Apple Card credit limits, Microsoft’s Tay bot, which was shut down for parroting bigoted views, and the HR department that was rejecting perfectly good applications for no apparent reason. To tackle the AI bias issue specifically, Transforma Insights advocates the implementation of a robust oversight layer with responsibility for objective-setting and critical review, to ensure that the AI is not only doing what it is tasked to do, but also doing what is right.

[Figure: Transforma Insights AI oversight layers]

The challenge is partly one of maturity. The use of AI is immature and will naturally become more refined over time, but it is not clear that AI companies will have the opportunity to learn on the job. AI already receives plenty of negative PR over the prospect of removing the jobs of millions of workers; while it might be a net benefit to society, that is precious little comfort to those directly affected. We are therefore likely to see continued push-back against AI, which may create further challenges. The concept of the ‘adversarial patch’ is an interesting one to look out for: an artefact created specifically with the intention of fooling an AI.
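
A minimal sketch of the adversarial principle behind such patches, using PyTorch and an untrained stand-in model purely for illustration: the input is nudged in exactly the direction that most increases the model's loss (the Fast Gradient Sign Method), which against real systems is often enough to flip the prediction.

```python
# A minimal adversarial sketch (FGSM): perturb an input in the direction
# that most increases the model's loss, aiming to change its prediction.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                   # stand-in for a trained classifier
x = torch.rand(1, 4, requires_grad=True)  # a benign input
label = torch.tensor([0])                 # its correct class

loss = nn.functional.cross_entropy(model(x), label)
loss.backward()                           # gradient of loss w.r.t. the input

epsilon = 0.5
x_adv = x + epsilon * x.grad.sign()       # the adversarial perturbation
print("before:", model(x).argmax().item(), "after:", model(x_adv).argmax().item())
```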

The next AI winter?

Another risk is more fundamental, concerning diminishing returns on investment. The last decade, particularly with the development of deep learning, has been a good one for AI. However, the history of AI is one of boom and bust. There have been two ‘AI winters’ so far, in the 1970s and from the mid-80s to the mid-90s. Each was preceded by a period of great technological progress, after which progress slowed while awaiting the next breakthrough; the arrival of Deep Learning, for instance, pulled AI out of its last winter. There is a strong possibility that the benefits stimulated by deep learning will soon be all but exhausted, and we might face a third AI winter in the 2020s.

[Figure: The AI winters]

Much of the stimulus for the last decade’s success has come from the availability of unprecedented cloud compute capability. There is no immediate trigger for the next breakthrough, although that does not necessarily mean one will not appear. Perhaps the sheer scale of Deep Learning investment will trigger progress towards AGI. Alternatively, perhaps quantum computing will do the same: its building block, the qubit, is not limited to being just 0 or 1, which makes it highly applicable to AI. However, we could be well into the 2030s before it is harnessed effectively.
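
A toy illustration of the qubit point (NumPy only, not a real quantum simulation): a qubit's state holds amplitudes for both 0 and 1 simultaneously, and measurement yields each outcome with probability given by the squared amplitudes.

```python
# A toy qubit illustration: an equal superposition of |0> and |1>
# measures as 0 or 1 with 50/50 probability.
import numpy as np

state = np.array([1, 1]) / np.sqrt(2)   # amplitudes for |0> and |1>
probabilities = np.abs(state) ** 2      # Born rule: |amplitude|^2
samples = np.random.default_rng(0).choice([0, 1], size=10, p=probabilities)
print(probabilities, samples)
```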

The most likely scenario is continued incremental progress of existing deep learning capabilities. Advances are being made in hardware, large investments are being made in sweating data assets, and more AI platforms are coming on stream, helping to democratise the use of AI amongst more organisations. The 2020s will be an AI decade, but one based more on the wide adoption of current technology into more business processes than on the next technological step.
