Artificial Intelligence, together with all of its sub-components, is one of the most intriguing and potentially transformational of the emerging technology areas that Transforma Insights tracks.
There are dozens of ways to define and categorise AI. One of the most common is the Artificial Intelligence/Machine Learning/Deep Learning triumvirate, which looks at the broad approaches taken. Artificial Intelligence (AI) is generally accepted to be the umbrella term for several types of activities, all aimed at mimicking human intelligence. The most commonly discussed subset is Machine Learning (ML), which applies complex algorithms and statistical techniques to existing data to make (or inform) decisions or predictions. An important subset of ML, where much of the latest thinking is focused, is Deep Learning, which combines very large data sets with Neural Networks that seek to imitate the behaviour of the human mind, for instance through the use of reinforcement learning.
Another categorisation looks at the type of intelligence being developed. Artificial General Intelligence, for instance, aims to create the capability to perform a range of tasks based on independent decision-making. Artificial Narrow Intelligence, in contrast, seeks to perform a specific task, often extraordinarily well. The concepts of ‘Strong’ and ‘Weak’ AI are somewhat analogous.
A third useful categorisation looks at the specific techniques for applying AI: Supervised, Unsupervised and Reinforcement Learning, together with their associated algorithms, as well as the next progression, Deep Learning.
Most AI implementations until now have focused on Machine Learning, and specifically supervised ML: signposting for a machine the activity that needs to be performed and indicating the best ways it might be achieved. This is the simplest approach to implement, the easiest to understand and the easiest to validate. It is unsurprising that most of the success stories in AI to date have focused on automating time-consuming yet relatively simple tasks, which promise the quickest return on investment. Good examples include legal document analysis and medical image analysis. A minimal sketch of the supervised workflow follows below.
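To make this concrete, here is a minimal supervised-learning sketch using scikit-learn. The bundled diagnostic data set and the choice of logistic regression are illustrative assumptions, not a recommendation for any particular deployment.

```python
# A minimal supervised-learning sketch with scikit-learn. The bundled
# diagnostic data set and logistic regression model are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labelled data: every sample comes with the 'right answer' to learn from.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)   # simple and easy to validate
model.fit(X_train, y_train)                 # learn from the labelled examples

# Validation is straightforward: compare predictions with the known labels.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```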
Unsupervised learning involves the analysis of unstructured and/or unlabelled data to create a framework for understanding that data. The machine is not instructed how to achieve its goal; instead, it is let loose, to a greater or lesser extent, on a set of data with, at most, a vague instruction such as to impose structure on unstructured data. The key objective of unsupervised ML is to find structure where it may not have been seen before, and to cluster data. A typical project is customer segmentation, where the ML is presented with data about customer buying habits and asked to find trends or commonalities that allow those buyers to be segmented, as sketched below.
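A minimal clustering sketch of that segmentation idea, assuming two invented features (annual spend and purchase frequency) and a hypothetical choice of four segments:

```python
# A minimal clustering sketch of customer segmentation. The two invented
# features (annual spend, purchase frequency) and four segments are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
customers = rng.random((500, 2)) * [1000, 52]   # spend in currency, buys per year

# No labels are given; the algorithm must find structure on its own.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(customers)
print(kmeans.cluster_centers_)                  # one profile per discovered segment
```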
Reinforcement learning is characterised by trial and error. The system makes decisions based on previous experience, for instance trying a particular action and failing, often millions of times over, to work out the best approach for maximising the ‘rewards’ it receives from its feedback system. The objective of a reinforcement system is to create a policy for future action. A good example is gaming, where the AI trains itself by playing the game and progressively improving its performance. A toy illustration of the reward loop follows below.
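The reward loop can be illustrated with a toy tabular Q-learning sketch; the five-state ‘corridor’ environment and the hyper-parameters are assumptions purely for illustration, not a real gaming benchmark.

```python
# A toy tabular Q-learning sketch. The five-state 'corridor' environment and
# the hyper-parameters are assumptions for illustration only.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = step left, 1 = step right
q = np.zeros((n_states, n_actions))   # the agent's learned value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.3

for episode in range(500):            # trial and error, many times over
    state = 0
    while state != n_states - 1:      # only the rightmost state pays a reward
        # Explore occasionally; otherwise exploit current knowledge.
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(q[state].argmax())
        next_state = min(n_states - 1, state + 1) if action == 1 else max(0, state - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Update the estimate of long-run reward for this state/action pair.
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state

print(q.argmax(axis=1))  # the learned policy: preferred action in each state
```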
The most interesting cutting-edge developments lie in Deep Learning (DL). The principle of DL is that algorithms are presented with large volumes of data and asked to make their own decisions about how to categorise or react to what they see, perhaps in order to achieve a particular goal. Probably the most prominent area of exploration for DL is software research to enable autonomous vehicles, where the parameters under which the AI must function are potentially very diverse; training that imitates humans and makes use of large amounts of data is therefore highly appropriate. A minimal neural-network sketch follows below.
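As a minimal illustration of the deep learning mechanics (not of autonomous driving itself), the sketch below trains a small neural network in PyTorch; the random tensors stand in for a large real-world data set and the architecture is an illustrative assumption.

```python
# A minimal deep learning sketch in PyTorch: a small stacked ('deep') network
# learning a categorisation. Random tensors stand in for a large real data set.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),                     # two output classes
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(1024, 20)                 # stand-in for a large training set
y = (X.sum(dim=1) > 0).long()             # a hidden pattern to discover

for _ in range(200):                      # learn by repeated gradient descent
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()
print(f"final loss: {loss.item():.4f}")
```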
There are many component parts to AI, including hardware, software, data sources and consultancy. The main categories, working from the bottom of the stack upwards, are described below:
At the bottom of the stack, and particularly relevant for deep learning, is training data. Deep learning requires large data sets. The bigger the better, so ownership of large volumes of data becomes a differentiator for a company’s AI. This naturally triggers competition for access to the biggest and best data sets. Large general data sets are typically the preserve of hyperscale companies such as Amazon, Google and Microsoft. Others have acquired data-rich companies, such as The Weather Company in the case of IBM. Innovation in generic deep learning AI capabilities will be hard to come by for companies without access to huge data sets. However, there are many niches where there are specialist data owners.
AI requires relatively sophisticated processors to run optimally. To date, GPUs (Graphics Processing Units) have been adapted to facilitate deep learning, and a new class of ‘AI Accelerator’ has emerged: multicore processors with massive parallelism and greater computational power and efficiency than general-purpose CPUs. An interesting development has been Google’s Tensor Processing Unit (TPU), which is designed for neural networks. It provides a high volume of low-precision compute, i.e. it can process data very fast but may not match the numerical precision of a GPU, which is fine for most AI workloads, as the sketch below illustrates. Other capabilities that should be harnessed for efficiently deploying AI include High Bandwidth Memory (HBM), on-chip memory (as in the TPU), new non-volatile memory, low-latency networking and MRAM (Magnetoresistive Random Access Memory).
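The precision trade-off can be demonstrated in software; the sketch below uses PyTorch’s mixed-precision autocast on a CPU as a stand-in for what accelerator hardware such as the TPU does natively.

```python
# A sketch of the precision trade-off using PyTorch's mixed-precision autocast
# on a CPU; accelerator hardware such as the TPU applies the same idea natively.
import torch

a = torch.randn(512, 512)
b = torch.randn(512, 512)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    low_precision = a @ b          # fast, reduced-precision matrix multiply
full_precision = a @ b             # standard float32 precision

# The results differ slightly; for most AI workloads the error is acceptable.
print((low_precision.float() - full_precision).abs().max())
```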
Algorithms are the engine of machine learning. The data scientist will select a particular type of algorithm depending on the task at hand. In the same way that the broad categories of supervised, unsupervised, reinforcement and deep learning suit different broad categories of use case, so too the specific algorithm chosen will be dictated by the use case; the sketch below gives a flavour.
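As an illustrative (and far from exhaustive) example of that selection step, here is a mapping from broad task type to a commonly used scikit-learn algorithm; the pairings are conventional defaults, not prescriptions.

```python
# An illustrative (far from exhaustive) mapping from broad task type to a
# commonly used algorithm; these pairings are conventional defaults, not rules.
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

algorithm_for_task = {
    "classify labelled records":  LogisticRegression(),             # supervised
    "predict a continuous value": LinearRegression(),               # supervised
    "segment unlabelled data":    KMeans(n_clusters=3, n_init=10),  # unsupervised
    "compress many features":     PCA(n_components=2),              # unsupervised
}
for task, algorithm in algorithm_for_task.items():
    print(f"{task:28s} -> {type(algorithm).__name__}")
```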
Frameworks exist to make it quicker and easier to build AI applications. These act as a template and guide for developing, training, validating, deploying and managing the various aspects of an AI application. Prominent examples include Keras, PyTorch and TensorFlow. A minimal framework sketch follows below.
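The sketch below shows how a framework templates the define/compile/train workflow, using Keras; the tiny model and random data are illustrative assumptions, not a production recipe.

```python
# A minimal Keras sketch; the tiny model and random data are illustrative
# assumptions, not a production recipe.
import numpy as np
from tensorflow import keras

# The framework templates the whole define/compile/train/evaluate workflow.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(256, 10)                       # stand-in training data
y = (X.mean(axis=1) > 0.5).astype("float32")      # a pattern for it to learn

model.fit(X, y, epochs=10, verbose=0)             # training loop handled for us
print(model.evaluate(X, y, verbose=0))            # [loss, accuracy]
```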
AI software platforms open up the use of AI to data scientists and other business functions. They allow non-technical experts to make use of pre-built capabilities in data processing, model training and evaluation in order to rapidly accelerate deployment. Platform functions include tools for selecting the appropriate ML algorithm(s), and sometimes access to expert knowledge from the platform provider. Examples include Amazon ML/SageMaker, Google AutoML, H2O.ai Q and OpenML. A rough analogy for what such platforms automate is sketched below.
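The platform APIs named above are proprietary, so as a rough single-library analogy for automated model and parameter selection, the sketch below uses scikit-learn’s GridSearchCV.

```python
# The platform APIs above are proprietary; as a rough single-library analogy
# for automated model/parameter selection, this uses scikit-learn's GridSearchCV.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# The 'platform' tries each candidate configuration and cross-validates it.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```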
There are thousands of potential applications of AI/ML. In our recent Artificial Intelligence market forecast, we at Transforma Insights have tried to quantify the space. Below are some of the key use cases:
Robotic Process Automation (RPA) takes IT-based tasks that were previously handled manually by a human, observes them being performed and replicates them through intelligent agents. Typically this is done by bots that track human actions and aim to replicate them, as sketched below. Today RPA is highly focused on brownfield replication, i.e. automating existing processes, while its future trajectory is towards more greenfield automation of new processes. Transforma Insights publishes a Technology Insight report focused specifically on providing an introduction to the RPA market: ‘Robotic Process Automation 101’.
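A toy sketch of the record-and-replay pattern follows; real RPA suites drive live user interfaces, whereas the recorded ‘actions’ here are hypothetical placeholders purely to illustrate the idea.

```python
# A toy sketch of RPA's record-and-replay pattern. Real RPA suites drive live
# user interfaces; the recorded 'actions' below are hypothetical placeholders.
recorded_steps = [
    ("open", "invoices.csv"),
    ("extract", "total_due"),
    ("enter", "accounting_system"),
]

def replay(steps):
    """The bot repeats each observed human action, in order."""
    for action, target in steps:
        # In a real deployment this would call UI-automation primitives.
        print(f"bot performs: {action} -> {target}")

replay(recorded_steps)
```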
Natural Language Processing (NLP) is probably the single most widely deployed use of AI (not least in the context of consumer smart speakers). It deals with understanding human speech and writing so as to provide more accurate inputs into other applications (either AI or non-AI). This might be as simple as interpreting a command to a smart speaker, all the way through to analysing and interpreting sentiment about products within customer reviews, as sketched below. The long-term goal is to create a bot able to pass the ‘Turing Test’, i.e. to appear to respond as a human would.
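A minimal review-sentiment sketch using the Hugging Face transformers pipeline (which downloads a default pretrained model on first run); the review texts are invented.

```python
# A minimal sentiment sketch with the Hugging Face transformers pipeline
# (downloads a default pretrained model on first run); the reviews are invented.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
reviews = [
    "This speaker understands me perfectly, love it.",
    "Returned it after a week, it constantly mishears commands.",
]
for review in reviews:
    print(sentiment(review)[0], "-", review)   # label and confidence per review
```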
Much of AI relates to image recognition and processing, often in the form of simple exercises such as identifying pictures of cats or spotting cars parked in a prohibited location; a minimal recognition sketch follows below. Behavioural analysis is a step more sophisticated and involves interpreting images, usually video streams, to understand the behaviour of the (usually) people being observed. This can be used for detecting suspicious behaviour, tracking employees for safety purposes, or even for social credit scoring, as seen in China.
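A minimal recognition sketch with a pretrained torchvision model; the random tensor stands in for a real, preprocessed photo, so the predicted labels here are meaningless and purely illustrate the mechanics.

```python
# A minimal image-recognition sketch with a pretrained torchvision model.
# The random tensor stands in for a real, preprocessed photo, so the printed
# labels are meaningless here and purely illustrate the mechanics.
import torch
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()          # pretrained ImageNet classifier

fake_image = torch.randn(1, 3, 224, 224)          # stand-in for a 224x224 photo
with torch.no_grad():
    probs = model(fake_image).softmax(dim=1)

top = probs.topk(3)                               # three most likely classes
labels = weights.meta["categories"]               # human-readable class names
print([(labels[int(i)], round(float(p), 3)) for p, i in zip(top.values[0], top.indices[0])])
```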
Enterprise activities can be streamlined in dozens of ways, with potentially dramatic cost savings. Targets include business processes, systems, workflows, logistics, transport, human resources and numerous other areas. The aim here is to augment (and in some cases replace) human decision-making.
The financial services segment was one of the first to adopt AI due to the existence of large, accurate and comprehensive data sets, the need for efficiency and the potential return on investment. As well as the use of AI for relatively mundane activities such as credit scoring and document (e.g. credit card application) analysis, at the most extreme the use of AI in financial trading is widespread today, with the majority of financial trades being executed by trading bots.
Transforma Insights has highly granular forecasts of AI instances as part of its TAM Forecast Database. The total number of deployed AI instances will grow from 1.8 million in 2021 to 21 billion in 2030 at a CAGR of 45%. 9% of Artificial Intelligence instances will be deployed on IoT devices, particularly consumer-facing products such as Audio Visual equipment (including smart speakers) and cars.
The single biggest use case is Natural Language Processing, followed by Chatbots & Digital Assistants and Customer Behaviour Analysis.
North America accounts for 30% of global instances, China for 27% and Europe for 23%. The bigger economies generally over-index compared with more labour-intensive markets. The majority of AI instances are in consumer devices and consumer applications, reflecting the dominance of AI in smart speakers, TVs, passenger vehicles and other consumer devices.
For more information on AI forecasts, check the Artificial Intelligence Market Forecast Report 2020-2030.
While AI offers great opportunities, there are also some big potential risks.
The first and most pressing challenge relates to ‘AI bias’. There have been numerous headline-grabbing examples, including David Heinemeier Hansson’s campaign about the disparity in credit limits between him and his wife on their Apple Cards, Microsoft’s Tay bot, which was shut down for parroting bigoted views, and the HR department that was rejecting perfectly good applications for no apparent reason. To tackle the AI bias issue specifically, Transforma Insights advocates the implementation of a robust oversight layer with responsibility for objective setting and critical review, to ensure that the AI is not only doing what it’s tasked to do, but also doing what’s right.
The challenge is partly one of maturity. The use of AI is immature and will naturally become more refined over time, but it isn’t clear that AI companies will have the opportunity to learn on the job. AI already receives plenty of negative PR over how it will remove the jobs of millions of workers; while automation might be a net benefit to society, that is precious little comfort to those directly affected. We are likely to see continued push-back against AI, which may create further challenges. The concept of the ‘adversarial patch’ is an interesting one to look out for: an input crafted specifically with the intention of fooling an AI.
Another risk is more fundamental, concerning diminishing returns on investment. The last decade, particularly with the development of deep learning, has been a good one for AI. However, the history of AI is one of boom and bust. There have been two ‘AI winters’ so far, one in the 1970s and one from the mid-1980s to the mid-1990s. Each was preceded by a period of great technological progress, after which progress slowed while awaiting the next breakthrough; the arrival of Deep Learning, for instance, pulled AI out of its last winter. There is a strong possibility that the benefits stimulated by deep learning will soon be all but exhausted and that we might face a third AI winter in the 2020s.
Much of the stimulus for the last decade’s success has come from the availability of unprecedented cloud compute capability. There is no immediate trigger for the next breakthrough, although that doesn’t necessarily mean one won’t appear. Perhaps the sheer scale of Deep Learning investment will trigger progress towards AGI. Alternatively, perhaps quantum computing will do so: its building block, the qubit, isn’t limited to being just 0 or 1, which makes it highly applicable to AI. However, we could be well into the 2030s before it is harnessed effectively.
The most likely scenario is continued incremental progress of existing deep learning capabilities. Advances are being made in hardware, large investments are being made in sweating data assets, and more AI platforms are coming on stream, which helps to democratise the use of AI among more organisations. The 2020s will be an AI decade, but one based more on the wide adoption of current levels of technology into more business processes than on the next technological step.