
AI & Machine Learning

Artificial Intelligence, together with all its sub-components, is one of the most intriguing and potentially transformational of the emerging technology areas that Transforma Insights tracks.

Defining and categorising Artificial Intelligence

There are dozens of ways to define and categorise AI. One of the most common is the Artificial Intelligence/Machine Learning/Deep Learning triumvirate, which looks at the broad approaches taken. Artificial Intelligence (AI) is generally accepted to be the umbrella term for several types of activity, all aimed at mimicking human intelligence. The most commonly discussed subset is Machine Learning (ML), which is specifically about applying complex algorithms and statistical techniques to existing data to make (or inform) decisions or predictions. An important subset of ML, where much of the latest thinking is focused, is Deep Learning (DL), which combines very large data sets with Neural Networks that seek to imitate the behaviour of the human mind, for instance through the use of reinforcement learning.

[Figure: the relationship between Artificial Intelligence, Machine Learning and Deep Learning]

Another categorisation looks at the type of intelligence being developed. Artificial General Intelligence (AGI), for instance, attempts to create the capability to perform a wide range of tasks based on independent decision-making. Artificial Narrow Intelligence, in contrast, seeks to perform a specific task, often extraordinarily well. The concepts of ‘Strong’ and ‘Weak’ AI are somewhat analogous.

A third useful categorisation looks at the specific techniques used to apply AI: Supervised ML, Unsupervised ML and Reinforcement Learning, together with their associated algorithms, as well as the next progression in the form of Deep Learning.

[Figure: categories of machine learning algorithms]

Supervised ML

Most AI implementations until now have focused on Machine Learning, and specifically supervised ML: signposting for a machine the activity that needs to be performed and indicating the best ways that it might be achieved. This is the simplest to implement, the most easily understandable and the easiest to validate. It is unsurprising that most of the success stories in AI to date have focused on the automation of time-consuming yet relatively simple tasks. These promise the quickest return on investment. Good examples include legal document analysis or medical image analysis.
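
To make this concrete, below is a minimal sketch of supervised ML using scikit-learn (the data set is synthetic and purely illustrative): the model is shown labelled examples, learns a mapping from inputs to known answers, and is validated against labels it has not seen.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic labelled data: each row is a feature vector, y holds the known answer
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# "Signposting" the task: the model is told the correct label for every training example
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Validation is straightforward because predictions can be compared to known labels
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```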

Unsupervised ML

Unsupervised learning involves the analysis of unstructured and/or unlabelled data to create a framework for understanding it. The machine is not instructed how to achieve its goal, and the goal itself may be defined only loosely. Instead, it is let loose, to a greater or lesser extent, on a set of data with guidance only on the end goal, which might be as vague as imposing structure on unstructured data. The key objective of unsupervised ML is to find structure where it has not been seen before and to cluster data. An example project might be customer segmentation, where the ML is presented with a set of data about customer buying habits and told to find trends or commonalities that allow those buyers to be segmented.
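
A hedged sketch of that segmentation example using scikit-learn's k-means implementation; the spend and frequency figures are invented, and the choice of three segments is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic, unlabelled customer data: annual spend and purchase frequency
rng = np.random.default_rng(0)
spend = rng.gamma(shape=2.0, scale=500.0, size=300)
frequency = rng.poisson(lam=12, size=300)
X = StandardScaler().fit_transform(np.column_stack([spend, frequency]))

# No labels are supplied: the algorithm must find structure on its own, given only k=3
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Segment sizes:", np.bincount(kmeans.labels_))
```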

Reinforcement ML

Reinforcement learning is characterised by trial and error. The system makes decisions based on previous experience, trying a particular action and failing, often millions of times, in order to work out the best approach for maximising the ‘rewards’ it receives from its feedback system. The objective of reinforcement learning is to create a policy for future action. A good example is gaming, where the AI trains itself by playing the game and progressively improving its performance.
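
The trial-and-error loop can be illustrated with tabular Q-learning on a toy ‘corridor’ task (the environment is invented for illustration); the learned Q-table is exactly the policy for future action described above:

```python
import random

# Toy corridor: states 0..4, reaching state 4 earns the only reward
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(2000):          # many trials, most of them failures at first
    s = 0
    while s != GOAL:
        # Explore occasionally, otherwise exploit the best-known action
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Update the estimate from the reward received (the feedback system)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The resulting policy: the best-known action in each state
print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)})
```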

Deep Learning: the cutting edge

The most interesting cutting-edge developments lie in Deep Learning (DL). The principle with DL is that the algorithms are presented with large volumes of data and then asked to make their own decisions about how to categorise or react to what they see, perhaps in order to achieve a particular goal. Probably the most prominent area of exploration for DL is in software research to enable autonomous vehicles. The parameters under which the AI must function are potentially very diverse. Therefore, training that imitates humans and makes use of large amounts of data is highly appropriate.
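
A minimal PyTorch sketch of the principle: the network is given raw examples rather than hand-written rules and adjusts its own weights to fit them. Real deep learning differs mainly in scale, with vastly larger networks and data sets:

```python
import torch
import torch.nn as nn

# Synthetic task: learn y = x1 XOR x2 from raw examples, with no hand-coded rules
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# A tiny multi-layer ("deep") network; production models have millions of weights
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimiser = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()          # backpropagation: the network works out how to adjust itself
    optimiser.step()

print(model(X).detach().round())
```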

Components of AI

There are many component parts to AI, including hardware, software, data sources and consultancy. The chart below illustrates the main categories:

[Figure: the AI technology landscape]

Training data

At the bottom of the stack, and particularly relevant for deep learning, is training data. Deep learning requires large data sets, and the bigger the better, so ownership of large volumes of data becomes a differentiator for a company’s AI. This naturally triggers competition for access to the biggest and best data sets. Large general data sets are typically the preserve of hyperscale companies such as Amazon, Google and Microsoft. Others have acquired data-rich companies, such as The Weather Company in the case of IBM. Innovation in generic deep learning capabilities will be hard to come by for companies without access to huge data sets. However, there are many niches with specialist data owners.

Hardware

AI requires relatively sophisticated processors to run optimally. To date, GPUs (Graphics Processing Units) have been adapted to facilitate deep learning, and a new class of ‘AI Accelerator’ has emerged: multicore processors with massively parallel functionality, offering greater computational power and efficiency. An interesting development has been Google’s Tensor Processing Unit (TPU), which is designed for neural networks. It provides a high volume of low-precision compute, i.e. it can process data very fast but may not have the numerical precision of a GPU, which is fine for most AI. Other capabilities that should be harnessed for efficiently deploying AI include High Bandwidth Memory (HBM), on-chip memory (as in the TPU), new non-volatile memory, low-latency networking and MRAM (Magnetoresistive Random Access Memory).
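
The precision trade-off can be seen directly by comparing reduced-precision and full-precision arithmetic. A small NumPy illustration (NumPy runs on the CPU, but the effect is the same one accelerators exploit):

```python
import numpy as np

# The same dot product in 16-bit and 64-bit floating point
a = np.random.rand(10_000)
b = np.random.rand(10_000)

full = np.dot(a, b)                                        # float64 reference
low = np.dot(a.astype(np.float16), b.astype(np.float16))   # reduced precision

# The low-precision result is close enough for most ML workloads, which is why
# accelerators trade numerical precision for sheer volume of compute
print(f"float64: {full:.6f}  float16: {float(low):.6f}  error: {abs(full - low) / full:.2%}")
```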

Algorithms

Algorithms are the engine of machine learning. The data scientist will select a particular type of algorithm depending on the process that is being engaged in. In the same way that the broad categories of supervised, unsupervised, reinforcement and deep learning will be appropriate for different broad categories of use case, so too the specific type of algorithm chosen will be dictated by the use case.
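
In practice, selection often comes down to benchmarking a handful of candidate algorithms against the data. A hedged sketch using scikit-learn cross-validation on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=8, random_state=1)

# Candidate algorithms suited to different data shapes and interpretability needs
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "k_nearest_neighbours": KNeighborsClassifier(),
}

# Cross-validation gives a like-for-like score for each candidate on this use case
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```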

Frameworks and libraries

Frameworks exist to make it quicker and easier to build AI applications. They act as a template and guide for developing, training, validating, deploying and managing the various aspects of using AI. Prominent examples include Keras, PyTorch and TensorFlow.
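
To illustrate what a framework provides, the following Keras sketch defines, trains and evaluates a network in a few lines, with initialisation, backpropagation and validation all handled by the framework (the data is synthetic):

```python
import numpy as np
from tensorflow import keras

# Synthetic data standing in for a real labelled data set
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# The framework supplies layers, weight initialisation, training loop and metrics
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```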

Platforms

AI software platforms open up the use of AI beyond data scientists to other functions. They allow non-technical users to make use of pre-built capabilities in data processing, model training and evaluation in order to rapidly accelerate deployment. Platform functions include selection tools for appropriate ML algorithms and, sometimes, access to expert knowledge from the platform provider. Examples include Amazon ML/SageMaker, Google AutoML, H2O.ai Q and OpenML.

Key AI use cases

There are thousands of potential applications of AI/ML. In its recent Artificial Intelligence market forecast, Transforma Insights has tried to quantify the space. Below are some of the key use cases:

Robotic Process Automation

RPA takes IT-based tasks that were previously handled manually by a human, observes them being performed and replicates them through intelligent agents. Typically, this is by way of bots that track human action and aim to replicate it. Today RPA is highly focused on brownfield replication, i.e. automating existing processes, while its future trajectory is towards more greenfield automation of processes. Transforma Insights publishes a Technology Insight report focused specifically on providing an introduction to the RPA market: ‘Robotic Process Automation 101’.
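
For a flavour of what an RPA bot does under the hood, here is a minimal sketch using the open-source pyautogui library to replay a ‘recorded’ human workflow; the screen coordinates and form values are hypothetical:

```python
import time
import pyautogui

# A "recorded" human workflow: coordinates and values are hypothetical examples
steps = [
    ("click", (420, 310)),        # focus the invoice-number field
    ("type",  "INV-2024-0042"),   # enter the invoice number
    ("click", (420, 370)),        # focus the amount field
    ("type",  "199.99"),
    ("click", (500, 450)),        # press the Submit button
]

# The bot replays the steps a human was observed performing
for action, payload in steps:
    if action == "click":
        pyautogui.click(*payload)
    elif action == "type":
        pyautogui.write(payload, interval=0.05)
    time.sleep(0.5)               # crude pacing so the target application can keep up
```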

Natural Language Processing (NLP)

NLP is probably the single most widely deployed use of AI (not least in the context of consumer smart speakers), dealing with understanding human speech and writing so as to provide more accurate inputs into other applications (AI or otherwise). This might be as simple as interpreting a command to a smart speaker, all the way through to analysing and interpreting sentiment about products in customer reviews. The end goal is to create a bot able to pass the ‘Turing Test’, i.e. to appear to respond as a human would.
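
The review-sentiment example can be sketched as a simple bag-of-words pipeline in scikit-learn; the training reviews below are invented, and production NLP would use far richer models:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, invented set of labelled reviews (1 = positive, 0 = negative)
reviews = [
    "great product, works perfectly", "terrible quality, broke in a day",
    "excellent value and fast delivery", "awful, would not recommend",
    "really happy with this purchase", "disappointing and overpriced",
]
labels = [1, 0, 1, 0, 1, 0]

# Vectorise the text, then learn which words signal which sentiment
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["the delivery was fast and the product is great"]))
```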

Behavioural analysis

Much of AI relates to image recognition and processing, often in the form of simple exercises such as identifying pictures of cats or spotting cars parked in a prohibited location. Behavioural analysis is a step more sophisticated: it involves interpreting images, usually video streams, to understand the behaviour of the subjects, usually people, being observed. This can be used for detecting suspicious behaviour, tracking employees for safety purposes, or even for social credit scoring, as seen in China.
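
Interpreting behaviour starts with extracting movement from a video stream. The OpenCV sketch below flags motion between consecutive frames, the crude first step on which behavioural classifiers are built; the camera index and thresholds are illustrative:

```python
import cv2

# Frame differencing: the simplest way to extract movement from a video stream
cap = cv2.VideoCapture(0)              # 0 = default camera (illustrative source)
ok, frame = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

for _ in range(300):                   # examine a few seconds of video
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    delta = cv2.absdiff(prev, gray)
    _, thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Large moving regions become candidates for behavioural interpretation downstream
    if any(cv2.contourArea(c) > 500 for c in contours):
        print("motion detected")
    prev = gray

cap.release()
```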

Optimisation

There are dozens of ways in which enterprise activities can be streamlined, with potentially dramatic cost savings. These might include business processes, systems, workflows, logistics, transport, human resources and numerous other areas. The aim here is to augment (and in some cases replace) human decision-making.
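
Many such problems reduce to well-understood mathematical forms. The sketch below uses SciPy's linear programming solver on an invented transport example, minimising shipping cost subject to supply and demand constraints:

```python
from scipy.optimize import linprog

# Invented example: ship goods from 2 warehouses to 2 stores at minimum cost.
# Variables: x = [w1->s1, w1->s2, w2->s1, w2->s2]; c holds the per-unit costs.
c = [4, 6, 5, 3]

# Supply limits: each warehouse can ship at most 80 units (A_ub @ x <= b_ub)
A_ub = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_ub = [80, 80]

# Demand: each store must receive exactly its requirement (A_eq @ x == b_eq)
A_eq = [[1, 0, 1, 0],
        [0, 1, 0, 1]]
b_eq = [60, 70]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("Minimum cost:", res.fun, "shipments:", res.x)
```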

Financial services

The financial services segment was one of the first to adopt AI, due to the existence of large, accurate and comprehensive data sets, the need for efficiency and the potential ROI. As well as the use of AI for relatively mundane activities such as credit scoring and document analysis (e.g. credit card applications), at the most extreme the use of AI in financial trading is widespread today, with the majority of financial trades being executed by trading bots.

The growing adoption of AI

Transforma Insights has highly granular forecasts of AI instances as part of its TAM Forecast Database. The total number of deployed AI instances will grow from 1.8 million in 2021 to 21 billion in 2030, a CAGR of 45%. Some 9% of Artificial Intelligence instances will be deployed on IoT devices, particularly consumer-facing products such as Audio Visual equipment (including smart speakers) and cars.

[Figure: AI instances by device type, 2020-2030]

The single biggest use case is Natural Language Processing, followed by Chatbots & Digital Assistance and Customer Behaviour Analysis.

[Figure: AI instances by use case, 2030]

North America accounts for 30% of global instances, China for 27% and Europe for 23%. The bigger economies generally over-index compared with labour-intensive markets. The majority of AI instances are in consumer devices and consumer applications, reflecting the dominance of AI in smart speakers, TVs, passenger vehicles and other consumer devices.

[Figure: AI instances by region and enterprise/consumer split, 2030]

For more information on AI forecasts, check the Artificial Intelligence Market Forecast Report 2020-2030.

AI challenges

While AI offers great opportunities, there are also some big potential risks.

AI bias

The first and most pressing challenge relates to ‘AI bias’. There have been numerous headline-grabbing examples, including David Heinemeier Hansson’s campaign about the disparity in credit limits between him and his wife on their Apple Cards, Microsoft’s Tay bot, which was shut down for parroting bigoted views, and the HR department that was rejecting perfectly good applications for no apparent reason. To tackle the AI bias issue specifically, Transforma Insights advocates the implementation of a robust oversight layer with responsibility for objective setting and critical review, to ensure that the AI is not only doing what it’s tasked to do, but also doing what’s right.

[Figure: Transforma Insights’ AI oversight layers]

The challenge is partly one of maturity. The use of AI is immature and will naturally become more refined over time, but it isn’t clear that AI companies will have the opportunity to learn on the job. AI already receives plenty of negative PR over how it will remove the jobs of millions of workers. While it might be a net benefit to society, that is precious little comfort to those directly affected. We are likely to see continued push-back against AI, which may create further challenges. The concept of the ‘adversarial patch’ is an interesting one to look out for: something created specifically with the intention of fooling an AI.

The next AI winter?

Another risk is more fundamental, concerning diminishing returns on investment. The last decade, particularly with the development of deep learning, has been a good one for AI. However, the history of AI is one of boom and bust. There have been two ‘AI winters’ so far: in the 1970s, and from the mid-80s to the mid-90s. Each was preceded by a period of great technological progress, after which progress slowed while awaiting the next breakthrough. The arrival of Deep Learning, for instance, pulled AI out of its last winter. There is a strong possibility that the benefits stimulated by deep learning will soon be all but exhausted and we might face a third AI winter in the 2020s.

[Figure: the history of AI winters]

Much of the stimulus for the last decade’s success has come from the availability of unprecedented cloud compute capability. There is no immediate trigger for the next breakthrough, although that doesn’t necessarily mean it won’t appear. Perhaps the sheer scale of Deep Learning investments will trigger progress towards AGI. Alternatively, perhaps quantum computing will do the same. Its building block, the qubit, isn’t limited to being just 0 or 1, which makes it highly applicable to AI. However, we could be well into the 2030s before it is harnessed effectively.

The most likely scenario is continued incremental progress of existing deep learning capabilities. Advances are being made in hardware, large investments are being made in sweating data assets, and more AI platforms are coming on stream, which helps to democratise the use of AI among more organisations. The 2020s will be an AI decade, but it may be based more on the wide adoption of current technology into more business processes than on a next technological step.
