The Applied Intelligence Live conference, held in Austin, Texas, in September 2023, was a hub of discussions about the transformative power of generative Artificial Intelligence (AI). Step back from the usual hubbub of a commercial conference, however, and an awkward moment sat near the surface, like an odd couple meeting at a revolving door, simultaneously saying “after you,” then both jumping in. In short, panelists and attendees alike grappled with the critically important, but immature, partnership between the new AI technology and people. Descriptions of what comes next, in terms of the ethical and practical impacts the technology will deliver, ranged from visionary aspirations to raw dread. In this blog post, we explore that peculiar pairing of opposing sentiments, as well as five core themes that emerged during the event.
In the past 50 years, AI has evolved from its origins in academic circles to a prominent role in the business world. It wasn’t until the late 2000s that discussions about its practical business applications gained traction. Over the past decade, AI has undergone a remarkable transformation, with real-world use cases accelerating from just a few to several hundred. Then, with the November 30, 2022 launch of the Chat Generative Pre-trained Transformer (ChatGPT), AI became a worldwide phenomenon, breaching the divide between academia and the public consciousness.
This rapid pace underscores the increasing roles AI now plays in reshaping industries and optimizing various processes. It has also revealed the lack of meaningful guardrails for the technology. Governments, standards organizations, and regulators are all struggling to adopt meaningful governance to keep pace with AI’s significant potential and risks.
There is currently a rush to focus on the instrumental aspects of AI: its status as a tool and a means to many ends; in short, its uses. At Applied Intelligence Live, however, AI was interrogated not only for its good or bad utility, but for what it reveals about human nature and ethics. Questions like ‘What does it mean to be human in a world of AI?’ were raised on stage, and just as readily fumbled away. The question posed by the German philosopher Martin Heidegger, “What has the essence of technology to do with revealing? The answer: everything,” was a summation of the questions asked. However, as ExoFusion CEO Romi Mahajan noted, at an event full of people with a “profession bound up in technology,” the perspective on offer was a positivist one rather than a broad range of perspectives.
In many of the discussions, AI’s intersection with people, with their messy impulses and ethics, was met with a mixture of hope, urgency, and skepticism. Several speakers described AI as at a tipping point, poised to reshape the world in fundamental ways. As a stark example: on the one hand, AI could dramatically accelerate the search for cancer cures; on the other, it could just as dramatically speed the manufacture of deadly and virulent viruses. In other words, it can do things both good and bad for people, while also holding a mirror up to the complexity of human nature.
At the event, for example, Gisele Waters, PhD, Working Group Chair for the Institute of Electrical and Electronics Engineers (IEEE) P3119 AI Procurement standard, described developing an IEEE standard for public entities that use AI to make decisions affecting people’s lives, such as public benefits for the elderly, the unemployed, marginalized communities, and those with basic food needs. Her perspective is informed by design research, care ethics, and socio-technical challenges.
However, where ethical considerations did take the stage at Applied Intelligence Live, it was clear commercial interests framed the discussion and held the spotlight. AI regulations were deemed necessary, such as the United Nations’ cooperation with the Organisation for Economic Co-operation and Development (OECD) on principles and classifications of AI systems. It was noted, however, that regulation favors larger companies, which can afford the costs of regulatory processes, while smaller companies will be hampered.
The internationalization of ethical standards and guardrails came up repeatedly, but a deep skepticism accompanied those discussions. A meaningful governance framework would require buy-in from every country pursuing AI, which, at this time, is not considered realistic. Instead, the uncomfortable idea surfaced that something truly terrible will likely need to happen with AI before worldwide governance emerges.
The “Blueprint for an AI Bill of Rights,” released by the United States White House, was referenced as a first step, but also, frankly, as a toothless document with no enforcement power. The European Union (EU), through the General Data Protection Regulation (GDPR), was offered up as leading the charge on regulation. AI-specific regulations, however, are lagging worldwide.
More pointedly, the United States is unlikely to adopt regulations knowing that China, Russia, and other countries would not join a regulatory framework. In short, where AI-related innovation and international security are at play, it is clear meaningful, comprehensive AI regulation is not on the near horizon.
Another notable theme revolved around the potential of multi-purpose chatbots. Chatbots were acknowledged for their versatility but were also seen as a work in progress. Attendees stressed the importance of enhancing chatbots to become more inclusive and human-like.
To that end, Michael Shephard, Sr. Distinguished Engineer at Dell Technologies, presented an avatar, Clara, who responded to questions while Shephard was on stage. The proposed evolution would converge AI, avatars, and chatbots to deepen understanding of semantic nuances, emotions, and multiple languages. The aim is to create more natural and engaging interactions between users and chatbots.
As the interface between people and AI-powered bots and avatars advances, specifics around the recognition of emotions and semantic nuances based on languages, dialects, ethnic, and regional differences will be explored.
The conference didn’t just dwell on the anxieties and enthusiasms of the present; it also looked to the future. Attendees predicted a significant transformation of industries and work over the next five to 15 years as AI continues to advance. They envisioned a world where AI accelerates other paradigm-shifting technologies such as digital twins, personalized medicine, avatars, and flying cars. After all, as this was a technology event, the promise of AI as a saving power and a launch pad for a new and fulfilling era, a modern-day Renaissance, had its moments.
As in the past, when other technologies were introduced to organizations, new titles were created. Where titles were once tagged with ‘cloud’ or ‘network,’ an increasing number today add ‘transformation’ and ‘AI.’ Even so, AI initiatives were generally perceived as disruptive to core organizational processes, and seeking outside expertise was considered wise. AI touches on fundamental technologies in firms and, in doing so, organizes a firm’s knowledge in new ways. These new flows of knowledge can either increase an organization’s value creation or result in missed opportunities. As such, grounding an organization’s use of AI in tangible use cases was stressed: without a clear business case, launching an AI initiative was considered a waste of resources and poor strategy.
As AI propels the convergence of technologies like the Internet of Things (IoT) and quantum computing in a feedback loop, acceleration will proceed at an unparalleled pace. In this environment, understanding “who did what and who owns what” will grow in importance. In the nearer term, the expectation is that the AI rush will further shake out winners and losers as proofs of concept (POCs) collide with AI-based minimum viable products (MVPs).
As large language models (LLMs) and their training processes scale further into the many trillions of data points, AI was predicted to become increasingly human-like. Such rapid change would necessitate a reevaluation of industry norms and rules. The impact of AI on employment was also a key concern, as questions arose about how humans would adapt to an evolving job market. The idea that AI will replace people in entire professions, putting large groups out of work, was a routine note in sessions at the conference. During the ominously titled (but well-attended) headliner session “How Close to the Terminator Are We: The Future Impact of Quantum Computing and AI,” Strangeworks CEO William Hurley said that, to “rattle some cages,” he had previously suggested there would be total unemployment by the year 2060, though he remains skeptical of that timeline. Nonetheless, citing job losses at IBM and a report by Goldman Sachs, Hurley noted that “job loss to AI is already happening.”
What could happen if job loss to AI efficiency does surge? Hurley said universal basic income (UBI) could be a solution, with people increasingly pursuing creative endeavors. At the current moment, said fellow panelist Romi Mahajan, CEO of ExoFusion, AI is a liberatory technology in a static world. The problem, he said, is that “we are bereft of new ideas” needed to create meaningful lives for people whose job roles are replaced by AI.
In the next few years, the vital role of education and training programs in the AI landscape is expected to generate a lot of buzz. The term “Prompt Engineer” is likely to become standard fare on resumés as workers position their knowledge of AI prompting within their areas of expertise and market verticals.
In an era where AI's integration into everyday life and work is becoming increasingly prevalent, staying informed about AI developments and trends is paramount. Both individuals and businesses are urged to become well-versed in AI technologies and practices to effectively navigate the changing AI landscape.
The Applied Intelligence Live 2023 conference provided a platform for in-depth exploration of AI’s multifaceted nature. It also served as a nexus for anxieties about AI to emerge. From AI’s evolution from academia to industry, to ethical considerations, the potential of chatbots, the future impact on industries and work, and the critical importance of education, these themes shed light on the complex world of AI.
In 2018, the 270-year reign of the First, Second, and Third Industrial Eras ended. The Fourth and final industrial revolution, Industry 4.0, simultaneously kicked off the Imagination Era. That, perhaps, is one way to encompass the paradoxes AI brought to the surface at the event. The advent of society-wide AI sits in an unsettled time between eras, an interregnum, thrashing about as it reaches for the future.
It is clear that the stakeholders in the discussion about AI include, well, everyone. During the panel discussion on the “Terminator” potential of AI, Mahajan pointed out, simply, that 95% of the people in the room were, though well-meaning, self-interested professional technologists. “To put it crassly, we make our money in that,” he said. The perspectives of the people who will be impacted, the nurses, community activists, authors, and the unemployed, need to be on the next stage. The session’s attendees applauded.
Harnessing the potential of AI while ensuring it aligns with core human values remains both an aspiration and a source of deep anxiety. As yet, the two core aspects of the AI future, its complex technology and the complex humans using it, are not in sync. The partnership is new, and like new partners in a dance, the resulting movements are promising but uncomfortably spasmodic. The essential questions, the pace, rhythm, and tune of this dance, remain unanswered. Finally, who (or what) will lead, and who will follow?