AI: The year of the reality check

Prominent tech leaders have touted AI as more profound than fire or electricity, and as marking a “seismic moment” in the history of technology. The hype is reflected in the lofty valuations of AI companies. However, after the initial wow-factor, ChatGPT usage has plateaued, raising questions about the pace of adoption and the areas of deepest impact. Are we in an AI bubble? Could large language models end up as mere glorified stenographers? Or might a public backlash arise if generative AI enables mass manipulation, blackmail and sophisticated cybercrime?

How fast is AI adoption?

Consumer product innovations integrating AI have been limited so far. At CES 2024, Samsung’s new personal assistant robot with a built-in projector was a main attraction, but it was still at the prototype stage, with no clear roadmap to commercialisation. There were some useful innovations aimed at vulnerable populations, such as smart wheelchairs and AI-powered assistive robots for people with mobility impairments, but these appeal to niche markets. In entertainment, innovations like AI-generated artwork or an AI DJ system appeared gimmicky rather than revolutionary. More promising were AI optimisations in automotive, such as a collision-avoidance platform that uses AI to detect dangerous driving, or AI-powered analysis to assess vehicle damage. But for those expecting LLMs to make technology feel human and deliver magical consumer experiences around digital assistants, smarter search and creative tools, CES may have been underwhelming. One week later, Samsung’s unveiling of its latest Galaxy S24 smartphone seemed more appealing, with AI-native capabilities such as real-time translation and visual search. But how actual usage of these features develops remains to be seen.

Another reason for slow adoption may be questions of accountability. AI confined to the virtual realm has seen rapid adoption in areas like coding, creative content, office work and marketing. However, liability and safety concerns may slow adoption where AI can physically affect human lives. For example, who is responsible when an AI system contributes to a mistake in medical surgery? Was the doctor negligent in applying the AI outside its expected parameters, or were there flaws in the initial design of the AI system? Causation may be difficult to establish conclusively, and there is no history of case law to draw on. Without visibility into how AI systems infer conclusions from massive datasets, their failure modes remain poorly understood.

The opportunity…and the danger

The hype around AI echoes the dot-com bubble of the early 2000s, when internet companies were assumed to hold massive profit potential. However, transformative technologies do not always guarantee lucrative consumer markets. AI may follow a similar pattern. The most significant monetisation opportunities could lie in less visible B2B implementations that increase efficiency behind the scenes, such as R&D for drug discovery, lights-out manufacturing, or risk management. Our recent conversations with IT services companies that deploy these technologies at scale confirm this view. AI is about to enter an innovation cycle similar to the development of the cloud fifteen years ago. According to Accenture, only 10% of companies are AI-ready. To reap the full benefits of the technology, a company needs an exploitable dataset, which implies substantial preparatory work, sometimes structural, before any deployment. This could prove a gold mine for Accenture and its competitors, and a source of first-mover advantage for companies that already lead in IT.

A new study by Cognizant reveals AI’s astounding potential. GenAI could inject up to $1 trillion into the US economy by 2032, lifting GDP by over 3 percentage points annually through gains in labour productivity from automation and data insights. A factory manager could leverage AI to analyse data, identify efficiencies and generate reports, automating half of their duties. Customer service reps could see “digital people” take over routine inquiries, increasing the number of cases they can handle per day. The analysis forecasts that over 90% of jobs will be affected by AI, with 9% fully displaced. Unlike past automation, which focussed on manual tasks, GenAI is poised to transform knowledge work, from entry-level data analysts to C-suite executives, a shift that must be carefully managed.

Political ramifications

The labour market is not the only aspect of our society set to be profoundly changed. The rise of GenAI has raised concerns about mass manipulation and cognitive warfare. The elections in Taiwan at the beginning of this year provided a preview of the dangers ahead. There, pro-China actors allegedly used GenAI to create fake polls and fabricate documents to discredit candidates before disseminating them on social media. Fake accounts controlled by bots can instantly spread false narratives to millions, working together to game algorithms and elevate harmful content. Generative AI makes these bots capable of original, personalised messaging that appears human. AI-generated deepfakes of candidates manipulate visual media, while AI text models generate persuasive fake news articles or fraudulent social media posts customised to specific audiences. The content is designed to go viral by stoking outrage and triggering engagement. To avoid fact-checking and censorship on social media, disinformation campaigns have been shown to subtly blend propaganda with the truth: they reference real events and then twist the narrative in China’s favour, such as framing it as a peacemaker while portraying the US as warmongering. This blending of fact and fiction manipulates public opinion while evading scrutiny.

In Taiwan, it was reassuring to see China-hawk Lai Ching-te elected despite all this. But with over half of the world’s population called to the polls this year, including in India, the EU and the US, 2024 will be a defining year for democracy. If generative AI unleashes widespread personalised disinformation, it could severely compromise electoral integrity. Implementing protections against mass deception will be critical. It is a key reason why policymakers around the world are rapidly coalescing around efforts to regulate AI before it can be weaponised to undermine democracy on an unprecedented scale.

Lessons to learn from social media

Unlike social media, AI may not enjoy a lengthy grace period before regulatory scrutiny. The hands-off approach to regulating social media was a misstep with dire consequences. Social media have precipitated alarming declines in mental health, especially among teenagers. Rates of depression, anxiety, loneliness and suicidal ideation have surged in tandem with the proliferation of smartphones and social media addiction. In the US, 10% of teens see friends in person once a month or less, 42% of high school students report persistent sadness and hopelessness, and 22% have seriously considered suicide. These disturbing statistics have risen sharply in just the past few years.

Generative AI raises the perils of social media to new heights by supercharging the tools of engagement and addiction. Rather than providing a playbook, the social media experience sounds an urgent alarm for regulating big tech. Geopolitical factors may further propel policymakers to act as AI amasses vast datasets, including in politically sensitive areas like education, the military, critical infrastructure, employment and product safety.

Taking all this into consideration, we believe rapid regulatory proposals and escalating litigation signal gathering headwinds for tech giants in 2024. With unease mounting over social media and uncontrolled AI, assertive government and court interventions now look like the major source of risk for big tech investors. Will AI deliver on its potential? Possibly. Will AI deliver on its potential with its dangers mitigated? That will be a far greater challenge.
