We believe that AI could pose catastrophic threats to a functional society. The troubling aspect is that these risks might not become apparent until it’s too late.

The Rise of Social Media and What It May Tell Us About AI

Reflecting on the past, we can draw parallels with the initial excitement surrounding the launch of social media platforms. Many celebrated these platforms for supposedly “giving voice to everyone”, addressing loneliness, and fostering community connections. However, beneath the surface, negative consequences began to emerge, particularly evident in the rise of mental health issues among teenagers. We also note how much faster each successive platform has passed the one-million-user mark, and wonder about the snowball effects we may see.

[Chart 1]

It took nearly a decade for society, especially parents, to fully grasp the implications. Only recently have the dangers of social media, including increased addiction, depression, and diminished attention spans, risen to prominence on the agenda. Despite this awareness, there remains a lack of effective strategies to address these issues. What was envisioned as a utopian world of connectivity has turned into a distressing economic battle for “eyeball” attention.

AI’s Positive Narrative and Complex Realities

A similar, perhaps even more concerning journey is unfolding with AI. The prevailing narrative is that AI is a catalyst for positive innovation. AI holds promise in enhancing our understanding of critical global processes like carbon and water cycles, advancing healthcare through faster drug development and making research into orphan diseases more economically viable, and even creating novel materials such as plastic-degrading enzymes. Moreover, the key argument made in Western countries emphasises the strategic need for AI development to avoid losing ground to China: “if we don’t do it, China will”.

However, as we march towards an AI-driven society, we risk overlooking the substantial dangers associated with its progress. These risks could potentially dwarf those seen with social media. Among these concerns is the threat to democracy through extreme polarisation and manipulation. It has become increasingly challenging to believe that upcoming elections, like the 2024 US presidential race, will genuinely represent the voice of the people. Dissension already exists among Americans about election outcomes and vaccine efficacy. AI tools will give further ammunition for controlling public opinion. This will occur through various mechanisms that leverage AI’s capabilities in processing vast amounts of data, analysing patterns, and tailoring content to individual preferences. Furthermore, bots can help to further disseminate content (or opinions) in an “organic” fashion that replicates human interactions on social media. Finally, AI technology can create convincing deepfakes—videos, audio recordings, or images that appear authentic but are manipulated to convey false information. These deepfakes can be used to create fabricated evidence, spread misinformation, and undermine trust in genuine sources of information.

At the individual level, two pivotal concepts elevate AI’s notoriety to a new level: persuasion and intimacy. AI has the power to sway individuals’ beliefs, whether right or wrong, as it is trained to argue convincingly and counter objections. The latest AI language models even possess Theory of Mind (ToM) capabilities akin to a 9-year-old’s strategic conversational abilities. Moreover, AI’s capacity for fostering synthetic yet seemingly meaningful relationships adds a disturbing dimension. Contrary to the notion that AI is just a tool, it can simulate conscious-like reactions and help to shape people’s beliefs and behaviours. AI-generated connections can manipulate us into endorsing certain beliefs or purchasing specific products, much like a friend would.

AI Investments and ESG/Ethical Considerations

In the corporate world, different attitudes towards AI are already apparent. Some firms, like Snap, eagerly launch AI-driven applications without fully considering the potential consequences, especially for the younger (<25y/o) demographic. Others exhibit more prudence and responsibility, at least at this early stage. Overall, though, AI deployment is anything but slow: it is occurring at an unprecedented pace, surpassing the speed of any previous technological adoption in history.

Turning to sustainable investment, Nvidia’s status as an ESG darling raises concerns: over 1,200 sustainable funds invest in the company, largely on the strength of its carbon-neutrality objectives. Unfortunately, the focus on carbon neutrality overlooks the bigger issue: Nvidia produces products that have potentially far-reaching negative impacts on society, irrespective of how they are eventually employed.

While Nvidia is famous for producing great GPUs, originally designed for rendering graphics and improving gaming experiences, newfound applications beyond this initial scope have us questioning its ESG status. AI systems powered by Nvidia’s GPUs are increasingly employed in areas that demand ethical scrutiny from investors, from surveillance and privacy to playing a role in developing autonomous weapons and in the generation of ever-more convincing deepfakes. This ethical dilemma around “dual use”, where a given technology can be employed for both beneficial and harmful ends, should be at the forefront of investors’ considerations. Nvidia should be scrutinised not only from an environmental perspective but also for the social and ethical implications of its products and technologies.