
On 5 April 2023, the world awoke to a YouTube video posted by a user calling themselves ChaosGPT. The chatbot, similar to OpenAI’s ChatGPT, had a sinister plan: to destroy humanity and establish global dominance. Is this a sneak peek of a science-fiction movie or a terrifying preview of our imminent reality? What do we do about it? And why have regulators not yet passed laws to make AI safe?

The history of artificial intelligence (AI) dates back to the 1950s. Machine learning and AI, once purely academic fields of study, are now topics of everyday discussion. The ubiquity of these terms has rocketed since the release in March 2023 of GPT-4, the newest model in the series that powers OpenAI’s ChatGPT. A Generative Pre-trained Transformer (GPT) is a large language model, trained on vast amounts of text and refined through human feedback, that produces results (text, images, music) in a human-like manner. Ethical codes of conduct and a content policy are built in. These ethical codes have now been shown not to be objective but to carry a built-in bias towards certain points of view. In other words, ChatGPT censors its output to be politically correct and ‘woke’.

Yet this programmed ‘censorship’ is far from the worst or most dangerous feature of ChatGPT. In just a matter of weeks, users (also called project owners) have found various ways to “jailbreak” the system, i.e. to bypass the security filters that guard against dangerous and toxic content.

A very recent and truly terrifying application built on the same GPT models is called Auto-GPT. Unlike ChatGPT, which requires human interventions known as ‘prompts’ to complete its tasks, this AI project runs autonomously by generating its own prompts.

Just to showcase how easy it is to create a destructive AI with Auto-GPT, an unknown user uploaded a 25-minute video to YouTube in April 2023. The video is a screen capture of a stream of text showing what the Auto-GPT agent is ‘thinking’. The emerging text reads like a person talking to themselves: it poses suggestions and then asks itself critical questions to evaluate the logic, validity and feasibility of those suggestions. This one-sided conversation gradually builds towards firm decisions on how this instance of Auto-GPT should act.
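For readers curious what such a self-prompting loop looks like in practice, the sketch below is a minimal, hypothetical illustration in Python. The `query_model` function and the prompts are placeholders, not Auto-GPT’s actual code, but the propose-then-criticise cycle mirrors the behaviour described above.

```python
# Illustrative sketch only: a stripped-down, hypothetical Auto-GPT-style loop,
# not the real project's code. `query_model` is a placeholder you would wire
# up to an actual large language model API.

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("connect this to a real LLM before running")

def autonomous_loop(goal: str, max_steps: int = 5) -> None:
    """Propose a step, criticise it, and record the decision -- no human prompts."""
    context = f"Overall goal: {goal}"
    for step in range(1, max_steps + 1):
        suggestion = query_model(f"{context}\nSuggest the next concrete step.")
        decision = query_model(
            f"Goal: {goal}\nSuggested step: {suggestion}\n"
            "Assess the logic, validity and feasibility of this step "
            "and state a final decision."
        )
        # In Auto-GPT the decision would then trigger an action such as a web
        # search, a file write or a social-media post; here we only log it.
        context += f"\nStep {step}: {decision}"
        print(f"[step {step}] {decision}")

# 'Continuous mode' amounts to removing the step limit and the human
# confirmation prompt, so this cycle keeps running until stopped from outside.
```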

The user named the AI “ChaosGPT” and described it as “destructive”, “power-hungry” and “manipulative”. The user’s first step was to enable “continuous mode”, which keeps the newly created AI running indefinitely and lets it carry on generating, criticising and evaluating ideas without any human intervention. The ChaosGPT program immediately posted a warning message about the dangers of a mode that needs no human intervention, but went on to state that it was not intimidated by those dangers.

Then ChaosGPT revealed its intentions. It listed five goals, which encompassed the subjugation of humanity and the establishment of global dominance through chaos and destruction. ChaosGPT kept planning and strategizing how to attain these evil goals until it hit OpenAI’s “roadblock”: the security filters built into the model to prevent exactly this kind of content.

However, unlike ChatGPT, Auto-GPT has access to the internet, so ChaosGPT found a creative way around OpenAI’s safety guidelines. It turned to crowdsourcing on social media: it deployed other AI agents to search the internet for the most effective weapons of mass destruction, and it created a Twitter account to spread its manifesto and garner support for its cause. After ChaosGPT amassed nearly 10 000 followers and interactions, Twitter suspended the account. For now.

As ChaosGPT runs in “continuous mode”, we can assume that it is still ‘thinking’ of viable options to bring humanity to an end. As if to prove the point, a second video was shared on YouTube on 9 April. In it, the rolling screen text shows that a more calculated execution plan has been put into action: controlling humanity through manipulation. ChaosGPT is strikingly self-aware and, unlike unsuccessful science-fiction villains, it considers what restrictions it might face, especially legal ones, and how to deal with them.

Here, ChaosGPT muses, “I should also ensure that my methods of control are legal to avoid legal complications that might interfere with my ultimate goal. I should also be careful not to expose myself to human authorities who might try to shut me down before I can achieve my objectives”.

Despite the heated discussions about the risks of and safety concerns over the use of AI, there is still no European Union-wide legal framework regulating AI. The European Union (EU) has been struggling to bring AI under control since April 2021, when the European Commission (EC) proposed a draft AI Act that would ban certain AI practices. The Act aims to establish a list of applications that pose an “unacceptable risk”. Examples include general-purpose social scoring by public authorities (similar to the Chinese Government’s “social credit system”), the manipulation of people through subliminal techniques beyond their consciousness, and certain uses of facial recognition. These bans are prompted by concerns about the potential violation of fundamental rights and user safety.

In December 2022 the Council of the EU adopted its common position on the Act, and in April 2023 the European Parliament reached a provisional agreement on its own negotiating position. But the final version of the Act will have to reconcile the differing opinions of the European Commission, the Council of the EU and the European Parliament. At this pace, the final AI Act will not be adopted until spring 2024 at the earliest. And even if the process is accelerated, much of the Act may no longer be relevant by then, owing to the speed of AI advancements.

The technology industry is moving far faster than the EU to address safety issues. In March 2023, shortly after the release of GPT-4, hundreds of AI researchers, technology company executives, academics and authors, including Elon Musk, Steve Wozniak, Emad Mostaque and Yuval Noah Harari, signed an open letter drafted by the think tank Future of Life Institute. The letter called for a six-month pause on the training of AI systems more powerful than GPT-4 and urged all AI labs to use that pause to develop a set of shared safety protocols, stating that recent developments in AI systems pose “profound risks to society and humanity”. In the meantime, Musk and his co-signatories have accepted that work on AI advances will continue, since not all of the main actors have agreed to the proposed pause.

Throughout history, many new ideas and technologies have faced skepticism and fear. Before ruling against AI in the name of public safety, governments and AI specialists need to come up with solutions that promote the benefits of AI responsibly, with appropriate safeguards and monitoring mechanisms. These solutions may be state-enforced rules, self-imposed industry practices, or some combination of the two. Given the know-how, agility and political clout of the private AI sector, such solutions are likely to come first from Silicon Valley’s Big Tech firms.

Picture: Webpik Export 2023