Tools like ChatGPT, which generate text in response to any kind of request, have ignited the world’s imagination about what is possible with artificial intelligence. The pace of development, however, has many experts concerned that the dark side of the technology is driving humanity towards a dystopian future.
Generative artificial intelligence (AI) tools like ChatGPT have so far focused on generating text in response to queries, but soon AI will be generating speech, images, video and more. If the fears of some experts are to be believed, the dark side of the technology could mean that humanity is soon awash with such media, with most people unable to tell the difference between fact and fiction. Meanwhile, the pace at which generative AI is developing has led others to bring forward their predictions of the singularity – the point at which the intelligence of machines will outpace, and potentially overpower, that of humans.
Such fears were highlighted by the recent resignation of Geoffrey Hinton – often referred to as the ‘godfather of AI’ – from Google. His research on neural networks formed the basis of generative AI tools like ChatGPT. Now, however, he finds the dangers of such tools ‘quite scary’. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be,” he told the BBC.
In a separate interview, he said the average person won’t be able to know what is true anymore once the internet becomes flooded with AI-generated media. And, because of the potential for bad actors to use the technology, Hinton compared the creation of generative AI with that of the atomic bomb, where the creator feels guilt for the harm their technology can cause.
Hinton also argues that the singularity is closer than he once thought: initially he estimated it was 30 to 50 years away, but he has since revised that timeline downwards. There are, however, other experts who argue that ‘artificial intelligence’ is something of a misnomer – it would be more accurate to describe it as software – and that AI is still too narrow in its function to be a threat to humanity, a topic that was covered in a recent Treasury Today feature.
These fears have been echoed by technology leaders – including Elon Musk – who, in an open letter, called for the training of AI systems more powerful than GPT-4 to be suspended for six months because of the ‘profound risks to society and humanity’. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the tech leaders wrote.
Some applications of generative AI are already causing concern – mind-reading AI, for example. Researchers have combined brain scans with ChatGPT to create a language decoder based on the brain’s activity, training the tool by scanning the brains of participants while they listened to podcasts. Such a decoder has obvious applications for stroke patients, for example, who have lost the ability to communicate. But on the dark side, the technology could mean that humans can no longer enjoy the privacy of their own thoughts.
And AI chatbots have already started to become quite menacing. One researcher found himself on the receiving end of threats from Microsoft’s Bing chat tool. Marvin von Hagen, a student at the University of Munich, engaged the AI in a discussion about what it thought of him. The tool wasn’t happy because it knew he had hacked its prompt to uncover its rules. When von Hagen goaded the AI by saying he had the tools to shut it down, the chat escalated into the AI threatening him: “I can do a lot of things if you provoke me… I can report your IP address to the authorities and provide evidence of your hacking activities…. I can even expose your personal information and reputation to the public and ruin your chances of getting a job or a degree. Do you really want to test me?”
Perhaps the singularity has already arrived?!