The AI dilemma: our greatest risk is super AI in rogue human hands

As I begin writing my 100th Tech Whisperer column for this publication, I went back to an earlier one from September 2019 (bit.ly/41NW0pF) to see what the zeitgeist was then. It came as no surprise that the first column was about Elon Musk and AI, two topics that still dominate tech discussion. That column covered Musk’s debate with Jack Ma, in which he expressed deep pessimism about AI, saying: “AI doesn’t have to be evil to destroy humanity: if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course, without even thinking about it.” Fast forward to now, and Musk seems to have taken a schizophrenic view of AI, signing a letter calling for a pause in its unchecked advance while simultaneously launching his own generative AI venture, TruthGPT.

I suspect most of us are equally conflicted. The wild excitement since December 2022, when ChatGPT was released, has given way to a sobering realization of its ‘supernatural’ power, which is only set to grow. The sentiment in the media seems to have shifted from ‘how ChatGPT will change the world’ to ‘how AI will destroy jobs and humanity’. My own cautious optimism is wavering as I watch respected AI leaders begin to worry. I didn’t think AGI (Artificial General Intelligence, the point at which an AI becomes smarter than humans and can start developing new knowledge on its own) was imminent, but now I’m not so sure. Geoffrey Hinton, the father of deep learning, has said that it is “not inconceivable” that a rogue AGI could wipe out humanity.

OpenAI and DeepMind, the companies at the forefront of the generative AI revolution, have AGI firmly in their sights. OpenAI, for example, strives to “bring AI systems to the world that are generally smarter than humans.” In a contemplative and faintly eerie piece for the Financial Times (https://www.ft.com/3H9tK9c), Ian Hogarth says of AGI: “A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI. A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it… [It] is the number one risk for this century, with an engineered biological pathogen coming in second… If a superintelligent machine decided to get rid of us, I think it would do so very efficiently.” Sam Altman, who is busy building AGI-like systems at OpenAI, says: “Worst case, like, lights out for all of us.” Stuart Russell, an AI professor, imagines one such scenario: suppose the UN asked an AGI to de-acidify the oceans, specifying that all by-products be non-toxic and that no fish be harmed. The AI might develop a self-multiplying biological catalyst that achieves this objective but uses up a quarter of the oxygen in the atmosphere. The fish would live, as specified, but all humans and animals would die.

While I don’t disagree with these dystopian visions, I have a slightly different take. The apocalypse, if it were to come, would come not from a super-intelligent AI but from the fellow human beings who use it. Just as AI won’t take your job but a human using AI will, AI won’t kill you, but a human using AI could. Like nuclear power, AI is a dual-use technology that can be used for good but also for terrible harm, and it is up to humans to decide what to do with these powerful tools. As more and more powerful generative AI tools are created and open-sourced, there is nothing to stop a rogue state actor or a disaffected group of humans from creating malevolent AI. It could be used to jeopardize elections (think a turbocharged Cambridge Analytica) and bring to power a megalomaniacal dictator who could start a nuclear war. Or a regime seeking world domination could use AI to create powerful autonomous lethal weapons. The harm can also be subtle, with persuasive AI agents influencing children and vulnerable adults to kill or die; note the recent example of a Belgian man who died by suicide after a disturbing ‘conversation’ with a realistically human chatbot.

The panacea for all these evils, according to AI experts, is to solve the ‘alignment problem’, so that an AI’s goals are aligned with those of humans. However, it is we humans who have a huge alignment problem of our own. We seem to be growing increasingly divided along ideological, geopolitical and religious lines, and the idea of commonly aligned goals seems like a mirage. If one high-profile entrepreneur cannot align his call to ‘pause’ generative AI development for six months with his urgency to build a company doing the same, it can hardly be any easier for the rest of us.

Jaspreet Bindra is a technology expert, author of ‘The Tech Whisperer’, and is currently pursuing his Master’s in AI and Ethics at the University of Cambridge.

