Mint Explainer: Why are AI experts in an apocalyptic bind?

What is their fear around AI?

Altman, Hinton, Scott, internet security and cryptography pioneer Bruce Schneier, climate advocate Bill McKibben, musician Grimes, and researchers from Google DeepMind and Anthropic, among thousands of others, recently signed a statement published by the Center for AI Safety, which reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

On 28 April, over a thousand people, including Elon Musk, Yoshua Bengio, Stuart Russell, Gary Marcus and Andrew Yang, called for a six-month moratorium on training systems “more powerful than GPT-4”, arguing that such systems should be developed only once the world is confident that their effects will be positive and their risks manageable. Yuval Noah Harari said in an April 23 interview with The Telegraph, “I don’t know if humans can survive AI.”

Why are these experts so scared of AI?

The arguments of Tristan Harris and Aza Raskin of the Center for Humane Technology give us a clue. In ‘The AI Dilemma’, the two outline why generative AI is so formidable: it can treat everything as a language and predict not only the next word but also the next image, sound, and so on. Even “DNA is just another kind of language… and… any advance in one part of the AI world became an advance in every part of the AI world…”, they note, dubbing large language models (LLMs) ‘Gollum-class AIs’ (after the fictional split-personality creature in ‘The Lord of the Rings’).

They compare the threat of AI with that of nuclear technology: both can be helpful and destructive, yet it took the world 65 years to ban the use, possession, testing and transfer of nuclear weapons under international law. And since “nukes don’t make stronger nukes, but AI makes stronger AI”, the authors argue we may not have 65 years to put guardrails around AI before it spirals out of control. They concede that AI can help us achieve major advances such as curing cancer or addressing climate change, but caution that if our dystopia is bad enough, it won’t matter how much we want to make a good utopia.

What about the experts who disagree?

Generative AI has undoubtedly polarized AI experts, with some calling the proposed six-month moratorium a “terrible idea” because the benefits of AI far outweigh the perceived risks of misinformation, deepfakes, voice cloning, plagiarism, job losses and more. Tech developer Enias Cailliau, for example, is trying to ‘clone’ an AI replica of his girlfriend, and his open-source software is available for anyone to use.

But Stanford AI Lab director Christopher Manning argued in a May 31 tweet that “… most AI people work in the quiet middle: We see huge benefits from people using AI in health care, education … And we see serious risks and disadvantages of AI but believe we can mitigate them with careful engineering and regulation, as happened with electricity, cars, planes.” Yann LeCun, Meta’s chief AI scientist and a so-called ‘godfather’ of AI who echoes these views, responded with a cryptic tweet that read “AI’s silent majority”, hinting that such voices are rarely heard.

What are governments doing to regulate AI?

The United Nations Educational, Scientific and Cultural Organization (UNESCO) is clear that AI “cannot be a no-law zone”, and recommends international and national policies and regulatory frameworks. It calls for a “human-centred AI” that serves the greater interest of people, not the other way around.

The EU’s AI Act seeks to regulate AI systems in four categories ranging from “unacceptable risk” to “minimal risk”. Canada has drafted the Artificial Intelligence and Data Act (AIDA), while the US has a blueprint for an AI Bill of Rights along with state-level initiatives. China’s draft ‘Administrative Measures for Generative AI Services’ is currently open for public consultation, while Brazil and Japan also have draft regulations.

India’s Digital India Act (DIA), which will replace the IT Act, 2000 when notified, is expected to regulate AI and intermediaries, but a separate law for AI is not yet on the cards.

Will machines ever achieve Artificial General Intelligence (AGI)?

Generative AI refers to a broad swathe of AI models and tools, such as ChatGPT, DALL-E, Midjourney, Bing Chat, Bard and LLaMA, that are designed to generate content such as text, images, videos, music or code (hence the term ‘multimodal AI’). But is this exponential growth the first step towards AGI?

These machines are becoming increasingly efficient at handling very specific tasks, which may tempt us to ascribe human-like intelligence to them, but even driverless cars and trucks, no matter how impressive, are still expressions of “weak” or “narrow” AI.

As impressive as their achievements may be, can these systems really think and understand like humans? “Although the workings of LLMs are too complex for us to fully understand now… we can say with confidence that ChatGPT is not conscious. LLMs are not intelligent. They are systems trained to give the outward appearance of human intelligence,” argues Philip Goff, an associate professor of philosophy at Durham University, in a May 23 article in Singularity Hub. Eerie, perhaps, but not scary.


Updated: June 01, 2023, 03:34 PM IST