The ‘godfather’ of AI should have spoken earlier

Geoffrey Hinton, the so-called godfather of artificial intelligence, says it is hard not to worry about AI. He is leaving Google and says he regrets his life’s work. Hinton, who made significant contributions to AI research starting in the 1970s with his work on neural networks, said this week that big tech firms were moving too fast in deploying AI to the public. Part of the problem was that AI was gaining human-like capabilities faster than experts had predicted. “It’s scary,” he said.

Hinton’s concerns are understandable, but they would have been more effective had they come several years earlier, when other researchers, who didn’t have retirement to fall back on, were ringing the same alarm bells. Remarkably, Hinton took to Twitter to clarify how The New York Times had characterized his motivations, concerned that the article suggested he had left Google in order to criticize it. “Actually, I quit so I could talk about the dangers of AI without considering how it affects Google,” he said. “Google has acted very responsibly.”

While Hinton’s prominence in the field may have shielded him from repercussions, the episode highlights a long-standing problem in AI research: the field is so dominated by big tech firms that many of their scientists are afraid to voice their concerns for fear of harming their careers.

You can understand why. Meredith Whittaker, a former research manager at Google, had to spend thousands of dollars on lawyers in 2018 after she helped organize a walkout of 20,000 Google employees over the company’s contracts with the US Department of Defense. “It’s really, really scary to go up against Google,” she told me. Whittaker, who is now Signal’s president, eventually resigned from the search giant, publicly warning about the company’s direction.

Two years later, Google AI researchers Timnit Gebru and Margaret Mitchell were fired from the tech giant after they released a research paper highlighting the risks of large language models, the technology at the heart of chatbots and generative AI. They pointed to issues such as racial and gender bias, inscrutability and environmental costs. Whittaker rankles at the fact that Hinton is now the subject of glowing portraits of his contributions to AI, when others took greater risks to stand up for what they believed in while still working at Google. “People with far less power and in more marginalized positions were taking real personal risks to name the issues with AI and the corporations that control AI,” she says.

Why didn’t Hinton speak up earlier? The scientist declined to answer questions. But it appears he has been concerned about AI for some time, even as colleagues agitated for a more cautious approach to the technology. A 2015 New Yorker article describes him talking to another AI researcher at a conference about how politicians could use AI to terrorize people. When asked why he was still doing the research, Hinton replied: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet,” echoing Robert Oppenheimer’s remark about the “technically sweet” appeal of working on the atomic bomb.

Hinton says that Google has acted “very responsibly” in its deployment of AI. But that is only partially true. Yes, the company shuttered its facial recognition business over abuse concerns, and it kept its powerful language model LaMDA under wraps for two years to work on making it safer and less biased. Google has also limited the capabilities of Bard, its rival to ChatGPT. But being responsible also means being transparent and accountable, and Google’s history of suppressing internal concerns about its technology doesn’t inspire confidence.

Hopefully Hinton’s departure and warnings will inspire other researchers at big tech companies to speak up about their concerns. Technology conglomerates have swallowed up some of academia’s brightest minds, lured by high salaries, generous benefits and the vast computing power used to train and experiment with ever more powerful AI models.

Still, there are signs that some researchers are at least considering being more assertive. “I often wonder when I will step down from [AI startup] Anthropic or leave AI altogether,” tweeted Catherine Olsson, a member of Anthropic’s technical staff, in response to Hinton’s comments. “I can already tell this move will hit me.”

There seems to be a fatalistic acceptance among many AI researchers that little can be done to stem the tide of generative AI now that it has swept the world. “The cat is out of the bag,” Jared Kaplan, co-founder of Anthropic, told me in an interview published this week.

But if today’s researchers are willing to speak up now, while it matters, and not just before they retire, we all stand to benefit. ©Bloomberg

Parmy Olson is a Bloomberg Opinion columnist covering technology.
