Scientists warn of AI dangers but disagree on solutions

CAMBRIDGE, Mass. (AP) — The computer scientists who have helped build the foundation for today’s AI technology warn of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.

Geoffrey Hinton, the so-called Godfather of AI, retired from Google so he could speak more freely, and he plans to outline his concerns Wednesday at a lecture at the Massachusetts Institute of Technology. He has already expressed regrets about his work and doubts about humanity’s survival if machines become smarter than people.

Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science award, told The Associated Press on Wednesday that he is “pretty much aligned” with Hinton’s concerns about chatbots like ChatGPT and related technology, but fears that simply saying “We are doomed” won’t help.

“The main difference, I would say, is that he’s a rather pessimistic person, and I’m more of an optimist,” said Bengio, a professor at the University of Montreal. “I think the dangers – the short-term ones, the long-term ones – are very serious and need to be taken seriously not only by a few researchers, but also by governments and the public.”

There are many signs that governments are listening. The White House has summoned the CEOs of Google, Microsoft and OpenAI, creator of ChatGPT, to meet with Vice President Kamala Harris Thursday in what officials are describing as a frank discussion about how to mitigate both short- and long-term risks of their technology. European lawmakers are also speeding up negotiations to approve sweeping new rules on AI.

But all the talk of dire future dangers has some concerned that the hype around superhuman machines — which don’t exist yet — is distracting from attempts to establish practical safeguards on current AI products that are largely unregulated.

Margaret Mitchell, former leader of Google’s AI ethics team, said she was upset that Hinton didn’t speak up during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized in products like ChatGPT and Google’s Bard.

“It’s a privilege that he gets to jump from the realities of the propagation of discrimination, the propagation of hate speech, toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalized in tech,” said Mitchell, who was also forced to leave Google in the aftermath of Gebru’s departure. “He’s skipping over all that stuff to worry about something more distant.”

Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, received the Turing Award in 2019 for their discoveries in the field of artificial neural networks, which are fundamental to the development of today’s AI applications such as ChatGPT.

Bengio, the only one of the three who hasn’t taken a job with a tech giant, has for years expressed concerns about the short-term risks of AI, including job market destabilization, autonomous weapons and the dangers of biased data.

But those concerns have escalated recently, leading Bengio to join other computer scientists and tech business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause in developing AI systems more powerful than OpenAI’s latest model, GPT-4.

Bengio said Wednesday that he believes the latest AI language models already pass the “Turing test,” named after the method British codebreaker and AI pioneer Alan Turing introduced in 1950 to measure when AI becomes indistinguishable from a human, at least on the surface.

“This is a milestone that can have drastic consequences if we’re not careful,” Bengio said. “My main concern is how they can be exploited for nefarious purposes to destabilize democracies, for cyberattacks, disinformation. You can have a conversation with these systems and think you are interacting with a human being. They are hard to spot.”

Where researchers are least likely to agree is on how current AI language systems — which have many limitations, including a tendency to fabricate information — will actually become smarter than humans.

Aidan Gomez was one of the co-authors of the groundbreaking 2017 paper that introduced the so-called transformer technique — the “T” at the end of ChatGPT — to improve the performance of machine learning systems, particularly in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez recalls lounging on a couch at the company’s California headquarters when his team sent off the paper around 3 a.m. on the day it was due.

“Aidan, it’s going to be so huge,” he recalls a colleague telling him, of the work that has since helped lead to new systems capable of generating humanlike prose and imagery.

Six years later, and now the CEO of his own artificial intelligence company, Cohere, Gomez is excited about the potential applications of these systems but bothered by scaremongering that he says is “detached from the reality” of their true capabilities and “relies on extraordinary leaps of imagination and reasoning.”

“The idea that these models will somehow gain access to our nuclear weapons and launch some sort of extinction-level event is not a productive conversation to have,” Gomez said. “It’s harmful to those really pragmatic policy efforts that are trying to do some good.”
