It’s (not) alive! Google row reveals AI issues

SAN FRANCISCO: An internal battle over whether Google has built technology with human-like consciousness has spilled into the open, exposing the ambitions and risks of artificial intelligence that can seem all too real.

The Silicon Valley giant last week suspended one of its engineers who argued that the company’s AI system, LaMDA, seemed “sentient”, a claim Google officially rejects.

Several experts told AFP they were also highly skeptical of the consciousness claims, but said human nature and ambition could easily confuse the issue.

“The problem is…when we come across strings of words that belong to the languages we speak, we give them meaning,” said University of Washington linguistics professor Emily M. Bender.

“We do the work of imagining a mind that is not there,” she added.

LaMDA is a hugely powerful system that uses advanced models and training on more than 1.5 trillion words to mimic how people communicate in written chats.

The system was built on a model that observes how words relate to one another and then predicts which word it thinks will come next in a sentence or paragraph, according to Google’s explanation.
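To make that description concrete, here is a deliberately tiny next-word predictor in Python. It is a sketch of the principle only, not Google’s system: LaMDA is a large neural network, while this toy counts word pairs in a made-up training text (the corpus and function names are invented for illustration).

```python
# Toy next-word predictor: count which word follows which in a small
# training text, then predict the most frequent follower. LaMDA does this
# with a huge neural network trained on trillions of words; the core idea
# of "observe word relationships, predict what comes next" is the same.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Bigram statistics: for each word, count the words seen right after it.
followers = defaultdict(Counter)
for word, next_word in zip(training_text, training_text[1:]):
    followers[word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word`."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

# Generate a short continuation by repeatedly predicting the next word.
sentence = ["the"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # -> "the cat sat on the"
```

Swap the counting table for a neural network with billions of parameters and a training set of over 1.5 trillion words, and this same prediction loop becomes the kind of fluent conversation partner the article describes.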

“It’s still at some level about pattern matching,” said Shashank Srivastava, an assistant professor of computer science at the University of North Carolina at Chapel Hill.

“Of course, you can find bits of what would seem like really meaningful conversation, very creative text that they can generate. But it quickly devolves in many cases,” he added.

Still, assigning consciousness is tricky.

This has often involved benchmarks like the Turing test, which a machine is considered to have passed if a human holds a written conversation with it but cannot tell that it is not talking to another person.
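The structure of that test can be sketched in a few lines of Python. Everything here is a stand-in for illustration: `canned_reply` is a hypothetical placeholder, not a real chatbot, and the point is only the shape of the protocol, where the judge converses blindly and then must guess.

```python
# Schematic Turing test: a judge chats with a hidden partner and must
# guess whether it is human or machine. `canned_reply` is a placeholder
# for a real conversational AI; a human partner would answer instead.
import random

def canned_reply(message: str) -> str:
    """Hypothetical stand-in for a chatbot's answer."""
    return random.choice([
        "That's interesting, tell me more.",
        "Why do you say that?",
        "I hadn't thought of it that way.",
    ])

def turing_test(rounds: int = 3) -> None:
    for _ in range(rounds):
        question = input("Judge: ")
        print("Hidden partner:", canned_reply(question))
    # The machine "passes" when judges cannot reliably answer "machine".
    verdict = input("Human or machine? ").strip().lower()
    print("Judge's verdict:", verdict)

if __name__ == "__main__":
    turing_test()
```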

“It’s actually a pretty easy test for any AI of our vintage here in 2022 to pass,” said University of Toronto philosophy professor Mark Kingwell.

“A more difficult test is a contextual test, the kind of thing that current systems seem to get tripped up by: common-sense knowledge or background ideas, the kinds of things algorithms struggle with,” he added.

“No easy answers”

AI remains a hot topic inside and outside of the tech world, one capable of provoking amazement but also a bit of discomfort.

Google, in a statement, was quick and firm in dismissing the idea that LaMDA is self-aware.

“These systems mimic the types of exchanges found in millions of sentences and can riff on any fantastical topic,” the company said.

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making…wide-ranging claims, or anthropomorphizing LaMDA,” the company added.

At least some experts viewed Google’s response as an effort to end the conversation on an important topic.

“I think public discussion of the issue is extremely important, because it is critical that the public understand just how vexing the issue is,” said academic Susan Schneider.

“There are no easy answers to questions of consciousness in machines,” added Schneider, founding director of Florida Atlantic University’s Center for the Future Mind.

A lack of skepticism among those working on the topic is also possible at a time when people are “swimming in a tremendous amount of AI hype”, as linguistics professor Bender put it.

“And lots and lots of money is being thrown at this. So the people working on it have this very strong signal that they’re doing something important and real,” which means they don’t necessarily “maintain appropriate skepticism”, she added.

In recent years, AI has also suffered from bad decisions: Bender cited research that found a language model could pick up racist and anti-immigrant biases from training on internet text.

Kingwell, the University of Toronto professor, said the question of AI sentience is part “Brave New World” and part “1984”, two dystopian works that grapple with issues such as technology and human freedom.

“I think for a lot of people, they really don’t know which way to turn, and hence the anxiety,” he added.