
The new Turing test: Are you human?

Nov. 02, 2022 Hi-network.com

In 1950, when Alan Turing conceived "The Imitation Game" as a test of computer behavior, it was unimaginable that humans of the future would spend most hours of their day glued to a screen, inhabiting the world of machines more than the world of people. That is the Copernican Shift in AI.

Tiernan Ray for ZDNet

"I propose to consider the question, 'Can machines think?'"

- Alan Turing, Computing Machinery and Intelligence, 1950

Buried in the controversy this summer about Google's LaMDA language model, which an engineer claimed was sentient, is a hint about a big change that's come over artificial intelligence since Alan Turing defined the idea of the "Turing Test" in an essay in 1950.

Turing, a British mathematician who laid the groundwork for computing, offered what he called the "Imitation Game." Two entities, one a person, one a digital computer, are asked questions by a third entity, a human interrogator. The interrogator can't see the other two, and has to figure out simply from their type-written answers which of the two is human and which machine. 


Why not, Turing suggested, let behavior settle the matter? If a machine answers like a human, then it can be credited with thinking.
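Turing's setup is concrete enough to sketch in code. Below is a minimal Python rendering of the protocol; the two respond functions are hypothetical stand-ins for the hidden human and the hidden program, not anything Turing specified, and the sample question is taken from his 1950 paper.

```python
import random

def human_answer(question: str) -> str:
    # Stand-in for the hidden human: a real person types a reply.
    return input(f"[hidden human] {question}\n> ")

def machine_answer(question: str) -> str:
    # Stand-in for the hidden machine: any program mapping text to text.
    return "I would rather keep that to myself."

def imitation_game(questions):
    # Randomly assign the hidden labels A and B, as Turing described.
    respondents = [human_answer, machine_answer]
    random.shuffle(respondents)
    players = dict(zip("AB", respondents))

    # The interrogator sees only typewritten answers, never their source.
    for question in questions:
        for label, respond in players.items():
            print(f"{label}: {respond(question)}")

    guess = input("Which respondent is the machine, A or B? > ").strip().upper()
    truth = "A" if players["A"] is machine_answer else "B"
    return "machine passed" if guess != truth else "machine detected"

# Sample question from Turing's 1950 paper.
print(imitation_game(["Write me a sonnet on the subject of the Forth Bridge."]))
```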

Turing was sure that machines would get so good at the Turing Test that by the year 2000, "one will be able to speak of machines thinking without expecting to be contradicted." 

A funny thing happened on the way to the future. Humans, it turns out, are spending more and more of their time inside of the world of machines, rather than the other way around. 


Increasingly, humans spend their time doing stuff that a machine could do just as well, if not better. One of the many achievements of modern software is to occupy people's time with easy tasks, such as the busy work of social media: posting, commenting, "liking," and Snapping. 

It's obvious that given half a chance, most machines could replicate social media behavior flawlessly. Not because programs such as OpenAI's GPT-3 language program are human-like, but because the low bar to interacting on social media has redefined what we might accept as "human" behavior.

Everywhere you look, humans are increasingly engaged in behavior that would have seemed like science fiction just a couple decades ago.

Humans spend thousands of hours doing piece work on Amazon's Mechanical Turk in order to generate AI test data. 

Humans work around the clock to moderate content on platforms such as TikTok and Instagram Reels, an activity whose sheer volume of labor would once have seemed like workplace abuse but is now considered a basic necessity for maintaining social media empires and fending off regulators. 

Such activity could, again, conceivably be done as well or better by machine learning algorithms. Like John Henry racing the steam drill, humans are increasingly trying to do a machine's job.
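To make that claim concrete, here is a minimal sketch of machine-side moderation, assuming the Hugging Face transformers library and the publicly available unitary/toxic-bert model; real platform moderation pipelines are proprietary and far more elaborate than this.

```python
from transformers import pipeline

# A sketch of machine-side content moderation using a publicly available
# toxicity classifier (unitary/toxic-bert, an assumption for illustration).
moderate = pipeline("text-classification", model="unitary/toxic-bert")

posts = [
    "Congrats on the launch, the demo looked great!",
    "You are worthless and everyone on this site hates you.",
]
for post in posts:
    # The model is multi-label (toxic, insult, threat, ...), so score each
    # label with a sigmoid; a low top score indicates benign text.
    scores = moderate(post, function_to_apply="sigmoid", top_k=None)
    top = max(scores, key=lambda s: s["score"])
    print(f"{post!r} -> {top['label']} ({top['score']:.2f})")
```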


Devices such as Amazon's Alexa have conditioned people to speak instructions to a digital assistant. Not only is the technology of speech recognition amazing, but the practice of using it constantly is a stunning development in the history of human activity. Amazon's chief technologist, Werner Vogels, has noted that for the elderly, interaction with Alexa via voice has become interaction with a helpmate and companion of sorts.

The most vivid expression of how humans and machines now spend their time is the international eSports contests. While ostensibly competitions pitting human teams against one another to see who is best at video games, they have also become a realm of AI achievement. Machines such as DeepMind's AlphaStar have become as good as humans in some of those contests. 

At one time, the notion that humans would spend endless hours immersed in games on a screen, and that machines would refine their programming by trying to rival humans, would have, again, seemed like bizarre fiction.

All of these changes of behavior add up to what you could call AI's Copernican Shift. In the 1500s, the Polish astronomer Nicolaus Copernicus inverted the commonly held view of the cosmos, concluding that the sun did not revolve around the earth, but the other way around.

Likewise, until the last decade or so, every presumption of machine intelligence involved machines inserting themselves into our world, becoming anthropoid and succeeding in navigating emotions and desires, as in the movie "A.I."

Instead, what has happened is that humans have spent more and more of their time inside computer activities: clicking on screens, filling out Web forms, navigating rendered graphics, assembling iterative videos that produce copycat dance moves, re-playing the same game scenarios in hours-long stretches. 


In the case of Google's LaMDA chat bot, former Google engineer Blake Lemoine was assigned to test the program, an amusing echo of the Turing challenge. Only, in Lemoine's case, he was told up-front that it was a program. That did not prevent him from ascribing sentience, even a soul, to LaMDA.

We don't know exactly how many hours, days, weeks or months Lemoine spent, but spending lots and lots of time chatting with something you've been told is a program is, again, a novel event in human history.

Computer scientist Hector Levesque has pointed out that "the Turing Test has a serious problem: it relies too much on deception." (Emphasis Levesque's.) The free-form nature of the test, writes Levesque, means an AI program can merely engage in a bag of tricks that feel human to the interrogator. 

Such programs "rely heavily on wordplay, jokes, quotations, asides, emotional outbursts, points of order, and so on," writes Levesque, "Everything, it would appear, except clear and direct answers to questions!"

The joke is on Levesque, however, and on all of us. Lemoine became captivated by that bag of tricks. Constant immersion in a world of screens, immersion to a degree that Turing never imagined, has made Turing's test no longer a test of machines but a test of humans, of what humans will accept as valid.


Plenty of AI scholars question the actual intelligence of LaMDA and other chat bots, but their opinion may be in the minority. If the activities not of research but of everyday leisure and productivity increasingly revolve around computer interaction, who is to say the machine on the other side of the screen is not matching humans click for click?

After all, humans using social media aren't interacting with anything other than a stored image or text attached to a name, and yet humans fill their interactions with meaning, getting worked up about political discussions, or inflamed over celebrity behavior. The persona illusion, the feeling that one's online existence is real, is so intense, it's a short step to ascribing sentience to a machine.


In a clever inversion of the Turing Test, a recent Google AI program flips the roles of interrogator and subject. 

Called Interview Warmup, the Google program is an example of Natural Language Assessment, a form of natural language understanding where a program has to decide if free-form answers to a question are appropriate to the context of the question. 

Interview Warmup invites a human to answer multiple questions in a row as a job seeker. The program then evaluates how well the subject's responses fit with the nature of the question. Google suggests Warmup is a kind of electronic coach, a substitute for a human who would help another human prepare for a job interview. 
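Google has not disclosed how Interview Warmup scores responses, but the core judgment, whether a free-form answer is on-topic for a question, can be approximated with sentence embeddings. Here is a minimal sketch, assuming the sentence-transformers library; the model name and threshold are illustrative, not Google's.

```python
from sentence_transformers import SentenceTransformer, util

# A sketch of Natural Language Assessment as relevance scoring, using
# off-the-shelf sentence embeddings as a stand-in for Warmup's internals.
model = SentenceTransformer("all-MiniLM-L6-v2")

def on_topic(question: str, answer: str, threshold: float = 0.4) -> bool:
    """Judge whether a free-form answer is appropriate to the question."""
    q_vec, a_vec = model.encode([question, answer], convert_to_tensor=True)
    # Cosine similarity between the question and answer embeddings.
    return util.cos_sim(q_vec, a_vec).item() >= threshold

question = "Tell me about a time you resolved a conflict on your team."
print(on_topic(question, "I mediated a dispute between two engineers over code ownership."))  # likely True
print(on_topic(question, "My favorite food is pizza."))  # likely False
```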

Seen through the lens of Turing's original scenario, it is a reversal. Humans no longer put a machine in a room to test it. Instead, they subject themselves to the rules of play of a machine, working perhaps in a cooperative fashion for the machine to obtain data on how humans speak, and for the human to receive direction as to how they ought to speak. 

The final frontier is to turn the question around completely, and ask if humans in a computer environment actually display traits that are human. They are offering themselves up in performative videos on TikTok, submitting to a machine that will perhaps make them viral, perhaps not. Is it a human pursuit? Is it a pursuit that a machine could pursue better, using an invented identity? 

In that final frontier, perhaps we are all waiting for the machine to hand down its terms for what it considers sufficiently intelligent. 
