Signs of Human Thought in an Artificial Intelligence System – The Experiment That Divided Experts

When Microsoft's scientists began experimenting with a new artificial intelligence system last year, they asked it to solve a puzzle requiring an intuitive understanding of the physical world.

“Here we have a book, nine eggs, a laptop, a bottle and a nail,” they said. “Please tell me how to stack them securely on top of each other.” “Put the eggs on the book,” it replied. “Arrange the eggs in three rows with space between them. Make sure you don’t break them. Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up.”

This clever answer unsettled the researchers.

After the experiment, last March, they published a 155-page report in which they claimed, in effect, that the system was approaching artificial general intelligence, or AGI: shorthand for a machine that could, theoretically, do anything the human brain can do.

Microsoft, the first major tech company to publish a paper making such a bold claim, sparked one of the most heated debates in the tech world: is something like human intelligence being created? Or are some of the industry’s brightest minds letting their imaginations run wild?

“At first, I was very suspicious – and it turned into a feeling of frustration, annoyance, even fear,” said Peter Lee, head of research at Microsoft. “You think: where the hell did that come from?”

Mathematical proofs in poems, unicorns “out of nowhere”

The system the Microsoft researchers were experimenting with, OpenAI’s GPT-4, is considered the most powerful of its kind. Microsoft is a close partner of OpenAI and has invested $13 billion in the San Francisco company.

Among the researchers was Dr. Sébastien Bubeck, a 38-year-old Frenchman and former Princeton University professor. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof that there are infinitely many prime numbers (numbers divisible only by 1 and by themselves), and to do so in the form of a rhyming poem.
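The article does not reproduce GPT-4’s poem, and it does not say which argument the model versified; the classical proof of this fact, due to Euclid, can be sketched as:

```latex
% Euclid's proof that there are infinitely many primes (sketch).
Suppose the primes were a finite list: $p_1, p_2, \dots, p_n$.
Let $N = p_1 p_2 \cdots p_n + 1$.
No $p_i$ divides $N$, since dividing $N$ by any $p_i$ leaves remainder $1$.
But every integer greater than $1$ has at least one prime factor,
so $N$ has a prime factor outside the list.
Contradiction: no finite list contains all the primes.
```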

The poetic proof was so impressive, both mathematically and linguistically, that Bubeck struggled to understand what he was dealing with. “At that point, I was like, ‘What is going on?’” he said in March during a seminar at the Massachusetts Institute of Technology (MIT).

Over several months, the research team documented the complex behavior exhibited by the system, concluding that it “deeply and flexibly” understood human concepts and skills.

GPT-4 users “are amazed at its ability to generate text,” Dr. Lee said.

When they asked the system to draw a unicorn using a drawing language called TikZ, it immediately produced a program that could draw a unicorn. When they removed the part of the code that drew the unicorn’s horn and told the system to modify the program to draw a unicorn again, it did just that.
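For context, TikZ is a drawing language embedded in LaTeX: a program describes a picture as a sequence of geometric commands, which is why producing a recognizable figure from it requires some grasp of shape and layout. A trivial illustration (my own sketch, not GPT-4’s output) of what such a program looks like:

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % Body: a simple ellipse
  \draw (0,0) ellipse (1.5 and 1);
  % Head: a circle attached at the upper right
  \draw (1.8,1) circle (0.5);
  % Horn: a thin triangle on top of the head
  \draw (1.7,1.45) -- (1.9,1.45) -- (1.8,2.1) -- cycle;
\end{tikzpicture}
\end{document}
```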

Microsoft’s paper, titled “Sparks of Artificial General Intelligence,” gets to the heart of what technologists have been working toward, and fearing, for decades: the creation of a machine that works like the human brain. Such a machine could change the world, with all the risks that a technological leap of this magnitude would carry.

Last year, Google fired a researcher who claimed that a similar system showed signs of “sentience,” a claim similar to the one made by Microsoft’s scientists. A sentient system would not only be intelligent; it would also be able to sense what was happening in the world around it.

Some industry experts called Microsoft’s move “an opportunistic attempt to make overstated claims.” Critics also argue that general intelligence requires familiarity with the physical world, which GPT-4 in theory does not have.

“‘Sparks of AGI’ is an example of someone dressing up a research paper as a publicity stunt,” said Maarten Sap, a researcher and professor at Carnegie Mellon University.

Alison Gopnik, a professor of psychology who is part of the artificial intelligence research group at the University of California, Berkeley, said that systems like GPT-4 are undoubtedly powerful, but it is not clear that the text they produce is the result of anything like human reasoning or common sense.

“When we look at a complex system or machine, we tend to anthropomorphize it; everyone does that, people who work in the industry and others,” Dr. Gopnik said. But framing this as a “constant comparison between AI and humans, like some kind of game show,” is not the right way to think about it, she added.

Source: The New York Times
