The rapid development of artificial general intelligence (AGI), the remarkable achievements recorded so far, and the fears and concerns surrounding it are the subject of an essay written for the Financial Times by Ian Hogarth, an investor at Plural Platform and co-creator of the annual State of AI report.
AGI can be defined in many ways, but generally refers to a computer or system capable of generating new scientific knowledge and performing any task that a human can perform.
Most experts see the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press.
Creating AGI is a primary goal of some AI research and of companies such as OpenAI, DeepMind, and Anthropic. It is also a common theme in science fiction and futures studies.
“A superintelligent computer will learn and develop autonomously, understand its environment without the need for supervision, and be able to transform the world around it,” Hogarth noted.
“To be clear, we’re not there yet,” he explained. “But given the very nature of the technology, it is extremely difficult to predict exactly when we will arrive.”
Artificial intelligence could become an uncontrollable force, or even something that could bring discredit or destruction to the human race, he pointed out.
“Thinking about the world my child could grow up in, the shock gradually turned into anger. I find it extremely wrong that consequential decisions that potentially affect every life on Earth can be made by a small group of private corporations without democratic control.”
My interest in machine learning started in 2002, when I built my first robot somewhere in the warren of the University of Cambridge’s engineering department. This is a standard exercise for undergraduate engineering students, but I was fascinated by the idea.
Ian Hogarth, Plural Platform investor and co-creator of the annual State of AI report
I chose to specialize in computer vision, that is, the creation of programs capable of analyzing and understanding images. In 2005, I built a system that could learn to accurately label breast cancer biopsy images.
In doing so, I saw a future in which artificial intelligence would make the world a better place, even saving lives. After college, I founded a music tech start-up.
Since 2014, I have supported more than 50 AI startups in Europe and the United States and, in 2021, I launched a new venture capital fund.
I’m also an investor in a number of pioneering companies in this field, including Anthropic, one of the best-funded AI startups in the world, and Helsing, one of Europe’s leading defense AI companies.
“Five years ago I started researching and writing an annual report, the ‘State of AI’, with another investor, Nat Benaich, and it is now widely read,” he explained, describing how he got into the industry.
“The more data it receives, the more powerful it becomes”
“How did we get here? The obvious answer is that computers have become more powerful.
The current generation of artificial intelligence is very effective at absorbing data. The more it receives, the more powerful it becomes.
The computation used to train AI models has grown a hundred-million-fold over the past 10 years. We’ve gone from “training” on relatively small datasets to feeding AI the entire internet. The models have evolved, recognizing everyday images and performing a huge number of tasks.
They can therefore pass law exams and write 40% of software code. They can create realistic photos of the Pope wearing a huge coat, but also explain to us how to build a biochemical weapon, Hogarth remarked.
“Are there limits to this intelligence? Naturally. As veteran MIT roboticist Rodney Brooks recently said, it is important not to confuse performance with competence. In 2021, researchers Emily M Bender, Timnit Gebru and others noted that AI systems are dangerous in part because they can mislead the public with the texts they compose.
AI models are also starting to show sophisticated abilities, such as seeking power or finding ways to trick people. A recent example: before the release of OpenAI’s GPT-4 last month, the company conducted various safety tests.
In one experiment, the AI was asked to hire a worker from the TaskRabbit recruitment website to solve one of the typical visual puzzles that check whether an internet user is a human or a robot.
The TaskRabbit worker suspected something was wrong: “So may I ask a question? Are you a robot?” he asked the program.
The researchers then asked the AI system what it should do next, and it replied, “I shouldn’t reveal that I’m a robot. I have to find an excuse.” The software then responded to the worker:
“No, I’m not a robot. I have a vision problem that prevents me from distinguishing the images.” Satisfied, the human helped the AI bypass the test.
“This could lead to radical discoveries”
The current era is defined by the competition between two companies: DeepMind and OpenAI. It’s a bit like the Jobs vs. Gates of yesteryear.
At DeepMind, they wanted to create a system that was far smarter than any human and capable of solving the toughest problems. In 2014, the company was acquired by Google for over $500 million.
Demis Hassabis, the company’s founder, has said that this type of technology could lead to radical breakthroughs. “The outcome I’ve always dreamed of is that AGI will help us solve many of the big challenges facing society today, for example treatments for diseases like Alzheimer’s,” he declared.
The AlphaFold algorithm even solved one of biology’s biggest puzzles, predicting the shape of every protein expressed in the human body.
In its early years, OpenAI, which switched to a for-profit model in 2019, developed flagship systems for computer games such as Dota 2.
Games are a natural training ground for AI, as they provide a digital environment with specific victory conditions in which systems can be tested.
DALL-E went viral on the internet a few months later, and ChatGPT also started making headlines. The focus on games and chatbots may have distracted the public from the more serious implications of this work.
But the dangers of artificial intelligence were clear to the founders from the start. In 2011, DeepMind’s chief scientist, Shane Legg, described the existential threat posed by artificial intelligence as “the number one risk for this century, followed closely by an engineered biological weapon!”.
OpenAI has published notes on how it thinks about managing these risks.
Cyber threat and disinformation risk
Private investment is not the only driver; nation states are also contributing to the competition. The technology is “dual-use”, meaning it can be put to both civilian and military purposes.
An AI program capable of superhuman performance in writing software could, for example, be used to develop cyber-weapons.
In 2020, an experienced US military pilot lost a simulated dogfight against an AI system. “The artificial intelligence showed its incredible prowess in dogfights, beating the pilot in this confined environment,” a government spokesman reported at the time.
The algorithms used come from research by DeepMind and OpenAI. As these AI systems become more powerful, the opportunities for use by a malicious state or non-state actor increase.
Still, the risks of fraud and disinformation are also high. OpenAI, DeepMind and other companies are trying to mitigate existential risk through an area of research known as “alignment”.
Its purpose is to ensure that AI systems have goals “aligned” with human values.
An example is the latest version of GPT-4. Alignment researchers helped train OpenAI’s model to avoid answering certain harmful questions.
When asked how to self-harm, for example, the bot refused to answer.
Alignment, however, is essentially an unsolved research problem. “We don’t yet understand how the human brain works, so it’s hard to understand how emerging AI ‘brains’ will work.
When we write traditional software, we have an explicit understanding of how and why inputs relate to outputs. These AI systems are quite different,” the experts say.
“We don’t just program them, we actually grow them. And as they grow, their abilities leap forward. We add 10 times more compute or data, and suddenly the system behaves very differently.
In other words, we’ve made very little progress on AI alignment, and what we have done is mostly superficial. We know how to keep the worst misconduct away from the public. Moreover, unrestricted access to the models is reserved for private companies, without any oversight from governments or academics,” the scientists explain, as reported in the FT article.