The Merriam-Webster dictionary notes that a “Pandora’s box” can mean “anything that seems ordinary but may produce unpredictable harmful results.” I’ve been thinking about Pandora’s boxes lately because we Homo sapiens are doing something we’ve never done before: we’re opening two giant Pandora’s boxes at the same time, with no idea what might come out of them.
One of these Pandora’s boxes is labeled “artificial intelligence” and is exemplified by ChatGPT, Bard and AlphaFold, which testify to humanity’s capacity to build, for the first time and in an almost godlike way, something approaching general intelligence, far exceeding the cerebral power with which we naturally evolved.
The other Pandora’s box is labeled “climate change,” and with it we humans are, godlike, driving the planet for the first time from one climatic era to another. Until now, such shifts were driven largely by the natural forces that govern the Earth’s orbit around the Sun.

For me, the big question, as we open both boxes at the same time, is: what kind of regulations and ethics should we apply to manage what emerges?
Let’s face it: we still have not come to grips with how social networks can be used to degrade the two pillars of any free society – truth and trust. So if we approach AI with the same recklessness – if we again follow the reckless mantra Mark Zuckerberg preached at the dawn of social media, “move fast and break things” – oh my, we are going to break things faster, harder and deeper than anyone can imagine.
“There was a lack of imagination when social media first launched, and then a failure to respond responsibly to the unimaginable consequences once it seeped into the lives of billions of people,” said Dov Seidman, founder and chairman of the HOW Institute for Society. “We lost a lot of time – and our way – in the utopian belief that social networks have only upsides, connecting people and giving them a voice. We cannot afford a similar failure with artificial intelligence.”
Hence there is an “urgent imperative – ethical and regulatory – that AI technologies be used only in ways that complement and elevate what makes us uniquely human: our creativity, our curiosity and our capacity for collaboration with others,” added Seidman (a board member of the Planet Word museum, which my wife founded).

“The saying that with great power comes great responsibility has never been truer. We cannot afford another generation of technologists proclaiming their moral neutrality and telling us ‘hey, we’re just a platform,’ as these AI technologies enable exponentially more powerful and radical forms of human augmentation and interaction.”
For these reasons, I asked James Manyika, who heads Google’s technology and society group as well as Google Research, where much of its AI innovation is taking place, for his thoughts on the promise and challenges of artificial intelligence.
“We have to be bold and responsible at the same time,” he said.
“The reason we have to be bold is that, in so many areas, AI has the potential to help people with everyday tasks, to address some of humanity’s greatest challenges – health care, for example – and to enable new scientific discoveries, innovations and productivity gains that will bring greater economic prosperity.”
It will do this, he added, “by giving people everywhere access to the world’s knowledge – in their own language, in the mode of communication they choose, whether text, speech, images or code,” via mobile phone, television, radio or e-book. So many more people will be able to get the best help and the best answers to improve their lives.

But, at the same time, we also have to be responsible, Manyika added, citing several concerns. First, these tools must be fully aligned with humanity’s goals. Second, in the wrong hands, these tools can do enormous damage, whether through misinformation, fake news or hacking (the bad guys are always early adopters).
Finally, “engineering precedes science to some degree,” Manyika explained. In short, even the people who build the so-called large language models behind products like ChatGPT and Bard do not fully understand how they work or the full range of their capabilities. We can build high-performing AI systems, he added, show them a few examples of arithmetic, or of a rare language, or of explanations of jokes, and they will start doing other astonishing things with that knowledge. In other words, we do not yet fully understand how much good or harm these systems can do.
So we need a regulatory framework, but it must be built carefully and iteratively. One size does not fit all.
Why? Well, are you worried that China will overtake America in artificial intelligence, so you want to accelerate our innovation in this field? Or do you want to slow it down? If you really want to democratize AI, you might want to open-source it. But open source can become a field for exploitation: what would ISIS do with the code? So you have to think about something like arms control. If you are worried that AI systems will amplify discrimination, privacy violations and other divisive social ills the way social media did, you need a regulatory framework now.
If you want to reap all the productivity gains AI is expected to bring, you need to focus on creating new opportunities and safety nets for all the researchers, financial advisers, translators and other workers who could be displaced today – and perhaps the lawyers and programmers of tomorrow. And if you are worried about AI becoming super-intelligent and starting to set its own goals, regardless of the danger to humans, you need to stop it immediately.
That last danger is realistic enough that on Monday Geoffrey Hinton, one of the pioneering designers of AI systems, announced he was leaving Google’s artificial intelligence team. Hinton said that while he believed Google had acted responsibly in developing its AI products, he wanted to be free to speak about the risks. “It’s hard to see how you can stop the irresponsible from using AI for fraudulent purposes,” Hinton told The Times’ Cade Metz.

Add it all up and it comes down to this: as a society, we are about to have to decide on some very important trade-offs as generative artificial intelligence is rolled out.
Government regulation on its own will not save us. I have a simple rule: the faster the pace of change and the more astonishing powers we humans develop, the more the old and slow things matter – what you learned in catechism, or wherever you draw your moral inspiration from, matters more than ever.
And the more we scale AI, the more the Golden Rule needs to scale with it: do unto others as you would have them do unto you. Because, given the ever-greater powers with which we are endowing ourselves, we can now do unto each other faster, cheaper and deeper than ever before.
Artificial intelligence and the climate crisis
The same goes for the climatic Pandora’s box that we are opening. As NASA explains on its website, “Over the past 800,000 years, there have been eight cycles of ice ages and warmer periods.” The last ice age ended around 11,700 years ago, ushering in the current climatic era – known as the Holocene – characterized by stable seasons that allowed for the steady development of agriculture, the building of human communities and, ultimately, civilization as we know it today.
“Most of these climate changes are attributable to very small shifts in Earth’s orbit that alter the amount of solar energy our planet receives,” NASA notes.
Well, say goodbye to that. There is now an intense debate among ecologists and geoscientists about whether we humans have pushed ourselves out of the Holocene and into a new era, called the Anthropocene.
The name comes from the words for “man” and “new,” because humankind, “among other lasting impacts, has caused mass extinctions of plant and animal species, polluted the oceans and altered the atmosphere,” as an article in Smithsonian Magazine explains.
Scientists who monitor the Earth system fear that this human-made era, the Anthropocene, will have none of the predictable seasons of the Holocene. Agriculture could become a nightmare.
But in this case artificial intelligence could be a lifesaver – accelerating breakthroughs in materials science, battery density, fusion power and safe nuclear power that will allow humans to manage the now unavoidable impacts of climate change, avoiding those that would otherwise be unmanageable.
But if AI is to pave the way to stemming the worst effects of climate change – if AI really is giving us a second chance – we had better get it right. That means smart regulations for the rapid scaling of clean energy, and sustainable values scaled alongside them. If we do not spread an ethic of conservation – respect for nature and for what it gives us for free, like clean air and clean water – we risk ending up in a world where people believe they are entitled to drive through a forest in their all-electric Hummer. That will not work.
To sum up: we are in the process of opening these two great Pandora’s boxes at once. God help us if we gain divine powers to part the Red Sea but cannot grasp the Ten Commandments.
Source: New York Times