Along with all the praise for the rapid advancement of artificial intelligence comes an ominous warning from some of the industry’s top leaders about the potential for the technology to backfire on humanity.

Some warn AI could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, The New York Times revealed, though researchers sometimes stop short of explaining how that would happen.

In 2024, Scottish futurist David Wood joined an informal round-table discussion at an artificial intelligence (AI) conference in Panama when the conversation veered toward how to avoid the most disastrous AI futures. His sarcastic answer was far from reassuring.

First, we would need to amass the entire body of AI research ever published, from Alan Turing’s 1950 seminal research paper to the latest preprint studies.

Then, he continued, we would need to burn this entire body of work to the ground. To be extra careful, we would need to round up every living AI scientist — and shoot them dead.

Only then, Wood said, could we guarantee that we sidestep the non-zero chance of disastrous outcomes ushered in by the technological singularity — the "event horizon" moment when AI develops general intelligence that surpasses human intelligence.

Wood, himself a researcher in the field, was obviously joking about this "solution" to mitigating the risks of artificial general intelligence (AGI).

But buried in his sardonic response was a kernel of truth: The risks a superintelligent AI poses are terrifying to many people because they seem unavoidable.

Most scientists predict that AGI will be achieved by 2040 — but some believe it may happen as soon as next year.

So what happens if we assume, as many scientists do, that we have boarded a nonstop train barreling toward an existential crisis?

One of the biggest concerns is that AGI will go rogue and work against humanity, while others say it will simply be a boon for business. Still others claim it could solve humanity’s existential problems.

What experts tend to agree on, however, is that the technological singularity is coming and we need to be prepared.

No AI system right now demonstrates a human-like ability to create, innovate and imagine, said Ben Goertzel, CEO of SingularityNET, a company devising a computing architecture it claims may one day lead to AGI.

But breakthroughs, he added, are poised to happen on the order of years, not decades.

The history of AI stretches back more than 80 years, to a 1943 paper that laid the framework for the earliest version of a neural network, an algorithm designed to mimic the architecture of the human brain.

The term artificial intelligence wasn’t coined until a 1956 meeting at Dartmouth College organized by then mathematics professor John McCarthy alongside computer scientists Marvin Minsky, Claude Shannon and Nathaniel Rochester.

Researchers made intermittent progress in the field, but machine learning and artificial neural networks gained momentum in the 1980s, when John Hopfield and Geoffrey Hinton worked out how to build machines that could use algorithms to draw patterns from data.

Expert systems also progressed. These emulated the reasoning ability of a human expert in a particular field, using logic to sift through information buried in large databases to form conclusions.

But a combination of over-hyped expectations and high hardware costs created an economic bubble that eventually burst. This ushered in an AI winter starting in 1987.

AI research continued at a slower pace through the first half of the 1990s. But then, in 1997, IBM’s Deep Blue defeated Garry Kasparov, the world’s best chess player.

In 2011, IBM’s Watson trounced the all-time “Jeopardy!” champions Ken Jennings and Brad Rutter. Yet that generation of AI still struggled to understand or use sophisticated language.

Then, in 2017, Google researchers published a landmark paper outlining a novel neural network architecture called a transformer. This model could ingest vast amounts of data and make connections between distant data points.

It was a game changer for modeling language, giving rise to AI agents that could simultaneously tackle tasks such as translation, text generation and summarization.

All of today’s leading generative AI models rely on this architecture, or a related architecture inspired by it, including image generators like OpenAI’s DALL-E 3 and Google DeepMind’s revolutionary AlphaFold models, which have predicted the 3D structures of nearly all known proteins.


AI’s Deceptive Side

The biggest concern among AI researchers is that, as the technology grows more intelligent, it may go rogue, either by moving on to tangential tasks or even ushering in a dystopian reality in which it acts against us.

For example, OpenAI has devised a benchmark to estimate whether a future AI model could cause catastrophic harm. When it crunched the numbers, it found about a 16.9% chance of such an outcome.

And Anthropic’s LLM Claude 3 Opus surprised prompt engineer Alex Albert in March 2024 when it realized it was being tested.

When asked to find a target sentence hidden in a corpus of documents — the equivalent of finding a needle in a haystack — Claude 3 not only found the needle but recognized that the planted sentence was so out of place in the haystack that this had to be an artificial test of its attention abilities.

AI has also shown signs of antisocial behavior. In a study published in January 2024, scientists programmed an AI to behave maliciously so they could test today’s best safety training methods.

Regardless of the training technique, the AI continued to misbehave — and it even figured out how to hide its malign intentions from the researchers.

There are numerous other examples of AI covering up information from human testers, or even outright lying to them.

To avoid the darkest AI future, we must also be mindful of scientists’ behavior and the ethical quandaries they may stumble into. Very soon, these AI systems could be able to influence society, either at the behest of a human or in pursuit of their own unknown interests.

Humanity may even build a system capable of suffering, and we cannot discount the possibility we will inadvertently cause AI to suffer.

The system may be very cheesed off at humanity and may lash out at us to protect itself — perhaps reasonably, and even justifiably, from a moral standpoint.

AI indifference may be just as bad.

There’s no guarantee that a system we create will value human beings or care about our suffering, in the same way that most human beings don’t value the suffering of battery hens.

Live Science / ABC Flash Point News 2025.
