Not Everyone Thinks AI Is Going To Destroy Us


If ever a technology split opinion into two schools of thought, artificial intelligence (AI) would be a prime example. Experts differ starkly in assessing the existential risks AI poses, underscoring the ongoing debate over the technology’s potential impact on humanity and commerce.

Case in point: A recent study from the Forecasting Research Institute asked researchers, AI experts and top-performing predictors called “super forecasters” to share their views on how dangerous AI could be. The study found that AI experts were much more worried about AI risks than the super forecasters were. Despite somber reports about a possible imminent AI takeover, many AI professionals hold a more tempered view of the technology.

“These tools are not sentient,” Beth Simone Noveck, director of the Burnes Center for Social Change and professor of experiential AI at Northeastern University, told PYMNTS in an interview. “They are not human. This is data-crunching software. It is true that when, for example, generative AI predicts the next word in a sentence based on having ingested billions of other words, we don’t exactly know how it works.

“But the power of these tools to analyze data, words, and images is not a reason to distract our attention, resources, and focus from addressing how to use these tools right now to address very urgent problems like inequality, climate change, racial justice and more,” she added.

A Deep Divide Over AI’s Future

The recent Institute study examined the divide between optimists and pessimists on AI. To understand why the two groups saw things so differently, the researchers set up discussions in which both sides could share information and arguments (experts spent about 31 hours, while super forecasters spent about 80 hours). They wanted to see whether learning more and hearing the other side’s best points might change anyone’s mind.

The researchers were also interested in finding specific issues that could sway people’s opinions. One big question was whether any organization could demonstrate by 2030 that AI could start making copies of itself, gather resources, and avoid being shut down. If that scenario came to pass, even the more skeptical super forecasters said they would start worrying more about AI.

The study reflects the growing alarm around the risks of AI. A recent report sponsored by the U.S. State Department reveals national security threats due to fast-developing artificial intelligence, highlighting the urgent need for federal action to prevent a crisis.

Compiled after a year of research and interviews with over 200 experts — including senior leaders at major AI firms, cybersecurity specialists, weapons experts and government security officials — the report prepared by Gladstone AI delivers a stark warning: In the worst-case scenario, cutting-edge AI technologies might represent a threat to human survival.

AI Optimism

Certain AI specialists see grounds to dismiss gloomy headlines warning of AI’s potential threat to human existence.

“Simply put, the machine needs humans — and will for quite some time,” Shawn Daly, a professor at Niagara University who was not part of the study, told PYMNTS in an interview.

“We provide not only the infrastructure but also critical guidance the machine can’t do without. As for evil influences utilizing AI to nefarious ends, we’ve managed the nuclear age pretty well, which I find encouraging,” Daly added. 

Recent progress in AI has been about getting better at handling and generating things like text, voice, images, videos and code, not about making better decisions, pointed out Kjell Carlsson, head of AI strategy at Domino Data Lab, in an interview with PYMNTS.

“This is the reason that assisted driving features have advanced dramatically, while a world of fully self-driving vehicles remains science fiction,” he added. “Since AI systems are no more likely to be ‘taking control away from people’ than they were many decades ago, there is no mechanism by which AI can cause existential risks without radical scientific breakthroughs. Instead, any existential risks of AI come from humans using AI for malicious purposes.”

And while many worry about AI’s impact on the workforce, it’s possible that AI could create as many new jobs as it eliminates. Carlsson said that most of the successful applications of AI today are in organizations that empower expert professionals — researchers, engineers, lawyers and software engineers.

“They are dramatically increasing their productivity, effectiveness and value. In essence, they are increasing the returns to skilled labor, which is wonderful news for advanced economies like the U.S., where we can expect economic growth and more, higher-paying jobs,” he added.

Carlsson noted that concerns about AI are justified in low-income countries dependent on outsourced customer service and manual back-office work. “Here, AI is rapidly automating these tasks, which will lead to significant job losses,” he added.

In an interview with PYMNTS, Bruno Farinelli, senior director of operations and analytics at ClearSale, an eCommerce protection company, argued that AI pessimism often results from underestimating human ingenuity and misunderstanding the capabilities of current AI systems. He noted that AI fears frequently stem from a lack of knowledge about how existing AI technologies work.

“Modern AI, like large language models, excels at specialized pattern matching, not generalized reasoning,” he added. “Their capabilities remain narrow and bounded — they simply do not have the drive or ability to become self-motivated threats to humanity.”

Noveck proposed that instead of existential risks, the focus should be on using AI to accelerate the search for cures to diseases like cancer and Alzheimer’s.

She pointed out that fears about AI posing a significant future risk typically arise from discussions with software developers, who rightly insist on the necessity of oversight and regulation. However, Noveck argued that our incomplete understanding of AI’s mechanics shouldn’t lead us to exaggerate its dangers.

“As with any tool, its benefits derive from how we intend to use it,” she added. “We can use these tools to do more meaningful, good work, helping us to be more productive and eliminate drudgery while better serving customers.”
