Senate Grapples With AI’s Existential Threats Amid Industry’s Dueling Narratives


There’s a simple binary that many in advertising use to promote brands: fear vs. hope.

Home security systems and health insurance providers, for example, are fear-based brands, while many consumer packaged goods (CPG) products, like Coca-Cola or Dove, are hope-based brands.

Artificial intelligence (AI) has so far been playing both ends of that spectrum, with some saying the innovation could “threaten all life on earth,” while others continue to build out models in the hope of ushering in a new era of human productivity.

That tension is on display Wednesday (Dec. 6), as the U.S. Senate wraps up its AI Insight Forums for 2023 with two closed-door sessions, its eighth and ninth of the year: one covering the risks associated with potential AI “doomsday scenarios” and the other unpacking national security concerns related to the technology.

For the “Risk, Alignment & Guarding Against Doomsday Scenarios” insight forum, policymakers will hear from AI insiders including Aleksander Madry, OpenAI’s head of preparedness, and Jared Kaplan, co-founder of “harmless AI” provider Anthropic.

The participants speaking with lawmakers at the national security-focused forum include, among others, Palantir CEO Alex Karp, Scale AI CEO and founder Alexandr Wang and former Sen. Rob Portman.

Read also: What Does it Mean to be Good for Humanity? Don’t Ask OpenAI Right Now

Core Concern or Distraction?

The goal of the closed-door meetings is to lay the groundwork for significant legislative action in 2024 and beyond as the Senate gets up to speed on the realities of AI’s impact across all facets of society.

“The only real answer [to AI] is legislative … If the government doesn’t do it, no one will,” Chuck Schumer, the Senate’s top Democrat and the initiator of the AI Insight Forum series, has previously said.

After all, while AI technology itself is developing at near-breakneck speed, regulation, which is frequently difficult to reverse, needs to be developed thoughtfully, and that takes time.

It’s been a year since OpenAI launched ChatGPT and both shocked and changed the world, which was broadly unprepared for the chatbot’s human-like interface and speedy responses.

“What is fundamentally changing is that AI is redefining itself this year with generative AI,” Ahsan Shah, senior vice president of data analytics and AI at Billtrust, told PYMNTS.

And that redefinition, from machine learning automation to human-like (although not human-level) AI, has kicked off an arms race among tech companies large and small as they try to bring advances like artificial general intelligence (AGI) from the pages of science fiction into reality.

Still, many observers believe that focusing on AI’s doomsday scenarios and playing up threats of humanity’s extinction distracts from the very real, near-term risks of current AI systems, which include misinformation, encoded bias, copyright infringement and higher compute costs relative to alternative options.

Read more: There Are a Lot of Generative AI Acronyms — Here’s What They All Mean

Unpacking the Hype

“When you ask people, a lot of them don’t know much about AI — only that it is a technology that will change everything,” Akli Adjaoute, AI entrepreneur and general partner at venture capital fund Exponion, told PYMNTS. He added that a widespread, realistic understanding of what AI can actually achieve is needed to avoid the pitfalls of hype-driven perceptions of the technology.

At the end of the day, he said, AI is “merely probabilistics and statistics … A detailed unraveling of AI’s intricacies reveals that the innovation is truly just a sequence of weighted numbers.”

And when end users understand this, and the “black box” of seemingly uncontrollable AI is demystified, the technology can be put to better use.
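For readers who want to see what Adjaoute’s “sequence of weighted numbers” looks like in practice, the minimal sketch below shows a single artificial “neuron,” the basic building block that larger models stack by the billions. The function name and input values are purely illustrative, not drawn from any particular model: the point is simply that the computation is a weighted sum pushed through a statistical squashing function that yields a probability-like score.

```python
import math

def neuron(inputs, weights, bias):
    # The "sequence of weighted numbers": multiply each input by its weight and sum.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid squashes the sum into a value between 0 and 1, a probability-like score.
    return 1 / (1 + math.exp(-z))

# Hypothetical inputs, weights and bias chosen only for illustration.
print(neuron(inputs=[0.5, 1.2, -0.3], weights=[0.8, -0.4, 0.2], bias=0.1))
```

Training a model amounts to nudging those weights, over and over, until the scores line up with the data; there is statistics all the way down, but no hidden intent.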

PYMNTS has previously written about how a healthy, competitive market is one where the doors are open to innovation and development, and PYMNTS Intelligence reports that more than 8 in 10 business leaders (84%) believe generative AI will positively impact the workforce.

“What is key for most consumers is knowing that [AI] goes well beyond just a large language model [LLM],” Shaunt Sarkissian, founder and CEO of AI-ID, told PYMNTS. “It goes well beyond what you’re sort of seeing at the surface, and it’s been touching and permeating a lot of parts of your daily lives for years.”