
Humanity Must Establish Its Rules of Engagement with AI — and Soon

October 9, 2023

By: Ronald Crelinsten (Center for International Governance Innovation)

As with any emerging technology, generative artificial intelligence (AI) models present a dual nature. They offer innovative solutions to existing problems and challenges, but they can also enable harmful activities such as revenge porn, sextortion, disinformation, discrimination, and violent extremism. Worries have grown about AI "going rogue" or being misused in inappropriate or unethical ways. As Marie Lamensch aptly notes, "generative AI creates images, text, audio, and video based on word prompts," giving it a wide-ranging influence on digital content.

Stochastic Parrots

The term “stochastic parrot” was coined by Emily M. Bender, Timnit Gebru, and their colleagues to describe AI language models (LMs). They defined an LM as a system that haphazardly strings together sequences of linguistic forms observed in extensive training data, guided by probabilistic information on their combinations, yet devoid of any understanding of meaning. These models essentially mimic statistical patterns gleaned from large datasets rather than comprehending the language they process.
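To make the "stochastic parrot" idea concrete, the toy sketch below generates text purely from word co-occurrence statistics, the way the paper describes: it strings together forms it has seen, guided by how often they follow one another, with no notion of meaning. This is an illustrative simplification, not how modern neural language models are actually built, and the corpus, function names, and parameters are hypothetical.

```python
# A minimal "stochastic parrot": a bigram chain that strings words together
# based only on how often one word was observed to follow another in training
# text. It has no representation of meaning -- just combination statistics.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    """Record, for each word, every word observed to follow it."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def parrot(followers: dict[str, list[str]], seed: str, length: int = 20) -> str:
    """Generate text by repeatedly sampling a follower of the last word."""
    output = [seed]
    for _ in range(length):
        candidates = followers.get(output[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        # Sampling from the raw list reproduces observed frequencies.
        output.append(random.choice(candidates))
    return " ".join(output)

# Hypothetical usage with a tiny corpus:
corpus = "the model predicts the next word the model has seen most often"
model = train_bigrams(corpus)
print(parrot(model, "the"))
```

The output can look locally fluent while being untethered from any intent or truth, which is precisely the property Bender and colleagues warn can be exploited at scale.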

Bender and her team shed light on some of the detrimental consequences, pointing out that prevailing practices for acquiring, processing, and filtering training data inadvertently favor dominant viewpoints. Treating vast amounts of web text as universally representative risks perpetuating power imbalances and reinforcing inequality. Large LMs can generate copious amounts of coherent text on demand, and malicious actors can exploit that fluency and coherence to deceive individuals into perceiving the content as "truthful." Lamensch contends that without appropriate filters and mitigation strategies, generative AI tools absorb and replicate flawed, sometimes unethical, data.

In specific terms, generative AI models are trained on datasets that, however vast, are limited in whose voices they represent and are often rife with misogyny, racism, homophobia, and a male-centric perspective. A persistent gender gap exists in internet and digital tool usage, as well as in digital skills, with women less likely to engage with such tools or develop related skills, particularly in less developed countries. Women who do participate online are targeted with sexualized online abuse more often than men. This is the darker facet of AI: generative AI models reinforce harmful stereotypes, mirroring the biases and ideologies embedded in their source material…
