Nations around the world are asking questions about how to use artificial intelligence to solve existing problems while avoiding — or controlling for — the new problems AI might create.
The Japanese government has called on its domestic tech companies to be human-centric when developing or using generative AI.
Japan holds the G7 presidency, and the country’s 10 AI principles are set to be adopted officially by March. The principles are meant to help advance global discussions on AI issues and build a multilateral framework for supranational oversight.
The 10 principles call on domestic Japanese companies to be “focused on human beings,” and require firms integrating and developing AI systems to “respect the dignity of individuals and pay careful attention to potential harm” and “not develop or provide AI for the purpose of wrongfully manipulating human decisions or emotions.”
The guidelines are based on the G7’s Hiroshima AI Process (HAP) for international rulemaking.
G7 nations, including Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, as well as the European Union, all ranked human and fundamental rights as the most important and urgent consideration for AI policy development. Aligning AI systems with human values was also ranked as among the top three most urgent priorities by G7 members.
Read also: AI’s Future Is Becoming Indistinguishable From the Future of Work
Japan’s upcoming guidelines, which impose no penalties on noncompliant businesses, call for organizations to respect human rights and diversity, take measures against disinformation, and refrain from developing, providing or using AI services that wrongfully manipulate people’s decision-making and emotions.
Human-centric innovation was also central to another Japan-hosted international conference, the Internet Governance Forum 2023 (IGF) in October, held under the theme “The Internet We Want – Empowering All People.”
Productivity gains and promoting innovation and entrepreneurship are also viewed by each of the G7 members as among the major opportunities made possible by generative AI.
This tracks with research undertaken by PYMNTS Intelligence in the July report “Understanding the Future of Generative AI,” which found that large language models (the neural networks behind OpenAI’s ChatGPT and Google’s LaMDA) could impact 40% of all working hours.
Technology directly enables new jobs: the trajectory of job creation mirrors the path of innovation. More than 60% of the jobs done in 2018 had not yet been invented in 1940, per an MIT paper.
MIT has also published a policy paper on developing “pro-worker AI.”
AI will likely change the very nature of work. Elon Musk has said that advances in AI will eventually render all jobs obsolete, and executives at OpenAI have said the technology will be able to do any task a human can within 10 years. That prospect is why developing the technology through the lens of supporting humans, rather than replacing them, matters.
As Robin AI Co-founder and Chief Technology Officer James Clough told PYMNTS, AI is “called a co-pilot because a co-pilot implies the existence of a pilot, and it’s still the pilot who’s in control. It’s the pilot who’s setting the direction.”
Experts PYMNTS has spoken with have consistently emphasized that AI should be viewed as a way to augment and enhance, not replace, the work done by humans.
See also: Tech Experts Share What to Ask When Adding AI to Business
The generative AI industry is expected to grow to $1.3 trillion by 2032 and, by optimizing legacy processes, is projected to hand workers back a significant share of their productive time.
The development of generative AI systems should not be a one-time effort. Companies must implement continuous monitoring mechanisms to assess the impact of their AI systems on users and society. Regularly updating models based on user feedback and societal change helps AI systems adapt to evolving needs and values. This iterative process ensures that generative AI remains aligned with human-centric principles over time.
According to PYMNTS Intelligence, 70% of consumers believe AI can already replace at least some of their professional skill sets. Young consumers, those earning over $100,000, and those working in an office environment are most aware of this skill overlap.
By embedding ethical practices, ensuring transparency, promoting inclusivity, empowering users and maintaining a commitment to continual improvement, organizations can harness the benefits of generative AI while minimizing potential risks. A human-centric approach not only safeguards against unintended consequences but also fosters a positive and responsible integration of AI into the rapidly evolving digital landscape.
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.