Google has been making waves in the tech industry with its latest artificial intelligence (AI) plans, which include integrating AI into 25 products, such as Search Generative Experience (SGE) and Help Me Write.
However, Google's former safety chief, Arjun Narayan, has raised concerns about using AI to write news stories. Narayan warned in an interview with Gizmodo that AI-generated news poses a real risk because the technology, for all its capabilities, is prone to inaccuracy.
Among the many challenges of using generative AI in news is training the model to suit its purpose while still conveying the truth. These dangers have only been exacerbated by recent cuts at tech companies, which have often hit safety and AI ethics teams. Despite these concerns, news services have been dabbling in AI to expand their product offerings, generate more content, and customize services for readers. Without human oversight, however, such systems risk conveying false information.
In response to these concerns, Narayan emphasized that AI still needs human oversight: its output must be checked and curated against editorial standards and values. He also stressed transparency, saying he personally sees nothing wrong with AI generating an article, so long as it is made clear to the user that the content was generated by AI.
Industry leaders warn that future AI systems could be as deadly as pandemics and nuclear weapons. Recent advances in so-called large language models have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs. Some believe AI could eventually become powerful enough to cause societal-scale disruption within a few years if nothing is done to slow it down.
Despite these concerns, Google has been incorporating AI into its search engine: SGE uses AI to answer questions directly on the Google Search results page, pulling information from websites and linking to the sources used when generating an answer. Since the launch late last year of ChatGPT, an AI chatbot that can produce an original answer to almost any question, companies have been adding generative AI features to their products amid surging public interest.
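The workflow that description implies, retrieving relevant pages, generating an answer grounded in them, and citing the pages used, resembles what is commonly called retrieval-augmented generation. The following is a minimal, self-contained sketch of that general pattern; the toy corpus, keyword scoring, and all function names are illustrative assumptions, not Google's implementation.

```python
# Illustrative sketch of retrieval-augmented answering: retrieve relevant
# pages, generate an answer grounded in them, and cite the sources.
# Toy corpus and naive scoring only; not Google's actual pipeline.

TOY_CORPUS = [
    {"url": "https://example.com/floods",
     "text": "Flood forecasts help communities prepare days in advance."},
    {"url": "https://example.com/asl",
     "text": "Real-time sign language decoding improves accessibility."},
]

def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank pages by keyword overlap with the query (stand-in for real search)."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: -len(terms & set(doc["text"].lower().split())),
    )
    return scored[:k]

def generate_answer(query: str, sources: list[dict]) -> str:
    """Stand-in for a language-model call: stitch an answer from retrieved text."""
    snippets = " ".join(doc["text"] for doc in sources)
    return f"Q: {query}\nA (grounded in retrieved pages): {snippets}"

def answer_with_citations(query: str) -> str:
    """Combine retrieval, generation, and source links into one response."""
    sources = retrieve(query, TOY_CORPUS)
    answer = generate_answer(query, sources)
    citations = "\n".join(f"- {doc['url']}" for doc in sources)
    return f"{answer}\nSources:\n{citations}"

if __name__ == "__main__":
    print(answer_with_citations("How do flood forecasts help communities?"))
```

In a production pipeline of this shape, the human review and editorial checks Narayan calls for would sit between the generation step and publication.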
Google's AI work also shows the positive impact the technology can have on society when used responsibly. The company has launched an AI-enabled platform that displays flood forecasts, available in 80 countries across Africa, Asia-Pacific, Europe, and South and Central America, with predictions issued up to seven days before an incoming flood. Google's list of forecastable places now includes regions where a high percentage of the population is vulnerable to flood risk.
Google is also using AI to make American Sign Language (ASL) more accessible, launching a competition earlier this year to decode sign language in real time.
As Google continues to push the boundaries of AI, it must balance innovation with responsibility. While AI has the potential to revolutionize industries and improve people’s lives, it must be used ethically and with caution. As Narayan noted, human oversight and transparency are crucial to ensuring that AI is used responsibly. By keeping these principles in mind, Google and other tech companies can continue to innovate while also protecting the public from the potential dangers of AI.