Imitating Creators: Prospective Governance And Mechanisms For Identifying AI-Generated Content
By: Coran Darling (DLA Piper)
We are seeing an emerging trend of organisations considering the use of generative technologies across all areas of business. Collectively known as ‘generative AI’, these technologies (such as the popular ChatGPT and DALL-E) are capable of taking a prompt from their users and creating entirely new content, such as blog posts, letters to clients, or internal policies.
In a previous article, we examined several points that organisations should consider, such as the potential for IP infringement and inadvertent PR issues. That article went on to consider several steps organisations can take to mitigate these risks, such as regular testing and ensuring appropriate safeguards are put in place. As will be clear to those who have already interacted with these technologies, while there is certainly value in implementing them within certain processes, these safeguards are a necessary step to ensure that the AI is producing accurate output and, in the case of written works, output that is not misleading.
The need for these internal processes can be illustrated with a simple riddle. For this example, ChatGPT was asked the following:
‘Mark has three brothers: Peter, Alex, and Simon. Who is the fourth brother?’.
The AI quickly, though inaccurately, responded that there was no fourth brother, even though, as would be obvious to a human, Mark is clearly the fourth brother.
While the error in this example is humorous and without consequence, the severity changes when generative AI is used for more technical or material purposes. The creation of news articles informing the public of political developments, for example, would clearly require that the information be accurate and reliable. The same could be said for a letter regarding a failure to adhere to contractual terms or service levels. It is clear that a number of outcomes may materially mislead parties (whether mistakenly or otherwise) to their detriment which, in turn, may lead to concerns of dishonesty, fraud, and misinformation…