The U.K. government on Monday (July 18) presented its plan to regulate artificial intelligence (AI) in the country. Like the European Union, the U.K. adopted a risk-based approach in which only high-risk activities will be regulated, leaving low-risk ones unsupervised to foster innovation.
The main difference between the two approaches is that, unlike the EU, the U.K. is not proposing a dedicated AI bill with a single central authority to regulate the use of the technology. Instead, the U.K. is relying on a group of existing regulators to enforce the new AI principles.
The British government published the policy paper, “Establishing a pro-innovation approach to regulating AI,” on July 18, the same day the new Data Protection and Digital Information Bill was introduced in Parliament.
The U.K. has chosen not to regulate AI heavily, if at all, adopting instead a principles-based approach that allows companies to experiment with the technology. For instance, the government plans to introduce an AI sandbox where companies can test new products and services in a safe regulatory environment.
See also: UK Regulators’ Path for AI Starts With Auditing Algorithms
The approach is based on six core principles that regulators must apply, with the flexibility to implement them in ways that best suit the use of AI in their sectors, the government said. The cross-sectoral principles, which draw on the OECD Principles on AI, require developers and users to ensure that AI is used safely; ensure that AI is technically secure and functions as designed; make AI appropriately transparent and explainable; consider fairness; identify a legal person to be responsible for AI; and clarify routes to redress or contestability.
The government acknowledges that a principles-based approach could cause confusion among companies about which areas of business will be regulated, how far the principles reach, or which regulator takes charge when two or more authorities overlap. To provide clarity, the government is entrusting enforcement of the principles to regulators such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency, although the list appears to be open-ended, and other regulators could also be asked to interpret and implement the principles. There is no information yet on how the principles will be interpreted, leaving regulators a wide margin of discretion.
Regulators will, however, be encouraged to consider lighter-touch options, such as guidance and voluntary measures, rather than prohibitions, the government said.
This new approach will require strong collaboration among regulators to ensure they all interpret and apply the principles consistently, even though their powers to do so may differ. That is probably one of the reasons the government has decided to put the cross-sectoral principles on a “non-statutory footing.” In other words, regulators will be tasked with identifying, assessing and prioritizing the specific risks the principles address, but this will not necessarily translate into mandatory obligations. Regulators will use their existing powers and legal mandates to enforce the principles, whether those stem from data protection, privacy or competition law. This period in which the principles are voluntary rather than mandatory is meant to be temporary: it will allow the government to “evaluate and update” its approach and decide whether some regulators need additional powers to enforce the principles.
The government has opened the policy paper and its principles-based approach to public comment. Stakeholders have until Sept. 26 to share their views on whether this system is adequate to foster innovation in AI and on how best to implement the principles.
Read More: UK Seeks Its Place to Shape Global Standards in Artificial Intelligence