A Light Touch Regulatory Framework for AI

13 November 2019

Background

As Tech becomes more central to our everyday lives, the question of whether regulatory controls are needed over innovative Tech solutions is being raised more frequently.

Many of us had accepted the inevitability of state scrutiny of online communications, as revealed by the initial WikiLeaks/Snowden PRISM revelations.  However, the scale and extent of online political manipulation brought to light in the Facebook/Cambridge Analytica scandal was something of a “game changer”.  It made us much more aware of the real and significant risks associated with the everyday Tech tools that most of us use all the time.  Bias in the use of AI algorithms is well documented, with real and significant consequences for individuals.  There is a risk of “knee-jerk” and potentially inappropriate responses: for example, San Francisco has recently banned the use of facial recognition technology by police officers.

Whilst there are clear risks to individuals and wider society from the introduction of innovative Tech, a balance ought to be struck between overly protective and legalistic regulatory controls (which could limit and stifle innovative Tech developments) and an unregulated Tech “Wild West”.  An overly restrictive legal framework is likely to hold back development, innovation and usage, and effectiveness and efficiency benefits could be delayed or never achieved at all.

Previous attempts to regulate new Tech in other fields have not always been notably successful. An alternative to the regulation of Tech would be to leave this area to the courts to assess on the basis of existing laws.  This has some attractions, but it would lead to considerable legal uncertainty over an extended period and a piecemeal approach until the courts develop a comprehensive body of case law on the issues.

As a result, some form of legal framework for the control of innovative Tech in certain areas should be considered. We advocate a ‘light-touch’ regulatory framework which would be sector-specific, with regulatory controls that focus primarily on transparency obligations and that depend on the human impact of the Tech.  This legal framework needs to take into account the increasing centrality and importance of innovative Tech, which did not apply in the early development of the online world. Paul Nemitz in “Constitutional Democracy and Technology in the Age of Artificial Intelligence”[1] has reminded us of the “principle of essentiality”. Under this principle, any matter which either concerns the fundamental rights of individuals or is important to the state should be dealt with by a parliamentary, democratically legitimized law.

This principle is now particularly relevant to innovative Tech developments, which can have fundamental consequences.  A more controlling framework is needed for Tech that impacts on fundamental rights (such as predictive policing and the use of AI in criminal justice), for Tech deployed in potentially dangerous environments (e.g. autonomous vehicles), and for Tech used in the health sector. Tech developments that do not relate to fundamental rights (even though they are important to individuals), such as shopping apps and some aspects of social media, do not need to be subject to regulatory controls.

We recommend that a ‘light-touch’ but graduated regulatory framework for innovative Tech, with components of self-regulation, should be developed. This regulatory framework should take account of, reinforce and build on the focus on corporate responsibility that many companies are now developing. It should provide a regulatory environment that is applicable and responsive to the Tech environment, using – wherever possible – review and transparency mechanisms instead of heavy-handed, inappropriate and difficult-to-exercise legal remedies.

Under this approach, a balance should be achieved between enabling innovative AI solutions to be developed and implemented and giving sufficient regulatory protection for problems to be tackled.

Key Components of the Light-Touch Framework

The light-touch framework should be organized around two key pillars, as follows:

  • Registrable Tech Products: an index should be developed to determine when innovative Tech solutions should be subject to the framework.  This index should be built from a number of factors, including at least:
    • a quality, rationale and sensibility assessment of the Tech solution and of the data sets used, indicating the overall quality of the product for use in production systems that directly or indirectly interface with individuals. Quality includes using data sets that cover samples from all possible inputs, so as to account for bias in the end product. Rationale requires the product developer to attest (without disclosing the algorithms, where they are not open-sourced) that the AI algorithms used have gone through a product design cycle in which they were vetted by product managers and the development team, with sign-off from management. Sensibility relates to a common-sense assessment of whether AI is needed at all, or whether other pattern-matching algorithms would be better suited to the product.
    • the impact on the fundamental rights of individuals, or the importance to the state: an assessment of the extent to which the Tech solution to be deployed will have an impact on the fundamental rights of individuals, such as their liberty and their political, religious and sexual freedoms, or on issues that are of importance to the state, such as public order, the payment of taxes, and defence and security.  Of course, there may be conflicts between the interests of individuals and the interests of the state, but the purpose of this assessment is to determine whether innovative Tech solutions should be subject to the transparency requirement, so it does not need to reconcile these sometimes-competing principles.

The combined level of the quality, rationale and sensibility assessment and of the impact on the fundamental rights of individuals or the importance to the state will determine whether an innovative Tech solution should be subject to a transparency obligation.
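By way of illustration only, the sketch below shows one way in which such a combined index position might be computed. The field names, 0-10 scales and equal weighting are our own assumptions for the purposes of the example, not part of any proposed legal text.

```python
from dataclasses import dataclass

@dataclass
class IndexAssessment:
    """Hypothetical inputs to the registrable-products index (each scored 0-10)."""
    quality: int        # coverage and representativeness of the data sets used
    rationale: int      # strength of the attested design, vetting and sign-off cycle
    sensibility: int    # was AI genuinely needed, or would simpler techniques do?
    rights_impact: int  # impact on fundamental rights, or importance to the state

def index_score(a: IndexAssessment) -> float:
    """Combine the two pillars into a single position on the index (0 to 1).

    A weaker product (low quality, rationale and sensibility scores) raises
    the score, as does a higher impact on fundamental rights; the 50/50
    weighting is purely illustrative.
    """
    product_risk = (30 - (a.quality + a.rationale + a.sensibility)) / 30
    rights_risk = a.rights_impact / 10
    return round(0.5 * product_risk + 0.5 * rights_risk, 2)
```

On this illustrative scale, a well-built shopping app with negligible rights impact scores close to 0 and would fall outside the framework, while a poorly evidenced predictive-policing tool scores close to 1.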

  • Transparency: in order to create trust and increased openness in the use of innovative Tech solutions, central public registers should be developed for the notification of registrable Tech products built and deployed by organizations, and these registers should be held by independent bodies.  They would allow for public scrutiny of the usage of these solutions. Public scrutiny would put pressure on developers, companies and public entities to give greater consideration to the impact and consequences of their products. The systems should not be shielded by confidentiality concerns, although in some areas national security considerations would need to be taken into account.

Clearly, there are IP and proprietary-rights considerations associated with any transparency regime. However, the level of transparency that is important for public understanding and public scrutiny may not necessarily involve the disclosure of IP rights and trade secrets. Consideration should also be given to greater transparency over the data sets that are used to ‘drive’ the AI. Much of the bias that occurs in the use of these solutions is derived from these data sets.

In passing, it should be noted that the patent system shows that public disclosure is possible in innovation environments.  Patent applications require the public disclosure of inventive technologies, and this constrains IP owners from maintaining confidentiality in their patented inventions, yet innovation has not been stifled.

The level of transparency should also depend on the product’s position on the index of registrable Tech products, as follows (a sketch mapping index scores to these tiers appears after the list):

  • Below an initial threshold – no transparency requirement
  • Product exceeds “basic” threshold – notification only, covering information such as:
    • the problem that the AI is aiming to solve in the product;
    • what the AI algorithm is calculating in order to solve that problem;
    • whether AI decision-making and/or operation forms a major part of the product’s operation;
    • whether the AI decision-making and/or operation can critically affect human lives;
    • whether the AI algorithm has been tested in a sandboxed environment for boundary scenarios and bias;
    • the data sources used by the algorithm;
    • whether the AI used in the product is a learning system that can change its behaviour in flight; and
    • what the AI learns as it encounters new data.
  • Product exceeds “intermediate” threshold – notification (as above) plus an impact assessment: for example, the Canadian government has recently issued a Directive[2] requiring Algorithmic Impact Assessments (AIAs) to be carried out prior to any public sector use of AI in Canada. The AIA is provided as a tool for companies building AI solutions for government. Companies access the AIA online and complete a survey of more than 60 questions about their platform; once finished, they receive an impact level. The survey asks questions such as “Does the system enable override of human decisions?” and “Is there a process in place to document how data quality issues were resolved during the design process?”  This approach could become a more general model, applicable both to public sector and to higher-risk private sector applications of innovative Tech solutions.
  • Product exceeds “critical” threshold – notification and impact assessment (as above) plus peer or “drug testing” review. Whilst the authors consider that in the vast majority of situations involving the use of innovative Tech, transparency and provider-conducted AIAs will be sufficient, there may be circumstances where the risks are so great (predictive policing, potentially) that some form of review prior to deployment is advisable.  We recognize that it is not realistic to expect a government agency or an independent transparency agency to recruit AI experts in sufficient numbers to monitor, assess, rate and certify the AI in products. Any kind of approval or certification required prior to the use and deployment of AI products would stultify development: the time and costs would be considerable, and the skill-sets of any approval body are likely to run well behind those of commercial developers.  However, for the most critical deployments some practical form of prior assessment may be desirable.
  • Legal remedies: legal changes may be needed in order to give citizens effective legal remedies if problems arise through the use of AI in Tech products. At its most basic level, a review should assess whether effective remedies already exist for the protection of human rights and civil liberties that could be impinged by the use of these solutions. For example, what legal rights of redress would an individual have who is arrested or subject to some other form of police action as a result of the output of an AI/algorithmic solution? In the United Kingdom, would the Equality Act 2010 apply if actions are taken by justice system entities, based on the outputs of AI/algorithmic solutions, that infringe a protected characteristic identified by the Act? If not, could the Act be modified so that these actions would fall within its protections?
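To make the graduated tiers above concrete, the sketch below continues the illustrative scoring example given earlier and maps an index score onto the obligations just described. The numeric cut-offs are placeholders of our own, not values proposed in this article; in practice they would be set by the relevant sector-specific regulator.

```python
def required_obligations(score: float) -> list[str]:
    """Map a hypothetical index score (0 to 1) onto the graduated tiers.

    The cut-off values below are illustrative assumptions only.
    """
    obligations: list[str] = []
    if score < 0.25:                      # below the initial threshold
        return obligations                # no transparency requirement
    obligations.append("notification to the central public register")
    if score >= 0.5:                      # exceeds the "intermediate" threshold
        obligations.append("algorithmic impact assessment (AIA)")
    if score >= 0.8:                      # exceeds the "critical" threshold
        obligations.append('peer or "drug testing" review before deployment')
    return obligations
```

On this sketch, required_obligations(0.9) returns all three obligations, while required_obligations(0.3) returns notification to the register only.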

In future articles the authors will expand on the ideas set out above.  In particular, the authors will provide further details of how individual Tech solutions should be categorized within the proposed index of registrable Tech products and provide additional consideration of the types of legal remedy that would be appropriate.


[1] See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3234336

[2] See https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html

About the authors:

Roger Bickerstaff – is a partner at Bird & Bird LLP in London and San Francisco and Honorary Professor in Law at Nottingham University.  Bird & Bird LLP is an international law firm specializing in Tech and digital transformation.

Aditya Mohan – is a founder at Skive it, Inc. in London and San Francisco. He has research experience from Intel Research, IBM Research Zurich, MIT Media Labs and HP Labs. He studied at Brown University and IIT. Skive it, Inc. is a deep learning company based in San Francisco, with an additional registered office in the United Kingdom, building autonomous machines that can feel.
