Does your machine mind? Ethics and potential bias in the law of algorithms
13 min to read


Date
19 June 2017

The Law Society in London held a stimulating debate last week (June 14th, 2017) with the splendid title "Does your machine mind? Ethics and potential bias in the law of algorithms": https://events.lawsociety.org.uk/ClientApps/Silverbear.Web.EDMS/public/default.aspx?tabId=37&id=1809&orgId=1&guid=fd1a8ff4-c4e3-4bcc-8b90-783988b20305. The event was filmed and will be available through the Law Society before too long.

At the event Christina Blacklaws (incoming vice president of the Law Society and director of innovation at Cripps) called for legislation to govern AI. See https://www.lawgazette.co.uk/law/call-for-legislation-to-govern-ai/5061524.article

This could be overly restrictive to the successful development of AI – see http://digitalbusiness.law/2017/02/do-we-need-robot-law/. Whilst appropriate legislative developments may be needed where there are real existing barriers to AI adoption (potentially, the Text and Data Mining Exemption for copyright materials may need review) or where there is a real need to manage and control the implementation or use of AI, there is little need for a new regulatory regime specifically for AI. Some existing laws may need tweaking and modification, and there may be a need for some new laws in specific circumstances (e.g. driverless cars), but not a whole new legal framework.

In the area of algorithmic bias, the Equality Act 2010 already provides considerable protection for minority interests.  The Act applies perfectly well to service providers using AI. Is there any real need for a further regulatory regime in this area that is specific to AI?

Also, in the digital world, laws tend to be overlooked and regarded as relatively unimportant. Greater transparency of the principles, parameters and logic underpinning AI, and algorithms in particular, may lead to public review and scrutiny. This is likely to be a lot more effective in putting pressure on digital players to conform with good principles. Experience shows that a bad review on a review website is likely to lead to almost immediate action by digital companies, compared with a sluggish and legalistic response to claims of breach of the law. Perhaps we need greater legal compulsion on these transparency principles. In fact, data protection law already gives some rights in this area – these rights may need further development.

Here are my speaking notes for the event:

Law Society – Ethics and Potential Bias in the Law of Algorithms

As the practising Technology lawyer on the panel I confess that I feel a little uneasy in discussing ethical and social issues. My day-to-day work relates to the legal implications of the application of technology. A few years ago my work primarily involved the implementation of office automation systems – often "back office" systems. But with the increasing involvement of Tech in our everyday business and personal lives, we cannot ignore the ethical and social dimensions, particularly as AI and machine learning bring Tech into closer interaction with activities that we consider to be intellectual endeavours, rather than simply automation.

The issue of unintended or intentional bias in algorithms takes us into the territory of the principles that should underpin the operation of AI solutions. This is also quite "hot" news with the European Parliament's resolution for a voluntary ethical code of conduct on robotics for researchers and designers, to ensure that they operate in accordance with legal and ethical standards and that robot design and use respect human dignity. They have also asked the EU Commission to consider creating a European agency for robotics and artificial intelligence, to supply public authorities with technical, ethical and regulatory expertise.

As with many technological developments, there is some momentum developing for a new regulatory framework for AI. I agree that we need to assess carefully the extent to which the regulatory environment needs to be modified to allow for the introduction of AI – and to assess the extent to which the regulatory environment may need modification to control undesirable social and economic aspects of AI. I remain to be convinced that we need a specific regulatory framework simply for AI. AI is simply a sophisticated IT tool and should be regulated on that basis.

What I am starting to think may be increasingly important – and this is specifically brought into focus in the discussions over the use of algorithms – is greater legal compulsion to make public, on a transparent basis, the parameters, logic and principles which underpin AI solutions (including algorithms) so that they can be given greater public review and scrutiny. I'll say more about this transparency agenda later, when I look at the existing legal framework for AI.

Firstly – a few words about the idea of a specific legal framework for AI. At its most basic level, the idea of a specific regulatory framework for robotics has been debated in science fiction for over 50 years – Asimov's 3 (or 4) laws of robotics emerged in the early 50s. The idea was that these rules would be programmed into robots so that they would govern their activities.

To recap: Asimov’s Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

NB: Asimov later added a "zeroth" law – 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

BUT Asimov's rules are – of course – fictional devices. They simply don't work in practice. How can a robot be programmed to identify all the possible ways a human might come to harm? How can a robot understand and obey all human orders, when even people get confused about what instructions mean? Most importantly, Asimov's laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people. In real life, it is the humans who design and use the robots who must be the actual subjects of any law. Robots are simply tools of various kinds, albeit very special tools, and the responsibility of making sure they behave well must always lie with human beings.

In order to overcome these limitations the Engineering and Physical Sciences Research Council identified five Principles of Robotics in 2010. They provide a useful framework for the implementation and operation of AI and machine learning:

  1. Robots should be designed and operated to comply with existing law, including privacy.
  2. Robots are products: as with other products, they should be designed to be safe and secure.
  3. Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
  4. It should be possible to find out who is responsible for any robot.
  5. Robots should not be designed as weapons, except for national security reasons.

In my view the EPSRC principles set out a good basis for the development of a legal framework around AI solutions. In particular, their focus on AI solutions as products working within an existing legal framework takes the debate away from concepts such as legal personality for robotics, which in my view are likely to obscure the real issues.

Earlier this year (February), in response to concerns around the potential negative implications of the development of AI, the Future of Life Institute (funded by the co-founder of Skype and a DeepMind researcher) convened a conference to identify principles designed to ensure that AI remains a force for good. These principles were developed at the Asilomar conference venue in California through an extensive process of discussion and consensus with the delegates at the conference.

They are already being referred to as the Asilomar Principles. They consist of three categories:

    1. Research issues
    2. Ethics and values
    3. Longer-term issues

 

The Ethics and Values Principles identified by the Asilomar Conference are:

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Whilst the Asilomar principles do not specifically refer to compliance of AI with the existing legal framework, they do refer to compatibility with the human values of human dignity, rights, freedoms and cultural diversity.

So how does the current legal framework apply to algorithms? It's worth looking at two areas. I'll spend a few minutes looking at equality legislation and the emerging transparency obligations in the GDPR.

Firstly, I will quickly review how the Equality Act 2010 applies in these circumstances. Section 29 of the EA applies to service providers – persons concerned with the provision of a service, goods or facilities to the public or a section of the public, whether or not for payment. Service providers can be individuals, businesses or public bodies. Payment is not a pre-requisite – so it is clear that the Act applies to free IT services, such as search engines, online marketplaces and online recruitment agencies.

Where these services utilise algorithms in the provision of services to the public, that use of algorithms is also subject to the Act.

Section 29 prohibits direct and indirect discrimination:

  • Direct discrimination – where a person is treated less favourably than another person and the reason for the less favourable treatment is one of a specific range of "protected characteristics".
  • Indirect discrimination – where a policy, criterion or practice which is applicable to everyone is shown to put those with a relevant protected characteristic at a disadvantage (this can be either a group of people or a particular individual).

A policy, criterion or practice will not be considered indirect discrimination if it can be shown that it was a proportionate means of achieving a legitimate aim.

The relevant protected characteristics covered by this section are:

  • Age.
  • Disability.
  • Gender reassignment.
  • Marriage and civil partnership.
  • Race.
  • Religion or belief.
  • Sex.
  • Sexual orientation.

So, algorithm-based services which have either a deliberate or an unintentional bias will be in breach of s. 29. The fact that an AI solution or algorithm was the cause of the discrimination is irrelevant. The service provider will be liable.

I think this is the correct approach. The service provider in these circumstances is responsible for the consequences of the use of the algorithm. The service provider may have bought in the AI solution from a third party and may not be the cause of the bias. It may well be that the bias simply emerges from the way that an AI solution interacts with its database – e.g. asking what a successful CEO looks like will inevitably result in images of white, middle-class men.
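To make the indirect discrimination point more concrete, here is a minimal sketch of the kind of internal check a service provider might run over an algorithm's outputs, comparing outcome rates across groups sharing a protected characteristic. The recruitment-screening scenario, the group labels and the 0.8 threshold are hypothetical assumptions for illustration only; this is a screening heuristic, not a test mandated by the Equality Act 2010.

```python
# Illustrative sketch only: screening an algorithm's outputs for potential
# indirect discrimination by comparing outcome rates across groups that share
# a protected characteristic. Group labels and the 0.8 threshold are
# hypothetical assumptions, not requirements of the Equality Act 2010.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (protected_group, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, is_selected in decisions:
        totals[group] += 1
        if is_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a common screening heuristic, not a legal test)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical usage with made-up recruitment-screening outcomes:
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 20 + [("group_b", False)] * 80)
print(flag_disparate_impact(sample))  # {'group_b': 0.2} is flagged
```

A check of this kind would not of itself establish or rebut indirect discrimination, but it illustrates how a service provider could routinely monitor bought-in algorithms for the disadvantage that s. 29 is concerned with.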

Secondly, let's look at privacy issues and the transparency obligations that apply to algorithmic principles under privacy legislation.

Legitimacy of automated decision making:

The automated decision-taking rules in the GDPR are similar to the equivalent rules contained in the Directive (proposals to introduce restrictions on any ‘profiling’ were, in the end, not included in the final GDPR).

The rules relate to decisions:

  • taken solely on the basis of automated processing; and
  • which produce legal effects or have similarly significant effects.

Basically, automated processing can be used where the processing is:

  • necessary for the entry into or performance of a contract; or
  • authorised by Union or Member State law applicable to the controller; or
  • based on the individual's explicit consent.

However, suitable measures to protect the individual’s interests must still be in place.

There are additional restrictions on profiling based on sensitive data – which requires explicit consent, or authorisation by Union or Member State law that is necessary on substantial public interest grounds.

Transparency of Algorithms

There are already provisions in the DP Directive (which will continue in the GDPR) which impose transparency obligations on the use of algorithms when personal data is involved: Article 12(a) of the Directive / Article 15(1)(h) of the GDPR – subject access rights.

As well as specific data access rights (confirmation of whether his/her personal data are being processed and access to the data), the data controller must provide "supplemental information" about the processing.

Already in the Directive, "supplemental information" covers any regulated automated decision-taking (i.e. decisions taken solely on an automated basis and having legal or similar effects; also, automated decision-taking involving sensitive data) – including information about the logic involved and the significance and envisaged consequences of the processing for the data subject.

This is a starting point for regulatory compulsion over the principles used in algorithmic processing. The obligation does not provide the level of clarity that would be desirable in order for greater public review and scrutiny of algorithmic decision-making to be achieved. At the moment the right is limited to situations where the algorithmic processing has some form of legal effect or similarly significant effect. It does not apply to information-only services.
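By way of illustration, the sketch below shows one possible way a controller might record the "logic involved" in an automated decision so that it can be surfaced in response to a subject access request. The field names, the weighting scheme and the wording of the summary are hypothetical assumptions; neither the Directive nor the GDPR prescribes any particular format for this information.

```python
# Hedged sketch: one possible way a controller might record meaningful
# information about the logic involved in an automated decision so that it
# can be surfaced in a subject access response. Field names and the weighting
# scheme are hypothetical assumptions; the legislation prescribes no format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    outcome: str                                       # e.g. "declined"
    main_factors: dict = field(default_factory=dict)   # factor name -> weight
    significance: str = ""                             # envisaged consequences
    made_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def subject_access_summary(self) -> str:
        """Plain-language summary of the decision logic for the data subject."""
        factors = ", ".join(
            f"{name} (weight {weight:+.2f})"
            for name, weight in sorted(self.main_factors.items(),
                                       key=lambda kv: -abs(kv[1])))
        return (f"Automated decision '{self.outcome}' on {self.made_at:%Y-%m-%d}. "
                f"Main factors: {factors}. "
                f"Significance for you: {self.significance}")

# Hypothetical usage:
record = DecisionRecord(
    subject_id="applicant-123",
    outcome="declined",
    main_factors={"credit_history_length": -0.6, "income_to_debt_ratio": -0.3},
    significance="Credit application refused; a human review can be requested.",
)
print(record.subject_access_summary())
```

The point of keeping a record of this kind is that the transparency obligation becomes something the controller can actually discharge on request, rather than an after-the-fact reconstruction.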

In my view, by giving greater transparency over the principles that underpin the use of algorithms, greater public review and scrutiny of algorithms will occur. This will result in pressure on service providers to change these algorithms where they cause problems. In the digital world this public review and comment is likely to be far more effective in controlling the use of algorithms than purely legal remedies. Service providers are responsive to public comment, whereas they can be quite resistant to legal compulsion.

So – in conclusion – let's not get too carried away by the possibly exciting prospect of some new form of legal status being granted to robots. Let's analyse carefully how the existing legal framework applies to these developments in a hard-headed and pragmatic way. Yes – there will be a need for developments to the law to accommodate new technology – there always is – but let's aim to do this in an incremental and sensible way which is consistent with and effective in the digital age.

Thank you….

Written by
Roger Bickerstaff
United Kingdom
Roger is a partner at Bird & Bird LLP in London and San Francisco and Honorary Professor in Law at Nottingham University. Bird & Bird LLP is an international law firm specializing in Tech and digital transformation.