Robots and AI: Giving Robots ‘Personhood’ Status

Recent advancements in technology have been significant and have brought a wealth of benefits to society. Leading the way in innovative developments is artificial intelligence (AI), which is rapidly becoming embedded in day-to-day life – take Siri and Cortana, which live in our back pockets, assisting with directions, restaurant choices and shopping. Whilst innovation continues to gather pace, concern is growing about the lack of legal certainty on key issues affected by AI, something recently addressed by the Legal Affairs Committee (Committee) of the European Parliament. Andrew Joint reveals a little about the future of AI, robots and the way legislation may form around these developments.

 

A 17-2 Vote For EU-Wide Rules on AI and Robots

On 12 January 2017, the Committee passed a draft report (Report) detailing recommendations on updating Civil Law Rules on Robotics. The proposals marked an interesting step forward for AI within Europe, as the Report explored how AI might be legislated. Notably, these included discussions on introducing personhood status, defining “smart autonomous robots” and creating a European Agency for robotics and AI. In addition, the Report discussed providing accountability in the form of a registration system for smart autonomous robots, mandatory insurance schemes and an advisory code of conduct to guide the ethical design, production and use of robots.

 

Personhood Status for Robots – What, Why, How and Does It Work?

Society has historically placed a strong emphasis on the legal concept of a “person”; it shapes how rules on ownership and liability apply. Throughout history we have attached personhood to the human: individuals own items, commit crimes or enter into contractual relations. Personhood already exists for non-sentient beings in the form of corporate personhood, extending to entities such as limited companies, PLCs and trusts. A company can enter into contracts, incur debt and be held accountable for its actions, and these legal obligations can be distinct from those attached to its parent or subsidiary companies, directors and shareholders.

The question has now been raised as to whether robots or AI should attract a form of personhood status. As the Report notes, a “robot’s behaviour potentially has civil law implications” and accordingly, “clarification of responsibility…and legal capacity and/or status of robots and AI is needed in order to ensure transparency and legal certainty for producers and consumers across the European Union”.

The Report draws a comparison between personhood for companies and personhood for robots. It goes further by recommending a scale of liability that is “proportionate to the actual level of instructions given to the robot and of its autonomy”. The law has shown itself malleable enough to stretch the concept of legal personality to corporates in the past, so, should we philosophically decide we want to, applying the same approach to robots is in theory easily achievable.

 

Points of Deliberation

The impact of personhood discussions for AI is easily seen in a couple of areas – intellectual property rights and liability, both of which are considered in the Report.

The law in the UK already deals with attaching rights to machine-created content and with non-humans owning intellectual property. For example, with copyright, the author of a computer-generated work is the person “by whom the arrangements necessary for the creation of the work are undertaken” (Section 9(3) Copyright, Designs and Patents Act 1988 (CDPA 1988)), and this can include an individual or “a body incorporated under the law of…the United Kingdom or of another country” (Section 154(1)(c) CDPA 1988). IP legislation can therefore already deal with machines creating work and with non-humans owning rights. However, this current law may not suit advanced cognitive AI where the intelligent function is wholly machine generated, removing the traceability of ownership back to a human programmer. Accordingly, the Report urges the Commission to “elaborate criteria for an ‘own intellectual creation’ for copyrightable works produced by computers or robots”.

AI also challenges the view of who is responsible for the actions of technology. At present, the legal framework offers an answer that is indifferent to whether the technology is intelligent: where responsibility lies depends on the facts surrounding use and damage. Typically, responsibility for the outputs of a technology’s use lies with its users, whilst providers of technologies are accountable for the technology they provide. The fact that personhood and liability for robots are discussed at length in the Report is indicative of the need for debate to ensure that there is an agreed “degree of transparency, consistency and legal certainty throughout the European Union for the benefit of consumers and businesses”.

 

What Now?

The Report feels like the first time a major legislative body has considered in substantial detail how legislators might approach the status of AI and the laws governing its development. The Committee has provided relatively comprehensive recommendations; it now remains for legislators to decide on detailed legislative proposals. These more substantive discussions within the EU are expected to take place in spring 2017.

 
