In this article we reflect on civil liability arising from the use of artificial intelligence (AI) and the problems that this use poses.
Our aim is to raise awareness of the different legal issues arising from the use of systems equipped with artificial intelligence, in particular civil liability for damage caused by machines or devices incorporating this new technology. We will also look at some of the proposals for dealing with the legal issues surrounding damage caused by the actions or omissions of AI machines.
We will also refer to the recent case, publicised in the press on 16 February, of corporate liability arising from the malfunctioning of an artificial intelligence system: “Air Canada must comply with the refund policy invented by the airline’s chatbot”.
The fundamental problem is the impact on society and on the law of actions performed by machines that appear capable of acting and thinking on their own, and the consequent difficulty of establishing the chain of liability when such autonomous action causes a harmful result.
If we add a “lack of regulation” to the mix, the result could hardly be more explosive.
Today we encounter driverless vehicles that cause traffic accidents in which “nobody was driving”, and decisions taken in the business sphere by such systems that have resulted in substantial economic losses.
When a vehicle equipped with an autonomous driving system causes an accident, who is liable for the damage? How do we determine the causal link between the decision taken by the AI and the harmful result?
Who should answer for that damage: the manufacturer who installed the system and created the risk, or the owner of the vehicle who decides to use autonomous driving?
The first question to ask is: what is AI? Legal doctrine has defined it as a scientific discipline concerned with the creation of computer programmes that perform operations comparable to those of the human mind, such as learning or logical reasoning.
It was John McCarthy who, in 1956, defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs”.
In 2020, the European Commission defined AI as “any system based on software or embedded in physical devices that exhibits behaviour simulating intelligence, inter alia, by collecting and processing data, analysing and interpreting its environment and taking action, with some degree of autonomy, to achieve specific objectives”.
In short, we are interested in identifying the sectors in which AI is applied: mobility and transport, social robots equipped with artificial intelligence, and voice assistants such as Amazon’s Alexa or Apple’s Siri.
From a legal point of view, we do not have specialised AI or robotics regulations, so what is being done in practice is to adapt existing regulations to these cases.
In practice, the Civil Code and regulations such as the Intellectual Property Law, the Trademark Law and the General Law for the Defence of Consumers and Users are used to solve the legal problems regarding the use of intelligent systems in our society.
However, this is still a problem because these rules were not created for this purpose and specific regulation is increasingly required.
AI and robotics, owing to their learning capacity, decision-making capacity and unpredictability, put the civil liability regime under strain.
The main problem is the difficulty of identifying the person or entity that can be held liable for the damage, which greatly complicates the civil liability claim and raises problems of attribution.
These cases are characterised by the difficulty of predicting the behaviour of an AI-based product and of understanding what may have caused the damage.
On the other hand, it should be borne in mind that the traditional “finished product” concept under the Defective Products Directive does not work here, as these products are subject to constant upgrades and improvements after they have been put into circulation. The mutability of intelligent systems over short periods of time must therefore be taken into account if a liability regime is to be established.
Liability arising from the use of AI
It is important to establish legal liability for damage caused by the actions of robots. How will the civil liability regime operate in cases where a robot commits a fault that causes damage to a third party? Who is liable?
A new civil liability system will have to be constructed, one adjusted to the peculiarities of intelligent systems.
If it is the AI that causes the damage, who is liable for it?
In practical terms, how can we impute liability for an accident caused by an AI failure to a person who was not driving the vehicle? In our legal system, the person who causes the damage is the one obliged to repair it… but in this case it is the AI.
This is one of the central problems: in these cases the identity between the tortfeasor and the person responsible for the damage is completely broken. To this must be added that the tortfeasor is not a person, nor a minor or an animal for whose acts a guardian or owner is liable, but a “thing”.
In practice, when a vehicle driven by AI hits a pedestrian, the driver is held liable even though he was not driving at the time of the accident. This means he answers for an act he did not directly cause, in the same way that a parent is liable for damage caused by a minor, or an owner for damage caused by an animal in his care, even without having caused it directly.
Serious difficulties arise in identifying the action or omission, the damage and the causal link when AI is involved, since in most cases each element of civil liability must be examined in the abstract before liability can be established.
“Whoever causes damage must repair it and therefore, civil liability requires human conduct, the production of damage and a causal relationship between the conduct and the result produced”.
But in cases where AI intervenes, the party causing the damage is not a natural or legal person but a machine designed to make decisions on its own, after carrying out the work of “processing or reflection” that a person would otherwise have done.
All these elements, and the absence of a specific regulation, make it difficult to compensate the injured party for damage in which AI is involved.
The Spanish legal system does not have a specific regulation on AI, so we must turn to the European Union.
We refer to the European Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics, which focuses on robotics and AI and addresses civil liability for damage caused by autonomous robots, as well as the legal status of robots.
It sets out recommendations to be followed in order to regulate the creation, use and effects of robotics.
The resolution deals with liability for the harmful actions of robots and proposes a directive that would regulate those actions within a legal framework of rights and obligations for robot manufacturers, institutions, owners, users and citizens in general.
The future regulation on liability for the acts or omissions of robots will have to determine whether a machine can be held liable for its conduct and whether it fits within an existing legal category, i.e. whether a robot can be considered, for liability purposes, a natural person, a legal person or an object.
The European Parliament’s Resolution of 20 October 2020 takes a novel approach to regulating robotics and artificial intelligence, proposing a regulation on AI that includes specialised rules on civil liability where damage is caused by an intelligent system.
The Resolution aims to have a legal and harmonised framework based on common principles in order to ensure legal certainty, to establish a level playing field throughout the Union and to protect our European values and citizens’ rights.
The proposed regulation establishes two liability regimes, depending on the type of AI system that caused the damage.
Strict (objective) liability applies to damage caused by high-risk AI systems, and fault-based (subjective) liability to damage caused by AI systems that are not high-risk, the distinction resting on the potential to cause harm to the public. The operator of a high-risk system will be held strictly liable for damage caused by the physical or virtual activity of the devices or processes governed by that system. Compensable damages are bodily injury, property damage and non-material damage.
Everything that is not a high-risk system is classified as “other AI systems”.
Fault-based liability applies to these systems, though in practice it operates as a quasi-strict regime, with a reversal of the burden of proof of fault: the operator (initial, final or both) will be subject to subjective liability for any damage or harm caused by the physical or virtual activity of the system, unless he can prove that the damage occurred without fault on his part.
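To make the structure of this two-tier regime easier to follow, the sketch below models it as a simple decision procedure. It is purely illustrative and not part of the proposal: the names are hypothetical, “high-risk” status is in reality fixed by an annexed list of systems rather than a flag, and the rebuttal of fault is a matter of evidence before a court.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Regime(Enum):
    STRICT = auto()       # high-risk systems: operator liable regardless of fault
    FAULT_BASED = auto()  # other systems: fault is presumed, operator may rebut it

@dataclass
class AISystem:
    name: str
    high_risk: bool  # in the proposal, determined by an annexed list, not a flag

def applicable_regime(system: AISystem) -> Regime:
    """Select the liability regime by the system's risk classification."""
    return Regime.STRICT if system.high_risk else Regime.FAULT_BASED

def operator_liable(system: AISystem, operator_rebuts_fault: bool) -> bool:
    """Strict liability ignores fault entirely; under the fault-based regime
    the burden of proof is reversed, so the operator escapes liability only
    by proving the damage was caused without fault."""
    if applicable_regime(system) is Regime.STRICT:
        return True
    return not operator_rebuts_fault

# An autonomous-driving system versus a customer-service chatbot:
print(operator_liable(AISystem("autonomous driving", high_risk=True), True))   # True
print(operator_liable(AISystem("support chatbot", high_risk=False), True))     # False
```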
Finally, we will refer to the Air Canada case, published in the international press on 16 February, concerning corporate civil liability arising from the malfunctioning of an AI system.
The news reports that “Air Canada must comply with the refund policy invented by the airline’s chatbot”.
The facts alleged are the malfunctioning of an AI system, which issued confusing and contradictory information as company policy, leading a consumer to incur an expense for which the expected partial reimbursement was then refused.
The user, Jake Moffatt, on the day his grandmother died, visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement fares worked, he asked the Air Canada chatbot to explain them to him.
The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and request a refund within 90 days.
In reality, Air Canada’s policy explicitly stated that the airline would not reimburse bereavement travel expenses once the flight was booked.
Following the chatbot’s advice, Mr Moffatt requested a refund, but Air Canada refused; it promised to update the chatbot and offered him a $200 voucher to use on a future flight. Mr Moffatt declined the voucher and filed a small claim before the Civil Resolution Tribunal of British Columbia.
The Tribunal, deciding the case in Moffatt’s favour, said: “Air Canada argues that it cannot be held liable for information provided by one of its agents, servants or representatives – including a chatbot”, but “does not explain why it believes that to be the case”, nor “why the webpage titled ‘Bereavement Travel’ was inherently more reliable than its chatbot”. It ruled that Moffatt was entitled to a partial refund of CAD 650.88 (about USD 482) of the original fare of CAD 1,640.36 (about USD 1,216), as well as additional damages to cover interest on the airfare and court fees. Air Canada stated that it would comply with the judgment and considered the matter closed; in practice, the case ended with the closure of Air Canada’s chatbot.
Air Canada’s defence was that Mr Moffatt should never have relied on the chatbot and that the airline should not be liable for its misleading information because “the chatbot is a separate legal entity that is responsible for its own actions”.
This argument was not accepted by the Tribunal, which considered that “Air Canada did not take reasonable care to ensure that its chatbot was accurate and Air Canada is responsible for all information on its website (…) It makes no difference whether the information comes from a static page or a chatbot”.
This leads to the conclusion that a principle of consistency applies to the information provided by AI systems: the trader is responsible for its content, and the consumer is under no obligation to verify its accuracy.
For now, our legal systems have the tools to respond to this problem. What is certain, however, is that what we might call the “Law of Robots” is appearing on the legal horizon of the European Union. AI is constantly evolving, outrunning current legal solutions and creating scenarios that break down existing structures; a civil liability framework must therefore be built in which those who create the risk and generate the damage are required to respond, so that insurance law can fulfil the social function that is the basis of its existence.