Artificial intelligence (AI) systems have been part of our daily lives for several years, but the sudden emergence of generative AI systems (those capable of creating content) has changed everything at a speed that is difficult to keep up with. Traditional AI (a term that already evokes something long established) is present in our daily activities, processing data in quantities that are hard to conceive. It directly and indirectly shapes our consumption habits, both physical and digital, as well as the way we move, shop, and entertain ourselves, among many other things. Generative AI, by contrast, is quite new and has already shown a significant capacity to affect people.
Generative AI can create text, images, sounds, and programming code; it can use personal data such as a person's image or voice and credibly impersonate an identity, among many other uses, some good, others less so. All of this raises obvious ethical concerns stemming from generative AI's dual capacity to produce benefits as well as harm. Traditional AI has its own challenges and characteristics; although they are innumerable, we can mention the imminent arrival of autonomous transportation (cars, aircraft, drones) and its evident capacity to cause damage through errors whose attribution of fault is diffuse and complex.
At the forefront of this issue is the European Union, which, through the European Parliament and its enviable ability both to identify the legislation society needs at a given moment and to develop it under technical and professional parameters, has produced three important initiatives. The first is the AI Act, which establishes and regulates the obligations of AI providers, classifying systems according to their potential risk of harm; it focuses on the protection of fundamental rights and sets standards of transparency and security. It distinguishes AI systems of minimal risk from those of limited risk, the latter being systems that allow users to discern and decide whether to use them.
It then defines high-risk AI systems and a further category of "unacceptable risk," which addresses well-founded collective ethical concerns such as cognitive behavioral manipulation (especially of vulnerable groups susceptible to indoctrination, such as children and minorities at social risk); social classification or scoring (based on ethnicity, socioeconomic status, or other characteristics); and biometric identification systems, the last for reasons that could fill an entire book.
In addition to the European AI regulation (of general application), there is the draft Directive on civil liability for the use of AI which, after extensive studies and discussions, represents a paradigm shift in the application of traditional civil liability rules. This is because it recognizes that damage caused by the use of AI systems creates evidentiary challenges for the victim seeking compensation. In a traditional tort claim, the victim must sue the right person, prove the damage and, very importantly, prove the causal link between the defendant's conduct and the harm suffered. The Directive seeks to ease the victim's burden of proof by establishing a rebuttable presumption (iuris tantum) of the causal link; it also facilitates the victim's access to evidentiary material through the competent courts.
Finally, and sensibly, the European Union proposes reforms to its Product Liability Directive, which since 1985 has imposed strict liability (the existence of negligence is not relevant) for damage caused by defective products. The reforms again recognize very specific problems posed by products based on AI systems that may be considered defective: for example, whom to sue (the developer, the owner, the supplier, the user?) and whether software and digital files (applications, programs, languages, and the like) count as a "product." The traditional rules on liability for damage caused by things, developed in Roman law and perfected by the French codification during the Industrial Revolution, are obsolete.
All of the above has a direct impact on the world of liability insurance. There are new ways of causing damage and new, insurable risks arising from the use of AI-based products and conduct, so the insurance and reinsurance market faces important challenges in adapting. On the one hand, the discussions surrounding the European Union's directive on civil liability for damage caused by AI raise the possibility of mandatory insurance for providers of AI systems, which would create immediate demand. On the other hand, there is a natural demand arising from market competition among companies using AI systems.
The main challenges, on which most experts agree, concern the design of adequate products given the lack of sufficient claims data (on both the occurrence and the form and causes of such claims) and the absence of accurate actuarial models for the frequency and magnitude of losses, among other elements essential to the insurance sector's proper management of risk.
Algorithmic risk (as Zurich and Microsoft called it in their 2021 white paper "Artificial Intelligence and Algorithmic Liability") is already present in daily life and will, in a very short time, grow in direct proportion to the use of AI systems. We should therefore hope that the antiquated regulatory and jurisprudential systems of Latin America begin, at least, to emulate those of other latitudes, so that we can move forward and be closer to the world in which we live.
Jorge Alexander Olivardía
Insuralex Exclusive Member in Panama.