Amazon Go Grocery, the first full ‘cashier-less’ supermarket, opened in Seattle’s Capitol Hill neighbourhood this week. With the increasing use of artificial intelligence and computers carrying out roles previously undertaken by humans, this article considers the benefits as well as the risks, particularly from the perspective of the insurance industry and whether it is keeping pace with the ever-evolving demands of its insureds.
What is AI?
Artificial intelligence, otherwise known as AI, is the creation of intelligent machines and computer systems designed to act and react like humans. The term ‘intelligence’ can be misleading: computers are only as intelligent as the individuals who build them and create the algorithms that allow them to react to many different situations. A complex algorithm may make a computer appear intelligent, but computers do not have the ability to think freely that would give them true intelligence. AI is already fully integrated into most of our lives. A government publication in September 2019 reported that one in five homes now owns a smart speaker; most people have encountered a chatbot at some point during a customer service call; and autonomous cars are being road-tested globally.
AI is also being adopted in professional industries at a phenomenal rate, and it is easy to see why. Computers can complete a task in a fraction of the time it would take a human to do the same thing, without the need for breaks or refreshments. In some instances, they can even eliminate the risk of human error.
To take an example from a legal setting: in a large commercial litigation claim where personal details need to be redacted from thousands of documents, a task that would previously have taken a team of paralegals several days can now be completed by a computer system in a matter of minutes. In medicine, reviewing routine scans for relatively easily identifiable abnormalities can be undertaken by a computer, freeing doctors’ time for examinations that require face-to-face consultations. The potential time- and cost-saving benefits are enormous. However, there is also an increase in risk.
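For illustration only, the following is a minimal sketch of how such automated redaction might work, written here in Python; the patterns and the sample text are invented for this example, and production systems typically combine pattern matching of this kind with machine-learning entity recognition:

    import re

    # Hypothetical patterns for two common categories of personal detail.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    }

    def redact(text):
        # Replace every match with a labelled placeholder, e.g. [REDACTED EMAIL].
        for label, pattern in PATTERNS.items():
            text = pattern.sub("[REDACTED " + label + "]", text)
        return text

    # One sample 'document'; a real system would loop over thousands.
    print(redact("Contact Jane on 07700 900123 or jane@example.com"))

Trivial as it is, the sketch shows where the human element persists: a pattern the programmer did not anticipate is a detail the system will not redact.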
As AI develops, the scope of its abilities also increases. AI can simulate human cognitive processes, so that with the right information fed into its algorithms it can interpret vast amounts of data, recognise patterns, and make assessments and predictions. It is estimated that AI will soon be so advanced that it will be able to offer legal advice and predict the outcome of legal claims.
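To make that concrete, here is a deliberately simplified sketch of the kind of statistical prediction involved, using the widely available scikit-learn Python library; the features and figures are invented for illustration, and real claim-prediction systems draw on far richer data:

    from sklearn.linear_model import LogisticRegression

    # Hypothetical past claims: [claim value in £000s, documents disclosed,
    # years in dispute], each labelled 1 if the claimant succeeded, else 0.
    past_claims = [[120, 300, 1], [850, 2400, 3], [40, 90, 1], [500, 1200, 2]]
    outcomes = [1, 0, 1, 0]

    model = LogisticRegression(max_iter=1000).fit(past_claims, outcomes)

    # Estimate the probability of success for a new, unseen claim.
    new_claim = [[300, 800, 2]]
    print(model.predict_proba(new_claim)[0][1])

The model ‘learns’ only whatever patterns exist in the data it is given, which is precisely why the human element, and its attendant risks, never disappears.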
Ultimately, with increased learning, the machines get better and better. However, because this is not truly artificial ‘intelligence’ but computer learning through the input of information, there is still a human element at its heart, and the risk of human error remains.
By way of example, the banking sector has already implemented AI extensively to assist in processing the millions of transactions that occur each second across the world, and new processes are being developed continuously. Financial institutions have developed voice recognition software whereby your voice can be used as your unique identifier to provide an additional layer of security. However, as Matt Wilson of the Fraud Risk Services Team at RSM advises, “one of the emerging fraud risks we are seeing is the ability of fraudsters to utilise new computer technology to aid their scams. Fraudsters can use voice recognition technology to replicate a voice, cold calling victims to record enough of their voice that they can use AI voice spoofing software to replicate it. They can then call a person known to the victim and request a money transfer, or mandate amendment, with the person receiving the call being unable to distinguish between the computer and the real person. Any subsequent investigation is hampered, as the person that received the call would be sure they had spoken to the real victim, and so liabilities could be contested if the use of AI is not identified or evidenced”.
Where does the liability lie?
A significant issue with reliance on AI is establishing where liability lies when things go wrong. As noted in the Emerging Risks Report produced by Lloyd’s in 2019, the creators of these systems can be encouraged to think about the long-term effects of what they are making, but a legal framework for determining responsibility is also required. When an error occurs, was it a design flaw? Was it human error? Or was it a failure to predict how the computer was coded to ‘think’?
AI is going to impact all lines of business, and when claims arise it may not be easy to determine whether, and under which policy, they would be covered. It is easy to see how a claim could fall under product liability, cyber or professional indemnity. Using the legal example above, if something were missed and a critical document disclosed to the other side, where would liability lie? Was it a product malfunction where the system simply had an error, in which case it may be a manufacturing issue? Was the information input by the lawyer incorrect or insufficient, in which case there may be a professional negligence claim? Was the system hacked so that the information was deliberately misinterpreted, in which case it could be a cyber matter? The complexity of AI products and the very nature of ‘artificial intelligence’ mean that it may not be readily possible to determine the underlying cause.
CPB Comment
If AI is not to be given its own legal identity, then responsibility for any failures in the tasks to which it is applied has to fall elsewhere. How this will be covered by insurance depends on the AI being used and for what purpose. Ultimately, liability needs to be made clear in the contract. Until we reach that position, however, businesses will be looking to their current insurances to cover the gap.
Although there are some insurance products on the market which may cover these circumstances, the market is unlikely to be able to develop products as fast as AI technology is advancing. Unless and until we reach a position where AI has its own legal identity, or the AI product sets out a clear line of liability, insurers and insureds will need to keep an open line of communication. Businesses should report if and how they are using AI products, and brokers should have a clear idea of where such products should be covered and ensure this is identified to insurers at the proposal stage. While AI offers incredible opportunities for businesses to save money and become more efficient, how to deal with the consequences when things go wrong remains very much a grey area.
Samantha Zaozirny
Associate
T: 0203 697 1906
M: 07880 221676
E: Samantha.zaozirny@cpblaw.com
Dean De Cesare
Solicitor
T: 0203 697 1912
M: 07425 355252
This information has been prepared by Carter Perry Bailey LLP as a general guide only and does not constitute advice on any specific matter. We recommend that you seek professional advice before taking action. No liability can be accepted by us for any action taken or not taken as a result of this information. Carter Perry Bailey LLP is a limited liability partnership registered in England and Wales, registered number OC344698, and is authorised and regulated by the Solicitors Regulation Authority. A list of members is available for inspection at the registered office: 10 Lloyd’s Avenue, London, EC3N 3AJ.