Artificial Intelligence: The Implications for Commercial Law in Asia

By the SMU Social Media Team

Artificial Intelligence (AI), Machine Learning, Neural Networks, Cognitive Devices – all buzzwords that you’re guaranteed to see in any self-respecting article forecasting trends for 2017.

This year, it seems, is the year when the future will finally arrive – in the form of AI. IBM’s chief innovation officer Bernie Meyerson says 2017 “will be the year of the solution as opposed to the year of the experiment”, so we can expect to see the financial services industry, retailers, pharmaceutical companies, healthcare providers and manufacturers, among others, become wholehearted adopters of AI as the technology goes mainstream.

But is this a positive development? Sure, AI can take the grunt work out of mundane tasks like mortgage approvals, medical diagnoses and even legal rulings – but what are the implications for legal liability and responsibility when things go wrong?

We asked Assistant Professor of Law Eliza Mik from the SMU School of Law to share her insights into the implications of AI for commercial law in Asia.

 

 

AI looks set to pervade every aspect of our lives – the way we communicate, transact, conduct business, travel – what are the key implications for commercial law in Asia as you see them?

EM: Commercial law follows commercial practices, and as the commercial applications of AI are nearly limitless and affect every sector of the economy – from oil and gas to healthcare and banking – the implications for commercial law are also potentially limitless. But they depend very much on how AI is used, which means there will be different implications for different sectors – airlines, finance, healthcare, retail and so on.

 

How prepared is the legal system in the commercial sphere in Singapore and across Asia, to tackle these issues as they arise?

EM: The widespread use of AI brings many legal challenges, particularly around the issues of liability and responsibility. The specific type and range of such challenges depend upon the particular application of the technology. In some instances, the law can remain as it is. In others, it may be necessary to introduce outright prohibitions or restrictions on the use of AI.

Arguably, certain decisions, such as those relating to human life, should be made by humans, not by machines. In some instances the existing regulations can be reinterpreted to address AI-specific issues; in others, it will be necessary to enact new laws. Laws are designed to be flexible and future-proof – but AI may test their limits.

 

How does one determine liability if an AI application causes some kind of harm such as injury or financial loss?

EM: This presents a considerable challenge for many reasons. An AI application may employ several types of software from different manufacturers, and establishing which component caused a particular malfunction, and which manufacturer should be liable, will be very difficult.

It may also be difficult to determine whether the harm is the result of a malfunction or whether the AI operated correctly but produced an unplanned result. It’s important to remember that AI systems are designed to learn from their experiences and gradually change the way they operate. That is the very point of machine learning.
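
To see why this matters for liability, consider a minimal sketch, in Python, of how a self-learning system can drift. Everything in it is hypothetical and illustrative – the perceptron-style update rule, the numbers and the loan-approval framing are assumptions made for this example, not a description of any real product:

# A toy online learner: its answer for the same input changes as it
# trains on post-deployment experience.

def predict(weights, x):
    # Linear score: "approve" (1) if positive, "reject" (0) otherwise.
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def update(weights, x, label, lr=0.1):
    # Perceptron-style rule: nudge the weights whenever a prediction is wrong.
    if predict(weights, x) != label:
        sign = 1 if label == 1 else -1
        weights = [w + lr * sign * xi for w, xi in zip(weights, x)]
    return weights

weights = [0.5, -0.2]         # the model exactly as the manufacturer shipped it
applicant = [1.0, 2.0]        # one fixed input, e.g. a loan applicant's features

before = predict(weights, applicant)   # 1: approved at deployment

# A stream of experience the manufacturer never saw or tested against.
for x, label in [([1.0, 2.0], 0), ([0.5, 1.5], 0), ([1.2, 2.2], 0)]:
    weights = update(weights, x, label)

after = predict(weights, applicant)    # 0: the same applicant is now rejected
print(before, after)

By the time harm occurs, the system’s behaviour reflects data its manufacturer never saw – which is exactly what makes it so hard to separate a malfunction from correct but unanticipated operation.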

There may be instances where liability is imposed on those who design the self-learning algorithm and others where it is imposed on those who created the system as a whole. But every situation must be looked at individually.

Equally, the person using or interacting with the system may also be to blame. For example, if a “self-driving” car instructs you not to take your hands off the wheel and you decide to do so nonetheless, even momentarily, should you remain responsible in the event of an accident? Lawyers and insurance companies will have their hands full for many years to come.

 

What happens when AI devices begin ‘thinking’ for themselves? How does one ascertain liability when things go wrong?

EM: This question really highlights the extent of the issue. Even if AI devices do not think for themselves in the same way that humans do, we must expect that they will become increasingly independent and may produce results that were simply neither anticipated nor intended. And the consequences may be disastrous.

We must not forget, at any stage, that while AI has data-processing skills far superior to those of humans, it cannot differentiate between right and wrong. AI can make sophisticated decisions on the basis of hundreds of simultaneous inputs, but it cannot exercise judgment.

It may still be too early to worry about Hollywood movie scenarios like Terminator, but technology develops quickly and we may be facing some pretty tough legal and ethical questions very soon. There are already proposals in the European Union to create a special type of “legal personhood” for autonomous machines. It is also being debated whether companies using such devices should obtain special insurance.

 

Even the legal profession is being disrupted by AI. What are the key opportunities for the legal profession through the adoption of AI?

EM: AI enables law firms to automate a large number of tasks that do not directly involve higher-level legal expertise. It helps lawyers analyse enormous amounts of text in a very short time, and it assists with document management, especially during the discovery process in litigation.

AI can optimise many processes, but it must not be forgotten that human involvement in legal decisions remains indispensable. Most legal work involves direct interactions with clients and a certain set of “soft skills.” I do not think that AI assistants will be advising on complex commercial and family issues anytime soon. When it comes to legal problems, face-to-face contact with a human lawyer remains essential.

 

Are the big firms under pressure to evolve because of competition from new so-called Law-tech start-ups?

EM: Yes and no. Some of those start-ups make unrealistic promises, such as replacing lawyers altogether. At this stage of development, it is rather unlikely that computers will be able to draft complex legal documents or provide concrete legal advice.

Despite massive advances in machine learning and natural language processing, computers are still not very good at tasks that require an understanding of the meaning of words (as we might have noticed when using Google Translate!). Some Law-tech start-ups may introduce technologies that facilitate certain legal tasks, but there should always be a human lawyer who double-checks the output produced by the computer. AI systems may be super-intelligent, but they lack common sense and empathy. They are impartial and unbiased – but do we really want to eliminate the human element from all areas of the law?

 

How should governments approach AI regulation? Should each approach it in isolation or is a regional or even global approach preferred? How are governments in Asia specifically looking at this issue?

EM: It would be optimal if governments could approach AI in a more organised fashion and develop common approaches to dealing with technological progress. In the near future we can expect some developments in the area of self-driving cars and, most probably, in banking and finance.

The challenge lies in ensuring that regulation doesn’t hamper the development of AI technology, but at the same time is sufficient to prevent AI systems from discriminating against certain groups of people or infringing upon their rights. We do not want to regulate too soon, but neither do we want to regulate too late. It is very difficult to strike a balance between technological progress and basic human rights.

 

And what about cross-border issues? Thinking particularly about the development, deployment on our roads, and regulation of driverless vehicles.

EM: The UN has a working group that is dealing with these issues. Last year, amendments to the 1968 Vienna Convention on Road Traffic came into force, paving the way for driverless cars. In Singapore, the Ministry of Transport has set up a Committee on Autonomous Road Transport for Singapore.

We must realistically expect that regulations will probably start locally and then, progressively, encompass more and more countries. Be it driverless cars or the deployment of AI in finance or healthcare, these technologies have significant implications for both commerce and consumers. International co-operation is indispensable. After all, it may prove difficult to confine AI to one specific jurisdiction.

Learn more about the SMU Master of Laws and SMU Dual LLM in Commercial Law (Singapore and London) programmes and find out how you can be part of the next intake. 

 
