By the SMU Social Media Team
The replacement of human beings with machines is no longer the stuff of science fiction. It’s happening all around us.
In some cases, it’s very visible. We’ve seen the testing of driverless cars begin on Singapore’s streets and then halt abruptly because of an accident. Thankfully no one was injured, but the incident raises complex questions about liability and responsibility.
In other cases, it’s behind the scenes. Whether we’re aware of it or not, machines are already making decisions for us in everything from financial services to medical diagnoses – decisions which can sometimes be discriminatory and infringe upon our fundamental rights.
So how should the law respond in these circumstances?
That was the question which brought an audience of more than 220 from the legal, technology, financial services, telecommunications and academic sectors to SMU’s Mochtar Riady Auditorium for the Jones Day Professorship of Commercial Law Lecture, entitled “Taming the Machine? Liability and Responsibility for Machine Learning Technology”.
The session examined whether our existing legal system, which has evolved over hundreds of years, based on the activities of human beings, is still viable as we move towards an environment where machines are making decisions rather than humans.
Professor Chris Reed speaking at the Jones Day Professorship of Commercial Law Lecture
“This is not an area where we have any answers,” opened speaker Chris Reed, Professor of Electronic Commerce Law at the Centre for Commercial Law Studies, Queen Mary University of London. “At the moment society has no idea how it is going to use Machine Learning technology. It is only just arriving with us, and we are only just learning about its effects, but it really has important effects in a whole range of fields.”
In Professor Reed’s view, the fact that we’re in uncharted territory doesn’t mean we should sit back and wait for developments. He outlined three specific reasons why the legal profession should be fully engaged with Machine Learning Technology.
First is the very human reaction to call for regulation whenever a new technology is created: “…certainly we’re going to have some regulation in some places, but one of the arguments I tend to make today is we should be careful about regulating, because we don’t know what it is we’re regulating, and if you’re not careful you can get entrenched in a regulatory regime that’s inappropriate.”
Second, machines are only as reliable as their programming and cannot always be perfect. “They will sometimes make incorrect decisions,” says Professor Reed, “and if those incorrect decisions have consequences that are costly for someone, then the law is looking to compensate in some way, or to punish the person responsible for the machine, depending on the nature of the incorrect decision.”
And third, because machines make decisions based on their programming, and not based on human insight, they sometimes make correct decisions that are still wrong. For example, when Facebook decides whether or not to let you see a particular post in your newsfeed, it produces an objectively correct decision based on its algorithm, but that decision can sometimes be discriminatory. “What if it infringes your rights to free speech, or your rights not to be discriminated against on the grounds of your race or sex or religion?” asks Professor Reed. “How do we cope with that? How should the law respond? Is that just a matter of the law compensating you afterwards, or is it something that needs to be thought about at the design stage of machine learning technology?”
In Professor Reed’s view, there is a place for the law in the development of new technology itself, so that issues of liability and responsibility when things go wrong can be considered from the outset.
Elizabeth Cole speaking at the Jones Day Professorship of Commercial Law Lecture
The legal system is going to have to move fast to keep up with the pace of change, as Elizabeth Cole, partner at Jones Day, pointed out in her commentary following the lecture: “According to Gartner, in 2023, one-third of all highly skilled work done by doctors, lawyers, traders and professors will be replaced by machines or by less skilled workers assisted by cognitive computing technology.”
She went on to underline the potential complexity of unresolved liability issues, particularly when we consider Machine Learning Technology as just one aspect of Artificial Intelligence. She asked, “What happens if the machine starts learning for itself and starts making independent decisions? What if it decides, while driving along, that the only way to save itself as a machine is to hit someone on the sidewalk? Do you make the manufacturer responsible because it put the machine in place, even though the machine learned for itself and developed a self-preservation instinct?”
While many questions were raised during the session, the rapid evolution of Machine Learning Technology meant there were few answers. However, the audience left knowing one thing for certain: the complexity of the legal and ethical issues that arise from Machine Learning Technology is sure to keep the lawyers of today and the legal graduates of tomorrow busy well into the future.
Learn more about the SMU Master of Laws and SMU Dual LLM in Commercial Law (Singapore and London) programmes and find out how you can be part of the next intake.