The Internet of Things, Ethically Speaking

By the SMU Social Media Team

Driverless cars, smart cities and homes, devices to monitor and maintain our health – Gartner forecasts that by 2020 there’ll be more than 20 billion internet-connected devices in use.

It’s difficult not to get excited by the possibilities of the Internet of Things (IoT).

But as we all get swept up in anticipation of the way these new devices and technologies will improve our lives, we asked Tan Hwee Pink, Associate Professor of Information Systems at SMU's School of Information Systems, whether we're overlooking a more serious side to the Fourth Industrial Revolution.


In our excitement about the promise of IoT, are we overlooking the potential downsides?

THP: As more smart devices connect to the Internet, the attack surface for cyber attacks expands. The key downsides encompass both data security and privacy.


In the context of smart homes, for example, where we increasingly see voice-activated home automation gadgets such as Amazon Echo and Google Home, a malicious security attack on one of these devices could be severe – disabling all your smart appliances, including your smart door lock.

Privacy concerns will increase exponentially as more and more data is collected, often passively, through IoT, via connected devices including smartphones, wearables, and smart appliances. While on the surface the data is used to make our lives easier, it’s often unclear where and with whom this data is shared. Opportunities for it to fall into the wrong hands are also on the rise.


How will these downsides lead to vulnerability in cities like Singapore where smart buildings, driverless cars and assistive healthcare technology are already a reality?

THP: In Singapore, buildings are being made smarter through the installation of networked sensors, actuators and other systems that collect more and more information in real time, enabling these systems to respond to changes in their environment.

There is a real potential for these systems to come under attack. For example, information collected about indoor temperature and humidity can be used to intelligently regulate air-conditioner settings for users' comfort, but a security breach could allow a malicious attacker to disable the air conditioners altogether.

A different scenario could unfold through assistive healthcare technologies, such as networked medical devices, which measure physiological parameters like blood pressure and weight. These measurements are shared with family doctors so they can schedule follow-up visits on a data-driven, as-needed basis. However, unauthorised sharing of this data with third parties such as insurance companies could lead to an invasion of privacy, with unsolicited sales visits.


Is there an ethical obligation for developers of IoT technology to consider these security and privacy concerns?

THP: My view is that it may be difficult to impose an ethical obligation on developers of component IoT technologies and products targeted at individual consumers. At the very least, however, developers should be obliged to caution users about potential security and privacy risks, and about methods to mitigate them.

That said, solution providers and system integrators – those responsible for assembling different pieces of IoT technology into larger-scale Smart City applications such as Singapore's Smart Nation initiatives – should have an ethical obligation to consider the long-term impact of security and privacy, given the scale at which organisations and people would be affected by breaches.


Most developers are motivated by pushing the boundaries of technology, not ethics – so is there a danger that a focus on long-term ethics will stifle innovation?

THP: Such a risk is plausible – there is often a trade-off between security and performance.

To overcome this, the level of security and privacy provisioning should be use-driven and tailored to each device, instead of a one-size-fits-all approach. So, while pushing the boundaries of technological possibility, developers should consider an appropriate level of security and privacy provisioning from the outset, instead of applying it retrospectively, as an afterthought.


How might a code of ethics be applied in so-called Smart Cities like Singapore?

THP: At the national level, a realistic solution is to impose security and privacy requirements in technical references relevant to IoT solutions for Smart Cities. In Singapore, the Information Technology Standards Committee has formed technical committees and released several such technical references since 2014. But ultimately, a smart city can only be as smart as its citizens, who will need to embrace and adopt this technology as part and parcel of their lives. For them to do so, they should be made aware of, and educate themselves about, the potential security and privacy risks.


How can consumers protect themselves and what safeguards can they put in place while enjoying the benefits of IoT?

THP: Consumers should certainly take a proactive approach. Incredibly, a recent study found that 80% of IoT apps are not tested for security flaws!

While there is growing awareness of the potential downsides of the IoT, most of us unfortunately still tend to overlook the fine print and just click 'accept' on the terms and conditions of mobile applications, unwittingly signing away our personal data to third-party companies.

The best protection for consumers is to make a point of understanding the agreement terms of their IoT devices and associated mobile applications, as well as actively securing all their devices connected to the Internet, rather than assuming that their devices come with built-in protection.