How AI Could Affect Well-Being, Privacy, and Leadership at Work

[Feature Image: Associate Professor of OBHR Michael Bashshur (left) and PhD in Business (OBHR) graduate, Laurel Teo]


By the SMU College of Graduate Research Studies Team

From facial-recognition systems, camera-phone gaze trackers and image stabilisers, to search-engine optimisation, online translation tools, chatbots and even X-ray diagnostics, artificial intelligence (AI) has become common in everyday applications.

Organisations, too, are increasingly adopting AI – including using it to make, or aid in making, decisions. In such instances, AI is meant to enhance efficiency, increase objectivity and reduce bias in decision-making. In short, AI helps organisations achieve better and fairer outcomes.

Nevertheless, research has shown that people consistently resist or have reservations about decisions made by AI.

In this third episode of the Theory of Curiosity podcast series, which features conversations with SMU’s brightest researchers on their work, SMU Lee Kong Chian School of Business (LKCSB) Associate Professor Michael Bashshur and LKCSB final-year PhD candidate Laurel Teo delve into how people respond to the use of AI decision tools at work, and why they respond the way they do. Michael focuses on fairness and corruption research in the field of Organisational Behaviour and Human Resources (OBHR), and has been advising Laurel on her PhD dissertation.

Before enrolling in graduate school, Laurel worked in journalism, consulting and finance, and experienced first-hand how recent technological disruptions such as social media, Big Data and AI have transformed jobs and entire careers in these industries. She witnessed the uneasiness that employees felt with these developments – even as organisations marched ahead with the changes.

“I saw that [employees’ discomfort] wasn’t being addressed, and people weren’t necessarily understanding why that was the case. And that’s why I thought it was important to understand why this is happening and why is it important. Why should organisations care?”


AI and HR

Laurel’s research on the use of AI decision-making in Human Resource Management (HRM) shows that it triggers a response called “uniqueness neglect”. This is a concern that our unique situation and special individual characteristics are “not being adequately considered” in a decision process.

Existing research suggests that people like being special; being made to feel too similar to others arouses discomfort. At the same time, people do not believe that machines – or AI tools – are as capable as human beings at understanding and addressing individual characteristics and unique circumstances. Therefore, when AI rather than a fellow human being is used to evaluate people, it triggers the concern that their unique attributes and circumstances may not be properly recognised and appreciated.

But not every decision made by AI sparks concerns of uniqueness neglect. “For instance, when it’s about song selection or airline flights – something that’s not terribly important – you might be okay,” Laurel explains.


“If it’s about a medical condition [however] there’s a lot of research about how patients are just not comfortable with a robot assessing their medical condition compared to a human doctor. And we found it’s the same in HR management. For example, if a robot or an AI is deciding on your salary or promotion where your career or even the rest of your life is at stake, people tend to be very uncomfortable with that.”

Laurel has found that when employees feel that their uniqueness has been neglected, it impairs their psychological well-being – they feel more stressed and experience greater negative moods and emotions. The consequences of that can be severe. “[We] know that when people are unhappy, they don’t perform well, they’re less satisfied with their jobs, and then they leave, they quit.”

Stress can also lead to absenteeism and hurt physical health. As Michael notes: “With stress, you end up with people not showing up for work more, reporting ill more [often], in fact, getting more ill… It’s not just a cost to the organisation, it’s also a cost to the individual.”


AI and Big Data – Privacy Implications

Aside from unintended psychological consequences, the rise of AI and Big Data, in tandem with remote and hybrid work arrangements, raises further concerns in the work context: privacy and ethics.

Lockdown measures during the Covid-19 pandemic forced organisations worldwide to implement work-from-home practices. To keep an eye on employees who are out of physical sight and to study employee productivity levels, organisations have been putting in place more surveillance technologies and systems.

Such systems collect copious amounts of information about employees and their behaviours – from tracking keystrokes and mouse movements, to monitoring websites surfed or personal emails transmitted using office-issued devices during office hours, to webcams that take regular snapshots to monitor employee presence, and other devices that track physical location and movements.

To safeguard employee privacy, national legislation may require employers to disclose such surveillance to employees. But legislation will not help employees who feel that such digital monitoring is too intrusive. Says Laurel: “If the employer says, I need this information from you, are you going to say, ‘No, I’m going to quit my job’? No, not really.”

The best course of action for employees would be to be clear about their rights, scrutinise employment contracts, and “be very careful before you sign on the dotted line, and make sure you know what you’re getting into”, she adds.

Employers, too, should do their part by taking the initiative to be transparent about the data they’re collecting on employees. As Michael notes: “Why do people trust other people or organisations? One reason is we believe they are benevolent; they actually have our best interests at heart. So when an organisation is hiding how they are using data or not highlighting what’s going on with employee privacy and an employee finds out, that perception of benevolence goes away, trust goes away, perceptions of fairness go away. And this employee will walk when the opportunity comes.”

“This points to the responsibility that comes with this power in terms of how [employers] manage that data and protect that data and use it responsibly.” It also raises questions about how leaders lead in the age of AI and what the advent of AI means for leadership, he adds.


AI and Leadership

When it comes to AI decision-making, it is not only the rank-and-file employees on the receiving end of such decisions who are concerned. Decision-makers in leadership roles, too, have reservations.

Laurel says, “In the usual course of events, [supervisors and managers] would be the ones making the decision. But if they have to delegate it to an AI, then they might feel that there’s a loss of power there.”

This affects how managers see themselves as leaders at work, and leads them to feel that their uniqueness, or their opportunity to be unique, has been compromised. In short, they, too, feel that their uniqueness has been neglected, and such perceptions can lead to higher levels of stress and negative emotions.

Michael describes this as “taking both the carrot and the stick out of the leader’s hands”. “The leader doesn’t have reward or punishment power anymore,” he notes. And the implications? Soft skills such as persuasion and getting people on board with ideas could become more important aspects of leadership, suggests Michael.


It could also change how subordinates or followers perceive leaders. As Michael notes: “It’s going to be really interesting to see whether followers actually do see leaders as less effective when they use AI because they know the decision is not in their hands anymore.”

He also suggests that the use of AI in decision-making could have a greater impact on certain styles of leadership over others. Transactional leadership, which operates on the “carrot-and-stick” model, could become less effective. Other types of leadership that focus on inspiring followers (“where you get people really excited about the vision, are excited about the direction or help them understand how it’s going to help them or develop them”) or building close relationships with followers may be less affected, he adds.

Another way to mitigate perceptions of uniqueness neglect would be to allow leaders to retain some control – or even the perception or “illusion” of control – in the decision-making process even if AI is being deployed, says Laurel.

“If leaders can retain a final sort of last overview decision before it goes out and it’s being implemented, I think a lot of people will be a lot more comfortable with that,” she notes.

“This means offering a ‘panic button’ that leaders can press, even if it’s only pressed 0.1 per cent of the time,” she elaborates.

Michael notes the irony: allowing human intervention in AI decision processes reintroduces the potential for bias into something that was meant to cut bias. “But to get humans to use it, we need them to reintroduce some of that bias or at least feel like they could if they wanted to,” he says.


“Human Behaviour: The Effects of AI on People at Work” is the third episode of Theory of Curiosity – a podcast series dedicated to showcasing the findings of SMU’s postgraduate research professors and students on topics in digital transformation, growth in Asia and sustainable living. Listen and subscribe to the full podcast series here.