The only way is ethics: the five crucial questions we all have to ask about customer experience

Throughout history, people have always had a ‘What can we do?’ mentality about technology. It is an attitude that has kept us moving forwards, but in future decades, we may also need a ‘What should we do?’ mentality to make sure we are making the right choices, says Steven van Belleghem

Technology is increasingly moving into the realm of human rights: the right to privacy, to happiness, to equality and to be treated in a trustworthy manner. And this has a huge impact on our personal wellbeing and the future of society. We really need to think about these things because, beyond a lot of think tanks and some clumsy government legislation (such as GDPR), not much is happening. That’s why I want to present you with five ethical questions that will be crucial for human existence in the coming years:

1. How much power should tech companies be allowed to have?

Every new technological revolution – from the first industrial revolution and the age of steam to the current information age – produces giant pockets of power. History shows we have always had trouble finding a balance between powerful companies, consumers and the governments that should be protecting the interests of consumers.

In 1911, Standard Oil was ruled an illegal monopoly by the U.S. Supreme Court and was dissolved into 34 smaller companies. Companies of that era may not have had the information about consumers that tech companies hold today, but even back then they had their own impact on elections, the economy and the environment.

But today this is not just a legal or economic matter. Allowing these types of monopolies the power to act as they want and to control so much data also has a huge impact on the endgame of customer experience. Where do we draw the line on their power? Can they convince us what to buy and how to dress? Does that then also mean we allow them to influence how we vote?

Where we draw the line is a question that gets harder to answer with each passing year. The inability of Twitter and Facebook to control the disinformation around the elections showed that even they cannot control their own power, so how are our slow-moving governments going to protect consumer rights?

Perhaps more importantly, who will draw that line? Will it be governments? Will it be self-regulating systems within the tech companies? Will it be full algorithmic transparency? Or will it be consumers who, through technologies like blockchain, claim back power over their own data? Only time will tell.

2. Who can use our data?

There is an old saying that ‘there is no such thing as a free lunch’, and when we use ‘free’ services today, we pay with our data. Recently, though, people no longer seem satisfied with this trade-off.

Facial recognition, for example, could be the root of fantastic customer experiences. But it could also allow a medical insurer to detect signs of high blood pressure or certain genetic diseases and demand higher fees. Understandable from the insurer’s side, but definitely less great for us – so is that something we want?

And what if non-democratic governments get hold of our data? China, for instance, has invested heavily in facial recognition, and the customer experience that companies there are offering is mind-blowing. But what happens when the government decides to use that data to eradicate dissent?

This ethical discussion goes to another level with effective brain-computer interfaces (BCIs) like Elon Musk’s Neuralink project. A BCI could help people walk or see again; it could make us smarter or less clumsy. But that also means it could make us more aggressive or increase our craving for sugar or alcohol. And who will own the data that flows between the BCI and our brain?

3. Who can control what we feel?

I am a true optimist, and I love technology that offers us convenience so we have more time to spend with our family and friends. But we are gradually moving into an era in which technology is not just tracking our behaviour but also zooming in on our emotions.

Technology is already affecting our emotions, of course. Social media, for instance, is well known to trigger teenage anxiety, depression, self-harm and even suicide, and this has everything to do with the addictive design purposefully created to make us ‘hooked’. It is clever, no doubt, but should we be allowed to build something that is addictive?

But that was ‘just’ one step into tech impacting our emotions. Amazon recently unveiled Halo, a competitor to the Apple Watch and Fitbit, which goes beyond tracking health to analyse the user’s voice and present a picture of how they feel. Microsoft, too, plans to embed a series of ‘wellness’ tools in Teams to address mental health problems. Potentially, these are both fantastic developments, but ‘using’ the emotions of consumers is a thin ethical line to walk. According to Gartner, by 2024, AI identification of emotions will influence more than half of the online advertisements you see. Is this ethical?

4. Do we trust the algorithms to make decisions that are good for us?

So if brands can monitor our behaviour and our emotions, they should be able to make good decisions. But do we trust them to make the right ones?

The laws of trust are fairly simple: most of us are quick to offer it, but we will take it away just as quickly. If Booking.com keeps suggesting hotels that I like, I will trust it quite blindly. If it doesn’t, I will look elsewhere. Trust is no longer an issue of digital versus human, but of delivering what you promise.

The next step is automated buying, where this trust will become increasingly important. Our fridge will decide whether we need more milk. But what about wine? It might notice that the wine bottles keep getting emptied faster and faster. Should it anticipate that and buy more of them? Or fewer? What is ethical and trustworthy behaviour?

The challenge is that what is ‘good’ for us in the short term – buying chocolate because we feel sad – might not be the best decision in the long term. Automated buying will free up so much time for us, but it also potentially has a very dark side which we really ought to think about.

5. How do we stop algorithms from increasing inequality?

There is still a big inequality problem in the world. Algorithms can only judge the data they are fed, and since the majority of people working in tech are WEIRD – Western, Educated, Industrialized, Rich and Democratic – a lot of the data they use is WEIRD too, and thus exceedingly biased. Facial recognition, for example, works best on white male faces, because that is what the systems are mostly fed.

The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm was developed to predict the likelihood of a criminal reoffending. It systematically rated black defendants as higher risk than white defendants: black defendants who did not go on to reoffend were almost twice as likely to be misclassified as higher risk (45%) as their white counterparts (23%).
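To make that misclassification arithmetic concrete, here is a minimal, hypothetical sketch in Python – using made-up counts chosen to mirror the reported 45% versus 23% gap, not the actual COMPAS data – of how such a disparity is measured: the false positive rate, i.e. the share of defendants who did not reoffend but were still labelled high risk, computed separately for each group.

```python
# Illustrative sketch with synthetic counts (NOT the real COMPAS data).
# A "false positive" here is a defendant who did NOT reoffend
# but was still labelled high risk by the algorithm.

def false_positive_rate(wrongly_high_risk: int, non_reoffenders: int) -> float:
    """Share of non-reoffenders who were wrongly labelled high risk."""
    return wrongly_high_risk / non_reoffenders

# Hypothetical counts chosen to mirror the reported 45% vs 23% gap.
groups = {
    "black defendants": {"non_reoffenders": 1000, "wrongly_high_risk": 450},
    "white defendants": {"non_reoffenders": 1000, "wrongly_high_risk": 230},
}

for group, counts in groups.items():
    fpr = false_positive_rate(counts["wrongly_high_risk"],
                              counts["non_reoffenders"])
    print(f"{group}: {fpr:.0%} misclassified as high risk")

# Output:
# black defendants: 45% misclassified as high risk
# white defendants: 23% misclassified as high risk
```

The point of the comparison is that an algorithm can look accurate overall while its errors fall very unevenly across groups – which is exactly why per-group error rates, not just headline accuracy, belong in any fairness audit.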

Technology has always had a tendency to magnify existing trends. And so, when it comes to the ‘broken’ parts of our society – the parts that lie at the roots of sexism, racism, ageism and many other biases – it unsurprisingly follows the same dynamic, because that is what it finds in our data. This should be high on the agenda of every company investing in customer analytics: how can we make sure that our AI systems do not further boost existing inequality?

Steven van Belleghem is a speaker and author on the topic of customer engagement. His new book, The Offer You Can’t Refuse, is out now
