
The investor's dilemma: do sustainable funds need a digital detox?

Netflix documentary The Social Dilemma makes for uncomfortable viewing for users of social media channels such as Facebook, Instagram, Snapchat, YouTube or Twitter – i.e. pretty much all of us. And as a sustainability specialist who invests (at the time of writing) in Alphabet, the parent company of Google and YouTube, I find the issues covered especially troubling.

What’s The Social Dilemma about?

The documentary consists mostly of interviews with “witnesses”, tech industry insiders who have grown increasingly uneasy over the impact of their creations, interspersed with the story of a fictional family battling the ills of social media.

As we are all well aware by now – and if you aren’t, go watch the documentary – these tech companies make most of their money by selling advertising space to other companies. As they say: “if it’s free, you are the product”. Their services are free to use, but they are not philanthropic endeavours and need to make money.

Unless you take issue with the basic principle of capitalism, the debate is over how these companies make their money and the unintended consequences of them getting very, very good at it.

Why might sustainable portfolios need a digital detox? I see two main issues: addiction and polarization.

My concerns: addiction and polarization

1. Opiate of the masses: addiction

The platforms increase their value to advertisers by maximising user engagement: the more time you spend on their sites, the more adverts they can place. And the more they learn about you, the more they’re able to refine their targeting further and earn more.

As the insiders explain, the services have been designed to create a “dopamine-driven feedback loop”. Dopamine, the “feel good” chemical linked to addictive behaviour, is released through positive social interactions and validation from our peers. Social media delivers these social stimuli like a slot machine: irregularly timed rewards in the form of likes and other notifications.

The aim is to stop you from putting down your device, and it works remarkably well. Many of us are literally addicted to our tech. One study in the US suggested the average person touches their phone more than 2,600 times per day.

This compulsive use is affecting the quality of our sleep and has even been linked to a higher risk of road traffic accidents. The pressure for validation, and the unrealistic lives displayed on social media, have also been blamed for a rising incidence of depression, eating disorders and suicide – especially among young people.

2. Rules of engagement: polarization

This leads to the second problem.

Engagement will be higher if the platform serves you content that you will find, er, engaging: this could be stuff you’ve previously shown an interest in or that “people like you” click on.

This is naturally going to be content that chimes with your existing view of the world, and the algorithms work so well that we are less likely to see content that opposes our existing view of the world. This isn’t malicious: it’s designed to provide the results that are most relevant to you. But it can lead to confirmation bias.

We all tend to seek out sources and company that agree with the views we already hold. People of different political persuasions will buy different newspapers or watch different TV channels, where the columnists and anchors represent their views. Technology has ramped this up to the nth degree. This matters for society.

There have been numerous studies suggesting that political polarization has intensified since the 1990s, especially in the US.

Social media isn’t the only reason but it’s plausibly a significant contributor. Heading into the US election, Pew Research found that 90% of registered voters on both sides said a victory by the opposing candidate would lead to “lasting harm” to the US.

Democracy relies on the premise that everyone’s voice carries equal weight, but when political adversaries become mortal enemies this breaks down.

Stretching these concerns to their logical conclusion, you could argue that tech firms are to blame for hate speech and even political violence. I am inclined to think that is an overly simplistic view: the complexities of geopolitical and social schisms cannot be pinned on a single factor.

But what is certainly true is that social media has provided a platform for damaging content and enabled, and potentially encouraged, it to spread further and wider than previously possible.

Can tech companies still be a force for good?

There are two sides to this story and it’s easy to get caught up in the melodrama of tech-bashing.

If you could wave a wand and rid the world of big tech, would you do it? There’s no way I would.

Especially in the last 12 months, life without the services of these companies would have been almost unimaginable, and certainly poorer.