New World Same Humans #23
Should you let an all-knowing algorithm vote on your behalf? No, you shouldn't.
Welcome to New World Same Humans, a weekly newsletter on trends, technology, and society by David Mattin.
If you’re reading this and you haven’t yet subscribed, then join 9,000+ curious souls on a journey to build a better shared future 🚀🔮
🎧 If you’d prefer to listen to this week’s instalment, go here for the audio version of New World Same Humans #23. 🎧
Power is a necessary part of our collective lives. Most people agree that without political power of some form, there would be chaos. And apart from the anarchists, we all agree that this would be a bad thing.
But political power – who has it, the way it works, and what we think about it – is changing.
This week’s newsletter is all about a crucial aspect of that change. It’s an essay that draws on a range of ideas I’ve been developing for a while: on the nature of politics, the nature of our politics in 2020, and an emerging crisis in our understanding of political representation, among others.
This essay is a great introduction to some emerging trends in power and politics that I think are important for an understanding of our shared future.
But the essay is also long. So this is a great time to remind you that these days I’m recording every Sunday newsletter as a podcast. If you don’t want to read, then listen! Just see the link above.
No snippets in this instalment. Instead they’ll be included in a short note that I’ll send in the middle of this week.
Designs for life
This week brought reminders of an emerging and increasingly important form of power. I mean algorithmic power.
The New York Times ran a long piece about Robert Julian-Borchak Williams, a 42-year-old African-American who, says the paper, is the first person to be arrested for a crime he didn’t commit after being misidentified by a facial recognition algorithm.
Another story this week helped put that one in context. In a paper submitted in May to the publisher Springer Nature, researchers from Harrisburg University claimed to have created a facial recognition algorithm that could ‘predict if someone is a criminal based solely on a picture of their face.’ More than 1,000 machine learning specialists, historians and ethicists published an open letter pointing out that the research traded on racist stereotypes, and recalled the abhorrent and long-debunked ‘race science’ of a previous age. Springer Nature say they won’t be publishing.
Meanwhile, the newest social media titan on the block, TikTok, published a limited explanation of its For You algorithm, which is intended to serve users a curated feed of content that matches their interests. For TikTokers the For You feed is a promised land that guarantees a huge audience. The company has faced criticism that its algorithm is biased against black and disabled people, as well as people deemed overweight or unattractive. If you don’t use TikTok this story might seem trivial compared to the first two. Just remember this algorithm decides what 800 million active users get to watch. As with the Facebook algorithm, it now helps shape the global collective consciousness.
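TikTok hasn’t revealed its real features or weights, but the broad shape of a recommendation feed like For You is easy to picture: score each candidate video against a profile of the user’s past engagement, then serve the highest scorers. Here’s a deliberately toy sketch in Python – every signal, weight and name below is invented for illustration, not TikTok’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    sound: str
    hashtags: set

def score(video, liked_hashtags, liked_sounds):
    """Toy relevance score: weighted match between a video's attributes
    and the user's past engagement. Weights here are arbitrary."""
    hashtag_score = sum(liked_hashtags.get(h, 0.0) for h in video.hashtags)
    sound_score = liked_sounds.get(video.sound, 0.0)
    return 0.7 * hashtag_score + 0.3 * sound_score

def for_you_feed(videos, liked_hashtags, liked_sounds, k=3):
    """Rank all candidate videos by score and return the top k."""
    ranked = sorted(videos,
                    key=lambda v: score(v, liked_hashtags, liked_sounds),
                    reverse=True)
    return ranked[:k]
```

The point isn’t the arithmetic; it’s that a handful of behavioural signals, weighted and summed at scale, is enough to decide what hundreds of millions of people see next.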
All this, and especially the third story, reminds me of the big idea currently in circulation when it comes to algorithmic power: the idea that ‘algorithms will soon know us better than we know ourselves.’ Its most famous proponent is the superstar historian and techno-futurist Yuval Harari, and via his writings it has become a common vision of the tech-fuelled dystopia that many fear awaits us.
It’s a powerful idea, and there are good reasons for the traction it has found.
He knows me so well
This much we all know: Big Tech funnels our data through algorithms, and creates models of each of us that are used to predict our preferences and future behaviours. As Harari rightly points out, the existence of these models and the likelihood that they will become more sophisticated in future seems to undermine foundational aspects of the societies in which we live.
Our liberal democratic, consumer system is founded on an understanding of human beings as rational choosers, who make decisions – to buy a product, say, or vote for a party – based on an analysis of their own interests. But if people come to believe that algorithms understand their own preferences better than they do, this system starts to fall apart. People, says Harari, will ask: why don’t I let algorithms do my deciding for me? Harari extends this process to its logical conclusion with a chilling question about democracy. That is: in a world in which The Algorithm knows me better than I know myself, why bother to vote at all? What would voting even mean anymore, in such a world? The algorithm knows my self-interest better than I do! Surely I should just let it vote for a leader on my behalf? Or, even better, let the algorithm be our leader?
It’s a dark and compelling question. And Harari says it may be a question with no workable answer; this challenge, he thinks, might be fatal for liberal democracy. But I think there is an answer. Indeed, I think the traction this question has gained – I mean, the fact that it appears to us so compelling – exposes exactly the way in which our system is breaking down, and what we need to do about it.
So first, my answer. In a world in which algorithms know you better than you know yourself, why would you bother to vote?
To answer that question, we need briefly to establish what we mean when we say that ‘algorithms will know you better than you know yourself.’ What we really mean is these algorithms will know your likes, dislikes, interests, biases, purchasing choices and other online behaviours. And they will use all that to build a model of your personality that is extremely successful when it comes to predicting future preferences and behaviours.
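To see how thin that kind of ‘knowledge’ really is, it helps to sketch it. A preference model can be nothing more than a tally of your past behaviour, with the prediction being ‘more of the same’. This toy Python version is my own illustration – real systems are vastly more sophisticated, but the underlying move is the same:

```python
from collections import Counter

class PreferenceModel:
    """A deliberately crude 'knows-you' model: the profile is just a
    frequency count of past behaviour, and the prediction is whatever
    you did most often. Invented here for illustration only."""

    def __init__(self):
        self.profile = Counter()

    def observe(self, category):
        """Record one behavioural signal: a click, like, or purchase."""
        self.profile[category] += 1

    def predict(self):
        """Guess the category of the user's next choice."""
        return self.profile.most_common(1)[0][0]

model = PreferenceModel()
for event in ["politics", "sport", "politics", "tech", "politics"]:
    model.observe(event)
print(model.predict())  # the most frequent past behaviour wins
```

Notice what the model captures and what it can’t: it extrapolates from what you have done, but it has no representation at all of what you believe you ought to do. That gap is where the rest of my answer lives.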
So in that world, why not let an algorithm vote for you? The answer is: because you’re aware that you have a vision of the collective good that is more than simply an aggregation of the preferences and behaviours you express in daily life. More even than an aggregation of everyone’s preferences and behaviours as understood in any simple way.
An authentic political vision must in the end be informed by a set of ethical principles – about the nature and purpose of human beings, and the nature of the good life – that cannot be instantiated entirely in our day-to-day preferences or choices. Such a vision transcends those things. Indeed, pursuing that vision may require us to do things that actively run counter to our ‘preferences’ as we typically express them in ordinary life. It might mean doing things that are hard, unpleasant, or even dangerous. This kind of ethical vision can only be offered, or understood, by a person. An algorithm, even one that knows the videos you watch and the products you buy, can’t handle it.
Hold that in mind, and the reason Harari’s question seems such a challenge becomes clear. It’s because today we lack the kind of visions just outlined; that is, the kind of visions of the collective good that must fuel any meaningful politics. For three decades now there has been little sense of our politics offering competing visions for our shared future. Indeed, for the most part our politics feels so far from doing anything of the sort that we’ve almost forgotten that this is what it’s for. And in that environment, sure, it’s easy to believe that one day an algorithm could vote on my behalf. Or even run the country.
Why has this happened? It is, as the saying goes, complicated. But I think we can break out a couple of related reasons.
The artificial man
First is the presiding mode of political thought of the last 30 years, which has been a neoliberal mode.
Neoliberalism is explicitly hostile to overarching narratives of the collective good. If we want to solve our problems, it says, we should get politics out of our way and instead let markets do their work. This is politics as technocratic management, and it has been the dominant prescription in the west since the early 1980s. Neither progressives nor pre-neoliberal conservatives have been able to counter it with a compelling alternative vision – as an idea for governance, or a vision of our shared future.
One objection to that argument runs as follows: ‘that might have been true before 2016, but look at what’s happened since then. Look at Trump, and Brexit! Politics is back!’ It’s true that Trump is no neoliberal technocrat. So yes, his Presidency does mark a new chapter in our politics. But it’s one that points to the continued absence of any compelling alternative to the neoliberal vision. That absence created a space into which Trump was able to pour his own concoction; crucially, not a political vision, but the hollow anti-vision of the populist.
Meanwhile, and relatedly, a culture of hyper-individualism has helped bring us to a place where politics is seen not as the arena of collective vision, but of individual preferences. The history of this shift is long, and runs through the advent of mass media, the rise of the focus group in the 1990s, and, of course, the emergence of the internet. But the result is that citizens and politicians alike increasingly think of voters as customers to be pleased, rather than citizens to be governed by their representatives.
Indeed, in a connected age fuelled by an assertive equality, the very idea of political representation has started to break apart. The overarching attitude towards elected representatives these days is often some version of: ‘who are you to set rules for me?’, despite the fact that this is what we elected them to do. Thomas Hobbes imagined the state as an ‘artificial man’ created by the people and embodied by the representatives – the King or members of parliament – who hold sovereignty. But in 2020 new artificial people stalk the land: the networks that are Facebook, Twitter and Instagram, where billions represent themselves and together exert new and often more gratifying forms of power. These networks are our new Leviathans, and they are starting to compete with national governments.
A new design
To recap: the erasure of ideology, politics as preferences, and the crisis of political representation. These are the conditions that make Harari’s question about algorithms seem a devastating challenge to the foundations of liberal democracy. Because when the political realm is divested of any ethical vision of human collective life, when it becomes about little more than the generalisation of my preferences, and when I no longer believe in the legitimacy of elected representatives, then it becomes easy for me to think that an algorithm could – or even should – one day do my politics for me.
Except it wouldn’t really be doing politics at all, but only the hollow version of politics that we’ve made.
The answer to Harari’s challenge, then, is clear. If we want people to vote in a world in which ‘algorithms know them better than they know themselves’, then we must create a new politics. If we want to stop political power becoming the possession of a set of Algorithmic Overlords, we need new and compelling narratives of our future, based on ethical visions of what human beings are and how they should live together. Such visions, founded as they must be in our moral intuition, are ones that only a human can offer up or comprehend. Faced with competing visions of this kind, people will intuitively understand that the algorithms that surveil their daily lives cannot be trusted to choose between them. Only they, the people themselves, can do that.
We must create a politics, in other words, worthy of the name. Of course, that realisation is the easy part. The hard part is doing the work.
Some think the Green New Deal contains the seeds of a new and compelling vision of our collective life. Others look to new forms of conservatism that blend environmentalism, intervention in markets, and a focus on the local over the global. Wherever you think it might lie, it’s clear the new visions we need are unlikely to come via a narrow adherence to the traditional ‘conservative vs progressive’ polarity that still structures our political thinking. Instead, we should look to new syntheses that contain ideas drawn from each of those poles.
Building these new visions is the work of a generation. And understanding that journey is a huge part of what New World Same Humans is about. Once we’re able to come together and talk as a community – and yes, I promise the Slack group is coming! – that project will advance in entirely new ways, fuelled by you.
Then we can all share our own ideas on the purposes and meanings of human life. We can talk about the collective challenges we face, how to overcome them, and how we should live together. What’s more, we can empower one another to not only talk, but act: to play our own role, whatever that might be, in building the better shared future that we know is possible. It’s our belief in that possibility, after all, that drew us to this community in the first place.
New World Leviathan
The iconic illustration that accompanied Thomas Hobbes’s Leviathan is one of my favourite ever images. It so perfectly captures the powerful, frightening idea that stands at the heart of the book: that sovereign power comes into being when a multitude of people hand their natural power over themselves to a single individual.
Just like Hobbes’s Leviathan, New World Same Humans is a single entity made up of many people. Luckily, that’s where the similarity ends; no sovereign power is involved! Instead, we’re building a decentralised community where everyone gets to represent themselves, and everyone gets to have a say.
As promised above, the Slack group that makes that promise a reality is on the way!
In the meantime, we can make our community more powerful in one important way: by growing it! So if you found today’s instalment valuable, please forward this email to one person – a friend, family member or colleague – who’d also enjoy it. Or hit the share button below and let people know why you enjoy NWSH.
The more great people who join our community, the better for all of us!
Until next week, thanks for reading,
David.