Welcome to New World Same Humans, a newsletter on trends, technology, and our shared future by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
A few days ago Sam Altman, the CEO of OpenAI, put on a suit and went to Capitol Hill.
Members of the US Senate had gathered to hear him talk about the budding AI revolution. On first hearing, Altman’s message was stark. ‘AI can go quite wrong’, he explained, and politicians should do something about it.
Fears of that kind are widely shared. According to a poll conducted this week by Ipsos and Reuters, a full 61% of US citizens say that ‘AI threatens the future of humanity’. Only 22% disagree.
That finding comes against a backdrop of even deeper and more widespread concern. Ipsos recently published their Global Trends 2023 report, a wide-ranging look at the big forces and currents of public opinion shaping our world. Their report showed that here in the UK, agreement with the statement ‘I fear technological progress is ruining our lives’ is now in steep ascent. Between 2020 and 2022 the proportion of people in agreement rose ten percentage points; that’s against a four-point rise across the three decades from 1990 to 2020.
*
What to make of all this?
This week’s Senate hearing — and the now mainstream sentiment that AI and other emerging technologies pose a grave collective threat — are reminders that the technology revolution is the primary force shaping our shared future. Another way to say this is that our relationship with technology is now the primary political question.
A few weeks ago I argued that the public conversation about this question is structured by an overarching dichotomy. On the one hand are those who say that we can soon, via new technologies, transcend the limits that have always governed our experience as a species. Those limits are material and economic, social, and even bodily and organic.
On the other hand are those who believe the opposite. That is, that we must accept new limits on our technological and economic activities if we are to avoid civilizational collapse, or a pseudo-transcension that will rob us of everything valuable about what we are.
Seen through the lens of these opposing ideas, the limitations of the Senate hearing on AI — its resemblance to a pantomime — become apparent.
Altman told US lawmakers that global cooperation was needed to regulate AI. He pointed to the International Atomic Energy Agency as a model.
These are reasonable ideas, and they had many of the politicians in attendance nodding in assent. But they leave unspoken a set of underlying and all-important assumptions. It’s all very well — and on the face of it sensible enough — to argue for regulation to promote good AI outcomes.
But what are good outcomes? Who are they good for? Who gets to arbitrate on those questions?
And here’s the rub. Because on these questions, people will never agree. Different people will have radically different views on what constitutes a good relationship between machine intelligence and humans.
Some will prioritise the transcension of human limits — the way AI can, as they see it, accelerate us into a new and exciting future in which we break free from traditional constraints. Those people would, for example, embrace a future of AI-fuelled material abundance, even if it means vast changes to the structure of the economy and the role of labour in our lives. Others will prioritise the conservation of recognisably human modes of life, and would prefer to suppress the impacts of AI on the economy, creative endeavours, our politics, and more.
At the outer edges of this debate we find two starkly opposed parties. On the one hand, those who believe that if we humans eventually merge with AI and become a post-human superintelligence then this will represent ultimate and nirvanic transcension. On the other, those who view that eventuality as the total destruction of what is most valuable in human life.
*
There is no single right answer to the dilemmas I’ve outlined above.
The dichotomy I’m talking about has its origins in two eternal and equally legitimate sides of our shared nature, and as such is ultimately irresolvable. We humans are both infinite, information-processing mind-bearers, and finite, embodied creatures bounded by our own biological and historical selves. We are all left to navigate that uniquely human tension, and many legitimate accommodations with it are possible.
What’s urgently needed, then — and what was entirely absent from this week’s Senate hearing — is a new acknowledgement of human plurality and its implications. That is, of the many forms of human flourishing that are possible, including the many different but equally legitimate possible relationships with AI and other emerging technologies.
At the moment, the baseline assumption that governs our shared conversation on technology is that we must all be herded towards one destination: The Single and Unifying Right Way of Life for All of Us. Broadly speaking, that single destination is one of maximum technology, with a few globally agreed guardrails to ensure we aren’t all transformed into paperclips. But the assumption that one solution can ever be best for all contravenes so much that we know about ourselves and our history. It is incoherent in principle and oppressive in practice.
Instead, we should start with the opposite assumption: that different people will hold radically different and often mutually incompatible views when it comes to the proper relationship between humans and the technologies now emerging. We should ask, then: how can we empower these different groups to enact the relationship that they want? Instead of a single and global IAEA-like agency handing down the One Answer, how can a million and more local and context-specific answers bloom?
If we are meaningfully to address the challenges that AI and other emerging technologies pose to our collective lives, this is the way forward. That is, to find a way for different groups, with wildly different approaches to the technology revolution, to live alongside one another. Sure, there needs to be a maximum technology destination for those who want it. But there also need to be exit routes for those who want to situate themselves at some distance from everything that is coming.
All this is, I realise, easy to say. It’s a vast undertaking to create that kind of political settlement; the work of decades, probably centuries. But that process can start now, if each of us asks: what relationship do I want? How do I want to live in a world of machine intelligence, virtual realities, technologies of genetic manipulation, and more? In this way communities such as the Amish, or subcultures such as the transhumanists, set an example that we must all now follow: they already live their day-to-day lives in an explicitly mindful relationship with technology.
The rest of us have tended to simply take what we’re given. And then wonder, after the fact, if it’s really made our lives any better. Now, the stakes are rising. We’ll be called upon to do better. We cannot ask only: when can I get this? And how much will it cost? We must also ask: does this technology really take me closer to the kind of life I want? We need to design our relationship with AI and other technologies, and then find ways to put that design into practice, rather than have that relationship handed to us wholesale.
Despite talk of a pause, machine intelligence isn’t going away. The inexhaustible engine that is techno-modernity is only set to accelerate. We are all on this rocket. It should fall to each of us to decide whether we want to ride it all the way to the end.
I found political echoes here in Brazilian philosopher Roberto Unger’s argument, made earlier this year, that globalist approaches create single, tyrannical points of failure, and that we would be wiser to embrace plurality, the lesser evil of nation-states and diversity in regional governance:
https://www.noemamag.com/how-to-govern-the-world-without-world-government/