Welcome to New World Same Humans, a newsletter on trends, technology, and our shared future by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
At the heart of this newsletter is an attempt to understand technological modernity and our relationship with it.
In this essay I attempt to draw together much of the thinking I’ve done across the last three years, and to summarise my overarching position.
Given that, this can be thought of as a totemic New World Same Humans instalment. I hope it proves valuable, and there will be more along these lines in the coming months.
Searching for the Exit Routes
The co-founder of OpenAI, Greg Brockman, spoke at TED last week.
In a post-talk question and answer session, TED founder Chris Anderson asked him if it was reckless of OpenAI to allow public access to its large language models, including GPT-4. We lack, said Anderson, a full understanding of their emergent behaviours. ‘Isn’t there,’ he asked, ‘a possibility of something terrible happening?’
Brockman had an answer. OpenAI had decided that the best way to proceed was to release their models incrementally, and gather as much feedback as possible. Only via that feedback could they ensure that these technologies ‘benefit all humanity’.
Two stark and mutually opposing futures, then, were thrown into the air. One in which AI is the best thing that’s ever happened to us. Another in which it’s the worst.
*
These days, strange new futures are being cast in front of us faster than we can process them.
It’s hard to make sense of what’s happening. But our collective experience of this moment — and our attempts to understand where it’s heading — are being shaped by a single, overarching framework.
That framework is about the relationship between technology and human limits. And it manifests itself as an argument between two parties.
Today, there are people who believe that via new technologies we’ll soon transcend, in some definitive way, the limits that have always governed our experience as a species. Those limits are bodily and organic, material, social, and planetary.
Under this view, technology will soon lead us to infinite free energy and a world of endless material abundance. It will allow us to create new and decentralised social forms that liberate us from all power relations. To merge with the intelligence we’re building, and become all-knowing immortals. To build new settlements among the stars.
See OpenAI founder Sam Altman’s Moore’s Law for Everything. See the transhumanist movement. See, of course, Elon Musk.
On the other hand, there are those who believe the opposite. That is, that we must accept new limits on our technological and economic activities if we are to avoid either imminent civilizational collapse, or a pseudo-transcension that will rob us of everything valuable about what we are.
The degrowth movement, which argues for planned economic curtailment in the Global North, is one example of this position. The doomsday warnings of the AI alignment expert Eliezer Yudkowsky are another.
Are we about to fly higher than ever before, or fall to Earth with a thud? This question — the dichotomy between no limits and new limits, between transcension and collapse — underlies much of the conversation about technology and our shared future. Which, it’s increasingly clear, is the conversation.
*
This dichotomy is not new. Modernity was always a project in the transcension of human limits, and as such we moderns have always been troubled by questions about the possibility, and wisdom, of seeking to escape them.
The difference now is that those questions are arriving at a terminal station. Processes of technological modernity — and especially the rapid onset of machine intelligence — are reaching an intensity that seems to transform what is foundational about us, including even the coherence and integrity of the human person.
This leaves us asking: what happens next?
Seen through the lens of the new limits/no limits argument, that question becomes: who is right? Those who say we should lean into the technological project to transcend human limits, and become something new? Or those who call on us to impose new limits in order to avoid collapse?
For a long time, I’ve been troubled by my inability to pick a side. I can find myself drawn, for example, to the writings of the English ‘recovering environmentalist’ and lately Orthodox Christian Paul Kingsnorth, who in a long series of essays across the last three years has anatomised what he calls the Machine: a global techno-capitalist system that, in his formulation, is eating everything valuable about human life. On the other hand, I often watch what is happening now — AI, genomics, blockchains, space travel — with awe and wonder.
At its most acute, the feeling is of oscillating between these two positions almost by the day. And I suspect that many readers of this newsletter can relate. This disorientating to-and-fro is now a feature of the culture we live in, visible in the questions we ask ourselves about the practice of our technologies. Should we pursue nuclear fusion and near-limitless clean energy, or seek instead to reduce our energy use? Are new techniques of genetic manipulation a gift to the life sciences, or an example of dangerous hubris? Is a direct interface between the human brain and the internet a wonderful advance, or a step towards the unhuman?
Each of those issues contains a world of its own, with its own particulars. For a long time, though, I’ve been afflicted by my inability to arbitrate on the general question. Should we speed up (no limits) or slow down (new limits)? Should we seek to transcend the boundaries that have always shaped us, or must we act now to ensure we do not trespass beyond them?
Given I spend my life thinking about technology and its meaning in our lives, shouldn’t I have an answer? Isn’t my inability to come down on a side, here, a personal failing?
*
These questions have haunted me for a long time. Recently, I’ve come to see them in a new light.
I’ve come to believe that my inability to pick a side — and the broader cultural oscillation between these two positions — is a symptom of a deeper truth. That is, that the tension between transcension and limits can never be definitively resolved, because it is a manifestation of two eternal and opposing parts of our shared nature.
There is the part in each of us that is rational and language bearing, with an infinite power to combine and recombine ideas. Then there is the part that is organic and finite; a creature bound to our bodily selves and local environment.
We humans are the infinite shackled to the finite and embodied. This truth, and the tension to which it gives rise, is uniquely ours. An AI is a kind of infinite information processor. An ape is an embodied creature destined to live within the bounds of its given self. We alone — alone, that is, as far as we know — are both. Perhaps this tension is, in the end, the best definition we can offer of the human.
This realisation can transform the way we relate to the argument on limits, and offer us new ways forward.
In the face of this eternal and inescapable tension between transcension and limits — between the infinite and finite part of ourselves — we must realise that many different accommodations are possible. They give rise to modes of life that are opposing but equally legitimate. Should we build an AGI? Should we seek to settle on Mars? Should we intervene in our own genetic makeup? No definitive answers to these questions are possible, because each of them taps into an ultimately irresolvable tension between two equally valid but mutually opposing aspects of what it is to be a human being.
What’s needed, then, is not a quest for final answers — they do not exist. Instead, we must search for new ideas, values, and modes of life that allow us better to negotiate this eternal conflict and its endless manifestations.
In particular, we need new accommodations between groups of people who take different views on these questions.
There is a political philosophy designed to arbitrate between groups with different — sometimes wildly different — belief systems and ways of life. We called it liberalism. Seen this way, what’s needed now is a revived and fully 21st-century liberalism; one focused not on religious differences or on the traditional conservative vs progressive political framework — which is now exhausted — but on the definitive question of our age, which is our proper relationship with the technology revolution.
What would such a dispensation look like in practice?
It would mean cities able to support both high and low-tech ways of life, including lifestyles that allow citizens to opt out of the global system of consumerism and go back to the land. Think a new wave of urban farms built in the shadows of gleaming skyscrapers.
It would mean ensuring that access to public services is not dependent on a willingness to adopt technologies: computers, phones, VR headsets, or anything else. But equally, it would mean preparing for, and ultimately building diplomatic relationships with, the new networked and decentralised states that will emerge in the age of the blockchain.
It would mean a new economy that makes space for the many kinds of human activity we need, including stewardship of the planet and caring for one another. A universal basic income could help bring about such an economy.
This is just a start; the list is long. But the broad thrust, I hope, is clear: this version of technological modernity is too much a one-way conveyor belt, sweeping everyone towards a single destination that we can label Maximum Technological Disruption. Many people — probably most, in the end — will not want to ride that conveyor belt all the way to its destination. They have good reasons for feeling that way.
We must build a multi-layered, multi-speed modernity that accommodates many forms of life, not just one. We need exit routes out of the technosphere for those who want them.
*
For my part, thinking all this through has changed the way I relate to my oscillation between no limits and new limits thinking.
A deep scepticism about technology is mainstream these days; even fashionable. I share some of that scepticism when it comes to where we’re heading, and the people who are leading us there. But I’ve come to believe that those who say we should abandon all dreams of technology-fuelled transcension — who say those dreams are only hubris — are being just as unresponsive to the truth of what it is to be human as the naïve techno-utopians they criticise. Our quest to be more, to exceed ourselves, to be infinite, is a fundamental part of who we are. We cannot erase it; it’s futile to try. And at its best this quest is as inspiring, as beautiful, as anything we find in nature. Few are making an enlightened and humane pro-technology case these days. But there is one to be made.
If we are to build new accommodations between the finite and infinite part of ourselves, though, then in the end we’ll need more than practical exit routes out of modernity. What we must make, really, is a new account of ourselves and our ultimate relationship with the world.
For those determined to ride technological transcension all the way, this means an answer to the question: for what? What is it that you are transcending towards, and why is it valuable? Without an answer to this question the project becomes one of empty expansion; of instrumental power for its own sake. Meanwhile, those whose focus is on the conservation of existing human modes of life should have an answer to the question: what, after all, is the human? What is the essence of what we’re seeking to protect?
For most of our shared history we had compelling answers to these questions. They came in the form of religious beliefs, which supplied us with an account of ourselves and our relationship with the cosmos.
Modernity, and especially the scientific world view, stripped us of any such account. It left us without a map of ultimate value; without any way of finding true north. No wonder, then, that it currently feels impossible to chart a meaningful course through the technological changes unfolding now. If we are ever to chart that course, what’s needed above all is spiritual revolution.