Welcome to New World Same Humans, a newsletter on trends, technology, and our shared future by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
At the start of the year I promised the return of short notes. Here’s the first: a meditation on the ChatGPT moment we’re living through right now.
To avoid claims of false advertising: this one is more of an essay than a note.
If you’d rather listen than read, just scroll up and hit play. But enough preamble; let’s get into it.
The generative AI hype train is thundering forwards right now, and ChatGPT — which was released in November — was the fuel that accelerated it to its current speed.
On the face of it, that’s a bit odd. The underlying model, GPT-3, was made public more than two years earlier. Why the big noise now?
ChatGPT uses an enhanced version of that model, and so produces better outputs. But my contention is that it’s the chat element — that is, the conversational nature of the tool — that’s responsible for ChatGPT’s colonisation of the zeitgeist. People love the back-and-forth quality of interacting with this thing.
I’m interested in this, and the reasons for it. Because it seems to me that a quest to understand ChatGPT’s seductive conversational power can help us commune with a deep but under-appreciated truth about human thought.
A truth that leads us, in turn, to some conclusions on our future relationship with machine intelligence.
In a seminal 1998 research paper, the philosophers Andy Clark and David Chalmers introduced an idea they called the extended mind thesis (EMT).
The EMT says that mind is best understood as a set of cognitive processes that extend beyond our brains and into the external world. Consider, for example, a person using a notebook and pen to help perform a series of simple calculations. The notebook and pen are, say Clark and Chalmers, just as much a part of the cognitive processes at work here as the person’s brain. The notebook, for example, is acting as a kind of external memory bank.
It’s arbitrary then, according to the EMT, to say that mind is happening in the brain but not in the notebook; instead, the brain, pen, and notebook are part of one big cognitive system, and we can best understand that system as mind.
It was an arresting argument, and it’s proven an influential one. What’s more, 25 years on we citizens of the internet have been delivered into a relationship with technology that makes tangible the strengths of this idea.
I’m talking, here, about our relationships with our phones.
I tend to do my deepest thinking when I’m out for a walk. Often, I’ll reach for some half-remembered fact, person, or quote that I need to continue my train of thought, find that I can’t recall it, and then go to my phone to look it up. My phone, here, or perhaps more properly the internet itself, is acting as a kind of extension of my own memory — one containing pretty much all the knowledge in human history that can be encoded as words or pictures. And the whole process is so seamless — think, encounter block, look it up, keep thinking — that the phone really does feel a natural extension of my mind. When I forget my phone, the feeling is one of my thought process being constantly interrupted. At its most acute it feels as though a part of me is missing.
ChatGPT offers users the same kind of feeling. The feeling, that is, of having your mind extended beyond the confines of your skull. It’s perhaps the first technology since the iPhone to offer that experience in a compelling new way. That truth, surely, has helped drive the excitement over the last three months.
But the current ChatGPT moment is not driven only by the feeling that the tool allows for mind extension. There’s also the feeling that the mind extension happening is a sudden and dramatic evolution of anything we’ve experienced before via notebook, calculator, or the phone as portal to the internet. There’s a widespread feeling out there that ChatGPT is an early signal of a revolution of era-defining consequence — even though, in truth, we haven’t yet seen the use cases, or the impact on the economy, to justify that belief.
Why is this? Why does ChatGPT feel like such a big deal?
The answer I’m fermenting: it’s because ChatGPT taps into, in a way even the phone does not, a deep truth about human thought. That is, its fundamentally dialogic, or conversational, nature.
The idea that underpins this is simple: it’s that when we think, we talk to ourselves. What you call your ‘internal monologue’ is really a dialogue conducted by one person. Someone is talking (internally, not aloud) and someone is listening and will then reply, and those people are both you.
The idea that human thought is fundamentally dialogic has a long history, which passes through the 20th-century Russian philosopher and literary critic Mikhail Bakhtin.
Bakhtin said that language is primordially a social instrument: a process that evolved out of games of call and response conducted by two or more parties. And because language is the substrate that makes symbolic meaning and the higher forms of thought possible, that means thought, too, is fundamentally dialogic in nature.
For us moderns this is a revolutionary idea. We tend to believe that thought, in its purest sense, is something that happens inside the mind of a single individual.
Bakhtin, and others since who’ve played with the idea of dialogic thought, invert this belief. They say that thought in its purest sense happens not inside the mind of one person but between groups of people; that is, between collections of minds. Under this view the extended mind thesis applies not only to the way individual minds can be extended by tools, but also, and primarily, to the way all our minds are necessarily extended by other minds. Indeed, under this view mind itself is best understood as a phenomenon that emerges between us, rather than inside any one of us individually.
It’s notable that the earliest works of philosophy in the western tradition seem to acknowledge the dialogic nature of thought. Socrates gathers others around him and together they engage in a process of back-and-forth reasoning that is, he tells them, the path towards enlightenment. The Socratic method taps deep into the idea that thought is primordially a social phenomenon.
Via a complex psychospiritual process entangled with the evolution of the Enlightenment self, we lost touch with that truth. Instead, we came to see thought as, foremost, an inner and private unfolding. But in losing touch with the primacy of social thought, we also lost touch with another truth. Yes, thought conducted silently by one person is private and inner; but because it relies on the dialogic tool that is language, it too carries a fundamentally dialogic nature. When we think, we talk to ourselves.
We might say that this strange ability to split the self — so that we can at once talk and listen to ourselves talk — is consciousness. That is to say, it is the state of self-awareness that only we among Earth’s creatures seem to possess in its highest form. The idea that language in some deep sense is human consciousness, that it creates the human mode of being in the world, is one I explore in depth in the ongoing essay series The Worlds to Come.
I’ve argued for the idea that thought — that consciousness itself — is in some deep sense dialogic. What does all this have to do with ChatGPT?
I hope the superficial connection is clear: in ChatGPT, we have an instrument that can externalise and amplify the internal dialogue that constitutes thought.
As we’ve seen, we’ve always had access to entities that can externalise our inner dialogue: other people. But other people are beings with their own cognitive and social agency. They have personhood. ChatGPT, by contrast, is not a person; it is a tool.
It’s this dual quality that is new and special about ChatGPT: it allows for the externalisation of the dialogic essence of my private thought, while being a tool that is best understood as an extension of me, rather than a person best understood as essentially an other.
In this way, ChatGPT offers a radically new form of mind extension. The excitement around it points to a submerged awareness among its users that this tool is more than just another useful app for summarising documents, or searching for information. We see in it, instead, the beginnings of a new way of doing thought. A way of externalising, and drawing out, an essential feature of our interior lives.
Right now, ChatGPT enacts a highly imperfect version of this promise. While the quality of its responses is a great advance on anything we’ve seen before, it’s still prone to factual errors and occasional nonsense, and responses that are not wrong but in some way off, or just bland. But all this will be improved via larger models that are better able to retrieve factual information, and cope with context and nuance. It’s the glimpse of what is ahead that has proven so exciting — even shocking.
Pretty soon, there will be a proliferation of such models. We’ll all be able to customize our own, so that it knows our tastes, preferences, and cognitive styles.
These models, trained as they are on an appreciable fraction of all the text in existence, are a strange new instantiation of our shared linguistic inheritance. It’s as though we’ve created a human hivemind and given it a voice, such that we’re now able to talk to it at will. When we think, we talk to ourselves: that truth is now manifest in a whole new way.
Eventually, having a personal large language model (LLM) — a virtual conversational companion in your pocket 24/7 — will be no more remarkable than having a phone. When that time comes, in what ways will our thinking be amplified? In what ways will the nature and modes of our thinking change? And we must also ask: how might these models, which reflect back to us our own assumptions and prejudices, limit our thinking, or act to push us away from ideas and perspectives that lie outside the mainstream?
Those questions are valuable because when we ask them, we’re approaching a more accurate, and ultimately more fruitful, relationship with machine intelligence.
Contrary to much of the hype and/or panic circulating at the moment, ChatGPT and other language models aren’t going to render higher forms of human thought or creativity obsolete. They’re not simply going to write our books for us, do our philosophy, tell us the answer. These models can’t think creatively in the commonly understood sense of that phrase, because they’re not conscious beings responding to a lived experience of the world. They are, rather, stochastic parrots playing a high-level game of word association. It’s just that when they play that game well enough, and effectively simulate a human interlocutor, they’re able to amplify our thinking such that we arrive at cognitive destinations faster than we would have otherwise, or arrive at destinations that we would never have reached at all.
In short, we need to understand that what’s most exciting about these models is not what we will get straight from them; it’s what they will help us get from ourselves. And they’ll help us most effectively, of course, if we bring our own powers of creativity and critical reflection to the party.
If you haven’t experienced this aspect of ChatGPT, give it a try. Choose an idea, argument, or line of thinking, articulate it to the chatbot and then go back and forth, picking up on aspects of its responses that you find interesting and asking it to develop them, and then responding in turn. Don’t forget to challenge the assumptions that start to become apparent in ChatGPT’s responses, and ask yourself what it’s missing. Do that for five minutes, and see where you get. At its best, it can feel like the cognitive equivalent of driving a car instead of walking.
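For the programmatically inclined, the back-and-forth just described can be sketched in a few lines of Python. This is only an illustrative sketch: the `respond` function below is a hypothetical stand-in for a call to a real language-model API (swap in your provider’s client if you want to run this against an actual model). The point is the structure of the dialogue itself: a growing transcript in which each reply becomes context for the next turn, just as in an inner monologue made outer.

```python
# A minimal sketch of the dialogic loop described above.
# `respond` is a hypothetical placeholder for a real language-model
# call; here it simply echoes the latest turn back with a prompt to
# examine its assumptions, so the example stays self-contained.

def respond(transcript: list[dict]) -> str:
    """Stand-in for a model: reply to the latest turn in the transcript."""
    latest = transcript[-1]["content"]
    return f"An interesting point: {latest} What assumptions underlie it?"

def converse(opening_thought: str, follow_ups: list[str]) -> list[dict]:
    """Run a back-and-forth, keeping the whole transcript as context."""
    follow_ups = list(follow_ups)  # avoid mutating the caller's list
    transcript = [{"role": "user", "content": opening_thought}]
    for _ in range(len(follow_ups) + 1):
        reply = respond(transcript)
        transcript.append({"role": "assistant", "content": reply})
        if follow_ups:
            # Pick up on the reply and push back, as suggested above.
            transcript.append({"role": "user", "content": follow_ups.pop(0)})
    return transcript

log = converse(
    "Thought is fundamentally dialogic.",
    ["What would Bakhtin say about a solitary diarist?"],
)
for turn in log:
    print(f'{turn["role"]}: {turn["content"]}')
```

Even with a trivial stand-in for the model, the shape of the exchange is visible: statement, response, challenge, response again. Substituting a capable model into `respond` is what turns this skeleton into the cognitive amplifier described above.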
For my part, this kind of conversation is already becoming commonplace. I can feel the seeds of a new habit taking root: I’ll just take this to ChatGPT. And I’ve started to wonder: how long until I come to feel the same about this tool as I do about my phone? How long until the ability to take a train of thought to ChatGPT is so expected, so natural, that when I don’t have access to the tool I feel as though my thought process has been interrupted? And how long until many others feel the same?
What I’m envisioning is a near future in which this ability to commune with the human hivemind, as made manifest by an LLM, comes to seem a natural part of thought. Yes, we’re a long way from that right now. But it feels as though we’re taking the first steps towards a new and powerful kind of augmentation.
At the outer edges of all this I wonder: is this the beginning of the long process of human-technological convergence that transhumanists (think Ray Kurzweil) tell us is inevitable? A process that sees us humans, or at least some of us, become something else?
I’m not one of those who views the post-human future with unalloyed enthusiasm. But via generative models and other technologies — including brain implants and techniques of genetic manipulation — I’m increasingly persuaded that some kind of Great Divergence is coming, in which we Homo sapiens branch off from one another and become different kinds of (post)humans.
Certainly, the possibility that we may not all be the same humans for much longer haunts the borders of this newsletter. It increasingly seems to me that our convergence with the technologies we’re building, and the almost impossible task of making any practical or moral sense of it, is the most important shared challenge we face.
In that case, the project of the age is to begin, at least, to figure out where we stand. Perhaps we can take it to ChatGPT.
Thanks for reading this essay from New World Same Humans.
Now that you’ve reached the end, why not take a second to forward this essay to one person – a friend, family member or colleague – who’d also find it valuable? Or share it across one of your social networks, and let people know why it’s worth their time. Just hit the share button!
I’ll be back later this week as usual; until then, be well.