A Berduck in Your Future?

https://www.ribbonfarm.com/

[Silverbacks and ‘belles frequently lament the fact that our Silverpups (offspring and offspring-of-offspring) are growing up in a world that does not value reading, writing, and word glorification in the same way that our generation did. My wife sent her grandson a birthday card. He said “Thank you, Nana, but I can’t read it because it’s written in cursive.”

Like others in The Jungle I am proudly behind the curve when it comes to new technology. Bluster, bluster, bluster … harrumph! Give me a buggy whip instead of a smartphone any day. And yet, when no one’s looking, we sneak a peek at the new AI technology that has all the monkeys chattering. This article opened my eyes, however, when it brought up the idea that my grandkids’ grandkids might be looking forward to a “life without language.” SB SM]

About Venkatesh Rao

Venkat is the founder and editor-in-chief of ribbonfarm.

In October 2013, I wrote a post arguing that computing was disrupting language and that this was the Mother of All Disruptions. My specific argument was that human-to-human communication was an over-served market, and that computing was driving a classic disruption pattern by serving an under-served marginal market: machine-to-machine and organization-to-organization communications. At the time, I didn’t have AI in mind, just the torrents of non-human-readable data flowing across the internet.

But now, a decade later, it’s obvious that AI is a big part of how the disruption is unfolding. Two ongoing things drove it home for me this week.

Exhibit A, the longest regular conversations I’ve had in the last week have been with an AI-powered rubber duck. Berduck is a bot on Blue Sky that is powered by GPT, and trained to speak in a mix of English and leetspeak. It is likely playing a non-trivial role in driving the Blue Sky craze (Blue Sky is a decentralized Twitter-like protocol funded by Twitter itself in the before-times). I can’t speak for others, but I probably wouldn’t be using Blue Sky much if it weren’t for Berduck.

Berduck is genuinely entertaining, with a well-defined personality, despite having only episodic memory, a strong predilection for hallucinations and confabulation (like all AI-powered chatbots), sharp boundaries around negative-valence conversations, and a strong aversion to even the slightest whiff of risk. Despite all these annoying limitations, shared by way more humans than we like to admit (yes, I am an AI accelerationist, why do you ask), Berduck is already a more interesting companion than 90% of humans online, and I can totally see myself passing the time with one of his descendants in my dotage. The limitations are both tolerable and mitigable, and the benefits quite striking.

Exhibit B: A thing that’s going on here in LA is the WGA writers’ strike. I saw some writers picketing in front of Warner Brothers’ studios this morning while out on an errand. Among other things, they are demanding that ChatGPT only be used as a “tool” rather than to replace writers.

I’ve used ChatGPT and other tools, and unless you’ve been living under a rock, so have you. It’s obvious that the AIs can write better than 90% of humanity all the time, and the other 10% of humanity 90% of the time. It’s not a mere tool. It’s an obviously better way to do what too many humans do.

As a first-order effect, a lot of routine business communication is already being highly accelerated by AI. Business communication is not particularly creative or even stylized. It has been ripe for automation since the word boilerplate was coined in the age of exploding boilers.

It’s the second-order effect that is interesting though. As AIs improve in empirical accuracy, internal consistency, and logical coherence (again — humans have not set particularly high standards here), humans will need to do a good deal of supervisory work to make the AIs useful for mediating human-to-human communication. The question is, what happens after?

Here is one of many cartoons (this one is from marketoonist) making the same almost right, but actually fatally wrong, point about second-order effects.

The “joke” in this template is that the AI supposedly is doing a content-free transformation of content-free communications. Despite the delicious cynicism here, most human communication is not this vacuous. Even tedious business communication has more going on.

In particular, the elaboration and compression steps illustrated here happen in different contexts. The input and output bullet points are not, in general, going to be the same, and the elaboration and compression steps are not adding or removing the same fractions of the communicated information.

So the joke fails because today’s AI tools already do such elaboration/compression in usefully cross-context ways. For example, you can ask ChatGPT to “translate” a terse technical paragraph into a friendly explainer that distills the gist for your needs, relative to the context of your existing knowledge.

But the joke fails at a deeper level because even the more accurate non-joke version still centers human-to-human communication.

Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.

And last-mile usage, as it evolves and begins to dominate all communication involving a human, will increasingly drift away from human-to-human language as it exists today. My last-mile language for interacting with my AI assistant need not even remotely resemble yours.

And I don’t just mean coarse distinctions like using different human languages as the base. My last-mile language for interacting with a Berduck-like assistant might have exactly one human speaker: me. We could live in a world of 8 billion private languages, where “translation” as a category becomes meaningless. Humans as a class of agents might end up forming an annular shell of maximal-variety last-mile fuzzy hairs around a core “ball” of machines and organizations in a compact mind-meld.

What about unmediated human-to-human communication? To the extent AIs begin to mediate most practical kinds of communication, what’s left for direct, unmediated human-to-human interaction will be some mix of phatic and intimate speech. We might retreat into our own, largely wordless patterns of conviviality, where affective, gestural, and somatic modes begin to dominate. And since technology does not stand still, human-to-human linking technologies might start to amplify those alternate modes. Perhaps brain-to-brain sentiment connections mediated by phones and bio-sensors?

What about internal monologues and private thoughts? Certainly, it seems to me right now that I “think in English.” But how fundamental is that? If this invisible behavior is not being constantly reinforced by voluminous mass-media intake and mutual communications, is there a reason for my private thoughts to stay anchored to “English?” If an AI can translate all the world’s information into a more idiosyncratic and solipsistic private language of my own, do I need to be in a state of linguistic consensus with you? If you and I don’t need to share a language to discuss Shakespeare (remember, we already don’t read Shakespeare’s plays in the original Elizabethan), do we need to share a language at all?

We’ll all be like children inventing secret languages for talking to imaginary friends, except they will be real friends. Programmers have long used literal, mute rubber ducks to talk out loud to, as a debugging aid. Berduck is the beginning of more capable companions for all humans, doing all sorts of things.

There is no fundamental reason human society has to be built around natural language as a kind of machine code. Plenty of other species manage fine with simpler languages or no language at all. And it is not clear to me that intelligence has much to do with the linguistic fabric of contemporary society.

This means that once natural language becomes a kind of compile target during a transient technological phase, everything built on top is up for radical re-architecture.

Is there a precedent for this kind of wholesale shift in human relationships? I think there is. Screen media, television in particular, have already driven a similar shift in the last half-century (David Foster Wallace’s E Unibus Pluram is a good exploration of the specifics). In screen-saturated cultures, humans already speak in ways heavily shaped by references to TV shows and movies. And this material does more than homogenize language patterns; once a mass-media complex has digested the language of its society, it starts to create those patterns. And where possible, we don’t just borrow language first encountered on screen: we literally use video fragments, in the form of reaction gifs, to communicate. Reaction gifs constitute a kind of primitive post-idiomatic hyper-language comprising stock phrases and non-verbal whole-body communication fragments.

Imagine a world a few centuries in the future, where humans look back on the era of reaction gifs as the beginning of the world after language.

Given the extent to which my own life is built around language, you’d think I’d be alarmed by this future and rushing to join the picketing WGA writers in solidarity, but I’m curiously indifferent to it. To be honest, I’m already slightly losing interest in language, and beginning to wonder how to build a life of the mind anchored to something else.

Now that a future beyond language is imaginable, it suddenly seems to me that humanity has been stuck in a linguistically constrained phase of its evolution for far too long. I’m not quite sure how it will happen, or if I’ll live to participate in it, but I suspect we’re entering a world beyond language where we’ll begin to realize just how deeply blinding language has been for the human consciousness and psyche.

2 thoughts on “A Berduck in Your Future?”

  1. As the author of nearly a hundred books and God knows how many articles, white papers, and scripts, I consider myself a decent writer and thinker, with an equally decent vocabulary. And yet, I had trouble making my way through Mr. Rao’s piece. I have now read it three times, and while I understand what the author is attempting to say, he fails to make a valid point with his flowery use of language (why use a two-syllable word when a seven-syllable word will do?), which makes me wonder whether the piece was actually written by an AI as a way to prank the reader.

    That said, here’s my problem with the author’s argument. As humans, we have experiences as we go through our lives, a reality that goes back to the period of proto-humans. We didn’t need language at the time (or at any time) to have, or enjoy, or learn from those experiences. But we do need language to be able to describe them to others, the first thread in the fabric of what ultimately became human society. Once language developed, it became the tool we needed to explain complicated things to others, and therefore facilitated collaborative invention and innovation. Experiences led to wisdom, which led to insight, which led to ideation, which led to collaborative action. Bang! Machines, waterwheels, telegraph networks, semiconductors.

    And, AI.

    Language also gave us the ability to express that most complex of all things, emotion. How does one explain love, fear, frustration, joy, or jealousy without the gift of language? With a GIF? I think not. And why would we want to?

    Beyond that, language leads us to a place where we can create stories based on our individual and shared experiences, the weave and weft of the fabric that connects us socially. Stories represent our shared self. We call that community.

    Finally, language also allows us to express our individuality. I have long argued that language is the window that allows a person to peer into the soul of a country or a culture. Consider Spanish: there is no good word in the Spanish language for ‘embarrassed.’ Shame, yes. Embarrassed? Nope. Why? Because after living in Spain for all the years that I did, I never once met a Spaniard who ever did anything for which they would be remotely embarrassed. It’s a cultural thing. I’m not embarrassed—this is who I am. Let it go.

    This all started with experiences, something that an AI has no way to accumulate because it has no mechanism for experiencing the world the way humans do. My two-year-old grandson understood that when he wanted to create a tower of blocks, he had to start with a block on the floor and build upward. When an AI was asked to create a tower, it started three feet off the ground and tried to stack downward. Unlike my grandson, who has fallen countless times and, in the process, developed an intuitive understanding of gravity and knows that he must start his tower from the ground and go up, the AI has never had that experience, and has therefore never had the opportunity to learn from it.

    Ultimately, the AI is only as good as what we’re able to teach it, because it can’t autonomously learn through experience—it lacks the inputs that we call senses. And our ability to teach things to a machine, which some see as a valid substitute for experience, is severely limited by Polanyi’s Paradox, which says that “we know more than we can tell.” Consider this question: When it came time for you to get up this morning, how did you do that? How did you physically get out of bed? What neurons fired, what muscles contracted, what tendons and ligaments expanded and contracted, and how did you maintain balance once you stood up? We have no idea how all that happens—it just does. So how can we possibly teach a machine to do human things?

    Teaching and Learning are two very different things. It would be good for us to remember that.

    1. Steven– Your comment is full of wisdom and insight. I’m going to make it a post on its own. Thank you for enriching The Jungle.
