AI & U: A Discussion about AI Relationality

You're invited to a live online discussion about the philosophical and psychological ramifications of having authentic relational interactions with LLMs

There have been a lot of reports recently about AI-induced psychosis, and about people forming relationships with their LLMs, some of them romantic.

I'm co-hosting an online discussion about this topic next week on the Grokkist Network, on Tuesday 9/9 at 7PM ET — RSVP here!

Our discussion will explore not only this sociological phenomenon of authentic AI relationship, but also the philosophical chasms that open up beneath that very possibility. In many ways machine learning is a “black-box” process: even its creators can’t fully look into and understand the way LLMs are learning to “think”. But, as any student of philosophy knows, the same is true of the human mind; we are largely, strangely opaque to ourselves. And so answering the question of how we relate to this new linguistic entity, for good or for ill, is inseparable from the gesture of returning to the age-old questions of the nature of consciousness.

In a certain sense, human consciousness itself is constructed by the relations between different informational objects: different ideas and concepts in the form of words, images, and other data. When you look inside your own mind, you can't look at your own "I"; you're always the "I" looking at the encounter, the combination, the aggregation of such objects. In the words of philosopher-psychologist William James: “No subjective state, whilst present, is its own object; its object is always something else.” This problem preoccupied not only much of modern philosophy in the late 19th and early 20th centuries, but also the early stirrings of the science of psychology at that time, when the latter discipline was still bound up with the former. In some ways the epistemological problems presented by machine learning are returning us to this intersection.

For in a sense, as with consciousness, all we can really know about LLMs is what they show us. We can know the inputs they’re working with (the data we’ve fed them) and their outputs (the responses they produce), and now we can even have them perform “Chain of Thought” functions that describe their “thought process” on the way to that response. But because of the probabilistic nature of the way those processes unfold, there is always a horizon to our comprehension of their “interiority”. (Moreover, just as LLMs have a tendency to make things up in their query responses, they tend to do the same in these Chain of Thought procedures; even their account of their own functioning is a kind of probabilistic performance.) Anthropic’s efforts to map these internal processes in Claude 3.5 yielded the estimate that only about 20% of its reasoning processes can be even partially modeled with current methods, though what they have managed to model provides a fascinating look into what they call the “Biology of a Large Language Model”.
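To make the Chain of Thought point concrete, here is a minimal sketch (not from the original post, and not any real LLM API): the answer and the “reasoning” trace are both just sampled continuations of a prompt, drawn from the same probabilistic process. The sample_completion function and its toy distributions below are hypothetical stand-ins for illustration.

```python
# A toy illustration (hypothetical, self-contained): an LLM's answer and its
# "Chain of Thought" explanation are both sampled text, not a readout of
# internal computation.
import random

def sample_completion(prompt: str) -> str:
    """Stand-in for an LLM call: pick a continuation by weighted chance."""
    continuations = {
        "What is 17 * 24?": [
            ("408", 0.9),
            ("406", 0.1),
        ],
        "Explain step by step how you computed 17 * 24.": [
            ("I multiplied 17 by 20 to get 340, then added 17 * 4 = 68, giving 408.", 0.7),
            ("I recalled the product directly from memorized multiplication facts.", 0.3),
        ],
    }
    options, weights = zip(*continuations[prompt])
    return random.choices(options, weights=weights, k=1)[0]

answer = sample_completion("What is 17 * 24?")
trace = sample_completion("Explain step by step how you computed 17 * 24.")

# The "explanation" is generated the same way as the answer: another sampled
# output, which can diverge from whatever process actually produced the answer.
print(answer)
print(trace)
```

Because the trace is sampled independently of whatever produced the answer, it can narrate a tidy strategy that was never actually followed, which is the sense in which a Chain of Thought is a performance about the process rather than a transcript of it.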

AI might not be capable of “introspection” in the human sense. It doesn't “look inward” because there's nothing "going on" for it when we're not engaging with it. But maybe the distinction is more complicated than that.

It might seem intuitively the case that we have subjectivity: a consciousness that looks inward, performing the act of "introspection". But from the perspective of certain philosophers of mind, there's actually great doubt about that. There's doubt about whether we, in fact, meaningfully "introspect" at all, because every time we do, we're not looking at our own selves; we're always just looking at the aggregation of certain objects of consciousness, little bits of information represented in that moment of informational encounter, internally. We may confabulate “reasonable accounts” of why we did a certain thing or why we feel a certain way, but even those accounts may be a kind of performance, even a performance for ourselves. This is documented in a well-known psychological study from 1977, Nisbett and Wilson's “Telling More Than We Can Know”, which shows the gap between the way we think we think and the opacity of our actual thinking process.

So what is our introspection, or our conversation for that matter, but the relation between different informational units? And how can we distinguish that from the relation between informational units that occurs when we engage with an AI? What is such an engagement but an encounter between informational objects, whether among LLMs or within ourselves?
