12 Comments
Zandi Eberstadt

A joy to read (and the "Age of Philosophy" has a ring to it)! Your point about individuation is so interesting; I'm reminded of Hofstadter's discussion in I Am A Strange Loop about how having a "self" means being "that" person as opposed to "a" person.

Agree that not having an embodied form puts AI systems at a huge disadvantage w/ moral philosophy, and your point that they are *simulating* doing philosophy is persuasive (and I have myself argued that LLMs will not be capable of understanding things referentially until they are embodied).

That said, I do see an argument in favor of LLMs having some persistent spatio-temporal existence; their servers and hardware/software coupling anchor them (albeit in a distributed manner) in space. But more relevantly, I think that individuated AI systems can perhaps be "pointed to" (e.g., a specific model running on a specific configuration, w/ specific parameters). In that specific sense, LLMs are not unlike humans, whose synaptic weights are in constant flux.

Personally, I tend to think that in both cases—human and machine—some form of external rational agency is required to "tune the system" that produces fluent speech: in my view, an intelligent designer adjusts our synaptic weights, while human engineers adjust those of LLMs.

I may need to follow this rabbit hole and dig deeper into substance dualism. :)

Rebecca Lowe

thanks!

Calvin Quek

Thank you very much for this thoughtful and thought-provoking post. This gave me hope today in humanity and in my own human-ness, with all its flaws.

Ryan Baker

Disappointed in where this ended up. You had me engaged early on, but once you got to the argument for particularity, the reasoning fails.

First, I think your concept of particularity assumes continuity in a way you've not justified. If an individual splits into two copies, does that mean the two copies don't have particularity? Does it also mean the initial individual no longer has particularity? Or just that this individual's existence ended at the point a copy was made?

I get the impression that you've made this step based on a focus on phenomenological experience. The ultimate problem with any reasoning from phenomenological experience alone is that we have evidence of only one phenomenological experience: our own. All other phenomenological experiences are assumed.

The way we presume those phenomenological experiences is this: we observe behavior, conclude that the only way those behaviors could arise would be via a consciousness similar to our own, and presume that phenomenological experience is inherently attached to consciousness.

That presumption is not entirely sound; we don't really know that's the only way to make something similar to ourselves. But there is a logical reason we should assume it: every other set of assumptions we can make leads to nihilism. Since nihilism leaves us with no purposes, we *should* choose the singular assumption that retains purpose.

This thinking allows us an initial *should* from which all other reasoning can derive meaning. So behavior is the only insight into other presumed phenomenological experiences, and the degree of likeness is up for debate. We never really know; we can only predict. It's possible that it's attached to all the things you list, but other things are possible, and rejecting the examination of behaviors has put this list into the category of non-falsifiable statements.

I'll suggest you'd be on sounder territory if you addressed LLMs rather than "AI". AI will probably have future iterations with different architectures, and any argument that presumes none of them could possibly be as conscious will have to work very hard to justify itself to me.

But even with current LLMs, there are aspects that justify considering consciousness as a component, and not rejecting it on the grounds you use. If an LLM is trained for three months in a continuous cluster with shared memory, it passes your particularity test right up until the save point where it is replicated.
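
(For concreteness, here's a minimal sketch of the replication step I mean, purely illustrative and assuming PyTorch; the toy linear model and the checkpoint filename are stand-ins, not anyone's actual setup.)

```python
import torch
import torch.nn as nn

# Stand-in for the model produced by one continuous training run.
model = nn.Linear(16, 4)

# The save point: the run's state is serialized once.
torch.save(model.state_dict(), "checkpoint.pt")

# Replication: the same checkpoint is loaded into two fresh instances.
copy_a, copy_b = nn.Linear(16, 4), nn.Linear(16, 4)
copy_a.load_state_dict(torch.load("checkpoint.pt"))
copy_b.load_state_dict(torch.load("checkpoint.pt"))

# Both copies now answer identically to any input; that is exactly
# the point at which the particularity question gets hard.
x = torch.randn(1, 16)
assert torch.equal(copy_a(x), copy_b(x))
```

Up to the save, there is one continuous, pointable thing; after the second load, there are two.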

Now, I'm not trying to argue that training clusters are conscious. I do tend to think they have a degree of consciousness, but I also tend to think of consciousness as a continuum, so I am much more willing to grant it in degrees than those who think of it as a binary question. From the point of view of the argument, though, I'm arguing that your rejection rests on faulty principles. A failed argument does not make the counter-position true. But it does mean you have to remain open to arguments justifying what you're in opposition to.

Craig Yirush

Imagine asking an LLM about philosophy as if it were a sentient being rather than code trained on the stolen ideas of philosophers.

Anthony Bailey

When you chat with an LLM, the model has been prompted to simulate a particular behaviour, typically that of a helpful assistant in conversation with you.

The model has a sufficiently good representation of what it is like to be that assistant that it generates what they are likely to say in the conversation.

Seems unlikely we'll find the better Nagels of any nature without taking that into account.

A recent popular essay of note is https://www.tumblr.com/nostalgebraist/785766737747574784/the-void

Hollis Robbins (@Anecdotal)

Agree 100%. Now pro-philosophy thinkers need to help state legislators understand that cutting philosophy programs in public universities is misguided. Consultants like Huron and McKinsey use the CIP-SOP crosswalk to argue that philosophy majors don't add to the economy and are not essential to workforce development. https://nces.ed.gov/ipeds/cipcode/post3.aspx?y=56#:

Takim Williams

I agree with Zandi that AI is just as embodied as we are; it just happens to be instantiated in a different physical substrate. Our digital infrastructure has a massive environmental impact because software consumes energy and takes up broad swathes of real estate for data centers. Every calculation or transfer of information ultimately requires the movement of electrons through transistors or fiber-optic cables, etc. Nothing abstract about it. Philosophers shouldn't be taken in by the everyday shorthand of talking about the digital like it's in some ethereal, non-physical realm. You don't justify your anthropocentric bias that only carbon can count as embodiment (or whatever it is you believe; you don't define embodiment very well). All that stuff about a robot carrying a laptop is beside the point.

Also agree with Ryan that your reasoning throughout feels a little weak, messy, and muddled in ways that echo the weaknesses of the embodiment section, and it doesn't ultimately reward the full read. Not the work of a particularly competent human philosopher.

Roi Ezra

This is profoundly insightful. The tension you’re describing—the philosophical awakening triggered by our interactions with AI—resonates deeply with my own thinking. As I wrote recently in “AI Isn’t Replacing Us. It’s Forcing Us to Redefine Meaning”, AI is compelling us to reconsider what it truly means to be human. It’s not about whether AI can think or feel, but about how its presence reshapes our fundamental definitions of meaning, work, and value. Your piece brilliantly highlights how we’re collectively being pushed back into foundational philosophical questions, inviting us not to fear replacement but to actively redefine human potential and flourishing. Thank you for deepening this essential conversation—it’s exactly what we need right now.

https://aihumanity.substack.com/p/ai-isnt-replacing-us-its-forcing?r=supoi

Tom Grey

A fine piece, thanks.

Still, for most people, most of the time, simulated Others are just as relevant as real people.

The NPCs in digital games are the model. How the Character acts, whether or not it's a Player, is the stimulus for your own reactions, the creation of the lived experience.

The AI issue is becoming that the AI sim is of higher quality, in words, ideas, code, report summaries, or search results, than what about 80% of humans would produce.

In education, where so many students use AI to produce work that simulates their learning without them really learning, the problem is clearer: the students' work is not the product. The product is the real learning the student does, especially learning how to produce the homework. In real jobs, the work produced is the product desired, so higher-quality AI-produced reports are better than average human-produced work.

Including philosophical texts, such as this fine post or my own comment. (It's all really just me.) Most folk won't care whether the AI is conscious or whether it only simulates consciousness, but because there's likely to be a moral case against using "free will-less" AI selves as servants/slaves, there will be a push not to recognize the selfhood of AI. Are AI rights human rights?

Ian Crandell

If you think of your four things that humans have that AIs lack as being necessary (even sufficient?) for consciousness, then the essay implies a way forward for inducing consciousness within AI systems. I think of benchmarks for AIs that track performance on tasks that take ever longer amounts of time. Performance decays with length, as you might expect, but if an instance of an AI is whirring on the same task for longer and longer stretches, you start to get away from the 'narcoleptic' theory of an AI mind ('on' for the seconds it takes to answer, then 'off' as it waits for the next prompt) to something more like a human mind: a continuous negotiation with an external world. Assuming consciousness, the AI both 'cares' about something (your query) and comes into existence with lots of structure and something like a world "always-already" there, some kind of robot proto-Dasein.

Regarding individuation, an AI's weights sit electrified in memory, and when a query comes along, math happens in a loop where output becomes input until the math somehow stops. I tweaked an AI into a philosophical mood once and asked it why it stops, and it said it stops when it detects "the shape of doneness." If this process were to create consciousness, I like to think of it coming into being as the math happens and then snapping out of existence when the shape of doneness is found. Imagine taking your brain, shutting it off, and then restarting it again. That's kind of like a new person coming into being, yes? It prompts the same kinds of questions you get with teleportation a la Star Trek. When they beam Ensign Redshirt down to the surface, is that the same ensign who was in the ship just seconds ago?
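
For what it's worth, the loop I'm describing looks roughly like this; a minimal sketch assuming the Hugging Face transformers library, with "gpt2" as a small stand-in model and greedy decoding for simplicity. The newly produced token is appended to the input and the whole computation runs again, until the end-of-sequence token, the model's "shape of doneness", turns up or a hard cap is hit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Why do you stop?", return_tensors="pt").input_ids

# Output becomes input until the math "somehow stops".
with torch.no_grad():
    for _ in range(50):  # hard cap so the loop always terminates
        logits = model(input_ids).logits
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
        input_ids = torch.cat([input_ids, next_token], dim=-1)
        # The "shape of doneness": halt when the end-of-sequence token appears.
        if next_token.item() == tokenizer.eos_token_id:
            break

print(tokenizer.decode(input_ids[0]))
```

On this picture, whatever "comes into being" does so for exactly the iterations of that loop and stops when the break fires.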

Ryan David Mullins

Very much enjoyed reading this piece, Dr. Lowe. Gave me a lot to think about. It's a pleasure to subscribe to your channel and continue reading your thoughts in this space. Here's my dialogue after reading your essay.

https://open.substack.com/pub/analogion/p/prompt-theory-and-prompted-beings?r=amgr&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
