Discussion about this post

Zandi Eberstadt:

A joy to read (and the "Age of Philosophy" has a ring to it)! Your point about individuation is so interesting; I'm reminded of Hofstadter's discussion in I Am A Strange Loop about how having a "self" means being "that" person as opposed to "a" person.

Agree that not having an embodied form puts AI systems at a huge disadvantage w/ moral philosophy, and your point that they are *simulating* doing philosophy is persuasive (and I have argued myself that LLMs will not be capable of understanding things referentially until they are embodied).

That said, I do see an argument in favor of LLMs having some persistent spatio-temporal existence; their servers and hardware/software coupling anchor them (albeit in a distributed manner) in space. But more relevantly, I think that individuated AI systems can perhaps be "pointed to" (e.g., a specific model running on a specific configuration, w/ specific parameters). In that specific sense, LLMs are not unlike humans, whose synaptic weights are in constant flux.

Personally, I tend to think that in both cases—human and machine—some form of external rational agency is required to "tune the system" that produces fluent speech: in my view, an intelligent designer adjusts our synaptic weights, while human engineers adjust those of LLMs.

I may need to follow this rabbit hole and dig deeper into substance dualism. :)

Hollis Robbins (@Anecdotal):

Agree 100%. Now pro-philosophy thinkers need to help state legislators understand that cutting philosophy programs in public universities is misguided. Consultants like Huron and McKinsey use the CIP-SOP crosswalk to argue that philosophy majors don't add to the economy and are not essential to workforce development. https://nces.ed.gov/ipeds/cipcode/post3.aspx?y=56#:
