13 Comments

That the brain is computational? Or that computational things can come very close to mimicking what the brain does along some metrics? (Since I think the brain is computational, I don't have a dog in this fight. But even so...)

Thought-provoking essay, though the end is a bit flip. Query whether you've considered the role of Peirce's semiotics in all this.

Maybe it would be fruitful to think about prompts as a theory of meaning? We see prompts all day and this results in various thoughts and reactions. Prompts also trigger various reactions from machines.

Much like viruses, prompts co-evolve with hosts. A prompt is meaningless without a host, but which host it gets paired with is a matter of circumstance.

Until recently, machine prompts (commands or search queries) were fairly distinct from human prompts, even if they sometimes used the same words. But we can learn to understand machine prompts, and some machines are getting better at understanding human prompts. "Prompt engineering" comes fairly close to just saying what it is you want.

(This is fairly similar to the concept of memes, but perhaps "prompt" is a better word?)

I'd argue LLMs don't even mimic/replicate/use usages and compositional activities; currently, saying they do is like saying a shoe is a foot. They are part of our extended phenotype. I'd still take Douglas R. Hofstadter's views on this at this point. Also, I've recently reviewed Miles Hollingsworth's recentish bio of Ludwig at https://whyweshould.substack.com/p/gap-hunting-duck-rabbits-with-miles and posted some compositional poetry at https://meika.loofs-samorzewski.com/compositionalpoetry.html

The "proper name" is Joseph Robinette Biden Jr. ;)

Middle row, 5th column is a chair. Middle row, 6th column is a table.

On the matter of ambiguity, I'd use communication theory over semantics and linguistics, and develop a model for LLMs that integrates the speech-act-ness of its conversationality.

On Sign:Signifier/Signified, you left out the Signifier, and the Signified is not the thing. I think you might be conflating the denotation/reference wing of semiotics with the Saussurean Sign:Sr/Sd wing. The whole point of semiotics is to point out the arbitrary relation of the signifier to its signified, as presented in the sign, e.g. rose = love. So the idea that the Signified is a thing is a misreading of semiotics. (You might be thinking of the symbol, not the sign; the symbol is inseparable from its representation, such as the cross.)

Where you bring in "fuzzy" concepts, why not use Habermas and the pragmatics of speech, as well as symbolic interactionists like Goffman? Clearly meaning is resolved by communicative actions; the social-action aspect is material. Here too, we could reason more interestingly about GPT and chat AIs by regarding interactions with chat agents as mediated social interaction, and not strictly speaking as an engagement with written text.

If you want to be a philosopher, at least get on the LibDem network: chin up, shoulders up, tailbone up, sleep on the chin/ribcage, not on the back.