That the brain is computational? Or that computational things can come very close to mimicking what the brain does along some metrics? (Since I think the brain is computational, I don't have a dog in this fight. But even so...)
The substrate doesn't really matter. As Hammond says, "the discovery that giant inscrutable matrices can, under the right circumstances, do many things that otherwise require a biological brain is itself a striking empirical datum." If computational things can play the language game, then we should treat them as thinking things, regardless of what's sitting below the language coming out the other side.
Yes. Massively yes. Especially with only 1/30,000 of the neural network size...
Thought-provoking essay, though the end is a bit flip. Query whether you've considered the role of Peirce's semiotics in all this.
Maybe it would be fruitful to think about prompts as a theory of meaning? We see prompts all day and this results in various thoughts and reactions. Prompts also trigger various reactions from machines.
Much like viruses, prompts co-evolve with hosts. A prompt is meaningless without a host, but which host it gets paired with is a matter of circumstance.
Until recently, machine prompts (commands or search queries) were fairly distinct from human prompts, even if they sometimes used the same words. But we can learn to understand machine prompts, and some machines are getting better and better at understanding human prompts. "Prompt engineering" is fairly close to just saying what it is you want.
(This is fairly similar to the concept of memes, but perhaps "prompt" is a better word?)
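To make the machine-prompt/human-prompt convergence concrete, here is a minimal sketch (mine, not from the essay or this thread) that phrases the same request as a classic machine prompt and as a natural-language prompt. The OpenAI client and model name are assumptions for illustration only.

```python
# The same request as a classic machine prompt and a natural-language prompt.
# The OpenAI client and model name are assumed, not cited in this thread.
import subprocess

from openai import OpenAI

# A traditional machine prompt: rigid syntax, opaque to most humans.
machine_prompt = ["grep", "-ri", "meaning", "notes/"]
result = subprocess.run(machine_prompt, capture_output=True, text=True)
print(result.stdout)

# A human-style prompt: "prompt engineering" as just saying what you want.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, illustration only
    messages=[{"role": "user",
               "content": "Find every note of mine that mentions 'meaning'."}],
)
print(response.choices[0].message.content)
```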
I'd argue LLMs don't even mimic/replicate/use usages and compositional activities; currently, saying they do is like saying a shoe is a foot. They are part of our extended phenotype. I'd still take Douglas R. Hofstadter's views on this at this point. Also, I've recently reviewed Miles Hollingworth's recentish bio of Ludwig at https://whyweshould.substack.com/p/gap-hunting-duck-rabbits-with-miles and there's some compositional poetry at https://meika.loofs-samorzewski.com/compositionalpoetry.html
Goodness, signed in with the wrong account.
'According to Marcus, "Large Language Models don't directly implement compositionality — at their peril." But if Wittgenstein and the inferential pragmatists are right, neither does the human mind.'
Can you explain a little more why you think Wittgenstein would say this? Though he would deny understanding is *primarily* a matter of rule-following, I don't think he would deny that people are capable of learning rules from relatively few examples, like being told the rules for valid moves in chess and seeing a few examples of chess pieces moving according to those rules, without having to be trained on some huge set of examples like an LLM. Wittgenstein's analogy of language use to playing some sort of "game" (what he called 'language-games') would suggest this, especially a game that includes both more formal rules along with more implicit understanding of things like what good strategy looks like, what it means to play like a "good sport", etc.
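To put the chess point in code: a toy sketch (my illustration, not Wittgenstein's or the essay's) of how the entire rule for a valid knight move can be stated explicitly and checked against a handful of examples, with no large training set involved.

```python
# A toy illustration: the whole rule for a valid knight move fits in one
# line, and a few examples suffice to check it -- no training corpus needed.
def knight_move_is_valid(src: tuple[int, int], dst: tuple[int, int]) -> bool:
    """A knight moves two squares along one axis and one along the other."""
    dx, dy = abs(src[0] - dst[0]), abs(src[1] - dst[1])
    return {dx, dy} == {1, 2}

assert knight_move_is_valid((0, 0), (1, 2))       # a legal L-shaped move
assert knight_move_is_valid((4, 4), (6, 3))       # another legal move
assert not knight_move_is_valid((0, 0), (2, 2))   # diagonal: not a knight move
```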
The "proper name" is Joseph Robinette Biden Jr. ;)
Middle row, 5th column is a chair.
Middle row, 6th column is a table.
On the matter of ambiguity, I'd use communication theory over semantics and linguistics, and develop a model for LLMs that integrates the speech-act nature of their conversationality.
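A hedged sketch of what "integrating speech-act-ness" might look like as a data model: the taxonomy is Searle's five basic speech acts; the class names and structure are my own illustration, not anything from this thread.

```python
# One way to model a chat turn as a speech act rather than bare text.
# Taxonomy: Searle's five basic speech acts; the rest is illustrative.
from dataclasses import dataclass
from enum import Enum, auto

class SpeechAct(Enum):
    ASSERTIVE = auto()    # stating how things are
    DIRECTIVE = auto()    # trying to get the hearer to do something
    COMMISSIVE = auto()   # committing the speaker to a future action
    EXPRESSIVE = auto()   # expressing a psychological state
    DECLARATION = auto()  # changing the world by saying so

@dataclass
class ChatTurn:
    speaker: str    # e.g. "user" or "model"
    text: str
    act: SpeechAct  # the social action performed, not just the words

turn = ChatTurn("user", "Please summarize this paper.", SpeechAct.DIRECTIVE)
```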
On Sign:Signifier/Signified, you left out the Signifier, and Signified is not the thing. I think you might be conflating the denotation/reference wing of semiotics with the Saussurian Sign:Sr/Sd wing. The whole point of semiotics is to point out the arbitrary relation of the signifier to its signified, as presented in the sign, e.g. rose = love. So the idea that Signified is a thing is a misreading of semiotics. (You might be thinking of the symbol, not the sign; the symbol is inseparable from its representation, such as the cross).
Where you bring in "fuzzy" concepts, why not use Habermas and the pragmatics of speech, as well as symbolic interactionists like Goffman? Clearly meaning is resolved by communicative action; the social-action aspect is material. Here too I think we could think more interestingly about GPT and chat AIs by regarding interactions with chat agents as mediated social interaction, and not strictly speaking as an engagement with written text.
I think you'd enjoy the book I cite, Following the Rules. It's a kind of synthesis of Habermas, Brandom, and modern decision theory. Heath's dissertation was on communicative rationality.
Book page:
https://global.oup.com/academic/product/following-the-rules-9780195370294
Free PDF:
https://cdn.preterhuman.net/texts/thought_and_writing/philosophy/Following%20the%20rules%20-%20Heath.pdf
I'll give it a look. Thanks for the tip.
I've got a pretty firm perspective on this, however, which comes from my view of online communication as a form of online "talk," or technically mediated talk. So my use of Habermas, Giddens, Goffman, even Eric Berne, comes from treating online interaction as social action.
So I think that in addition to our interest in GPT et al. as language models, we should tease apart user interactions with them as modes of mediated online social action - in this case I think "online talk" fits well.
This raises the question: is interaction with chat AIs intersubjective? Insofar as we "speak" to GPT, and it engages in "conversation" with us, we're relying (as it is too, by means of pre-training and reinforcement learning) on normative and socially acceptable uses of speech to communicate. The question of the AI's authentic and genuine subjectivity is suspended, I think, for as long as we maintain the interaction.
In other words, chat AIs engage us in an "as if" form of talk. Perhaps it's Gibson's "consensual hallucination." Regardless, I think what's at play is more than language in the linguistic sense, and involves aspects of speech that necessarily bring our competencies in social action into the mix.