
Beastly Boy

(9,310 posts)
4. This thought experiment is pretty much self-serving, and its conclusion, although correct, is moot.
Fri May 26, 2023, 02:11 AM

It's a hypothetical curiosity that pretty much proves the obvious: the term "artificial intelligence," in the context of large language models, is a misnomer.

The thought experiment itself is contrived: it assumes that "you," a being in principle capable of understanding language in both form and meaning, inexplicably find yourself in a hermetically enclosed library with no access to an interpreter who could endow form with meaning. But a library is never a self-contained, hermetically enclosed environment. The sole purpose of a library is to preserve the form of a language for the benefit of the "you" who interprets it, regardless of the library's spatial confines. The library itself was never meant to interpret the formal manifestations of the language it contains, even though in most cases a real-life library holds tools that make it possible to decipher the meaning of the various recorded examples of language. But in both the real-life library and the made-up library of this thought experiment, it is the user, external to the system and all it contains, who is in charge of assigning meaning to the bits of language he or she encounters.

The same is true for language models like ChatGPT. A model has access to a vast library of formal records of language and its uses. It dispenses strings of characters only when prompted by a user, and only in ways suggested by the prompt. It is not meant to be an arbiter of meaning. The only "intelligence" it exhibits is producing several variations of output in response to the same prompt, which some may interpret as a sign of intelligence or even self-awareness.
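To make that last point concrete, here is a minimal sketch, in Python, of where the "several variations of output" come from. Nothing below is any real model's code; the prompt, vocabulary, and probabilities are invented for illustration. The only operation is weighted sampling over strings of characters, and at no step does meaning enter the procedure.

import random

# Hypothetical next-token distribution a model might have learned
# for the prompt "The library contains ..." (numbers are made up).
next_token_probs = {
    "books": 0.5,
    "records": 0.3,
    "shelves": 0.15,
    "meaning": 0.05,  # to the model, this word is just another string
}

def sample_next_token(probs):
    """Sample one token by weight; nothing here inspects what a token means."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt yields several variations of output because the
# procedure is stochastic, not because the system "decides" anything.
for _ in range(3):
    print("The library contains", sample_next_token(next_token_probs))

Run twice on the same prompt, this sketch will often print different continuations. The variation is a property of the sampling, not evidence that anything inside the loop understands the sentence it is completing.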

This doesn't mean that an LLM is incapable of carrying meaning. Oblivious to that fact, it inevitably does, but not for its own consumption. A formal response to a prompt in itself presumes the presence of someone familiar enough with the linguistic system to generate the prompt in the first place, and hence capable of interpreting the meaning contained in the response.
