It's a hypothetical curiosity that pretty much proves the obvious: the term "artificial intelligence," in the context of large language models, is a misnomer.
The thought experiment itself is contrived: it assumes that "you", a being capable in principle of understanding language in both form and meaning, inexplicably find yourself in a hermetically sealed library with no access to an interpreter who might endow form with meaning. But a library is never a self-contained, hermetically sealed environment. Its sole purpose is to preserve the form of a language for the benefit of a "you" who interprets it, regardless of its spatial confines. The library itself was never meant to interpret the formal manifestations of the language it contains, even though most real-life libraries do hold tools that make it possible to decipher the meaning of the various recorded examples of language. But in both the real-life library and the made-up library of this thought experiment, it is the user, external to the system and everything it contains, who is in charge of assigning meaning to the bits of language he or she encounters.
The same is true for language models like ChatGPT. Such a model has access to a vast library of formal records of language and its uses. It dispenses certain strings of characters only when prompted by a user, and only in ways suggested by the prompt. It is not meant to be an arbiter of meaning. The only "intelligence" it possesses is the ability to produce several variations of output in response to the same prompt, which some may interpret as a sign of intelligence or even self-awareness.
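That variation, for what it's worth, is a purely mechanical affair. A minimal sketch of how it arises, using temperature sampling over an invented next-token distribution (the tokens and their weights here are made up for illustration; a real LLM derives them from its training data):

```python
import math
import random

# Invented next-token weights for a fixed prompt -- purely illustrative.
logits = {"library": 2.0, "book": 1.5, "room": 1.0, "shelf": 0.5}

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick one token by sampling from a softmax over scaled logits."""
    scaled = {tok: value / temperature for tok, value in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding

# The "same prompt" asked twenty times yields a mix of different tokens,
# with no interpretation happening anywhere in the process.
rng = random.Random(0)
samples = [sample_next_token(logits, temperature=1.0, rng=rng) for _ in range(20)]
```

The spread of `samples` depends only on the weights and the random draw; nothing in the loop consults, or could consult, what any of the tokens mean.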
This doesn't mean that an LLM is incapable of carrying meaning. Oblivious to the fact, it inevitably does. But not for its own consumption. A formal response to a prompt presumes the presence of someone familiar enough with the linguistic system to have generated the prompt in the first place, and hence capable of interpreting the meaning contained in the response.