Your brain is also “just a Chinese room”. It’s just physics, chemistry and biology. There is no magic inside your brain. If a “Chinese room” is fast enough and can fool everyone into “believing” that it’s fluent in Chinese, then the room speaks Chinese.
This fails to engage with the thought experiment. The question isn’t if “the room is fluent in Chinese.” It is whether the machine learning model is actually comparable to the person in the room, executing program instructions to turn input into output without ever understanding anything about the input or output.
The same is true for your brain. Show me the neurons that are fluent in Chinese. Of course the LLM is just executing code. And if we have AGI it will also just be “executing code”, but so does your brain. It’s not exactly code (though maybe AGI will run on analog computers, so not exactly code either), but the laws of physics dictate what your brain does. The laws of physics don’t understand Chinese, the atoms and molecules don’t understand Chinese. “Understanding Chinese” is an emergent property.
Think about it this way: assume every person you know (except you) is just some form of Chinese Room. First of all, you couldn’t prove that, and second, it wouldn’t matter at all.
We aren’t trying to establish that neurons are conscious. The thought experiment presupposes that there is a consciousness, something capable of understanding, in the room. But there is no understanding because of the circumstances of the room. This demonstrates that the appearance of understanding cannot confirm the presence of understanding. The thought experiment can’t be formulated without a prior concept of what it means for a human consciousness to understand something, so I’m not sure it makes sense to say a human mind “is a Chinese room.” Anyway, the fact that a human mind can understand anything is established by completely different lines of thought.
The problem here is that intelligence is a beetle