Searle argued that AI only simulates human mental processes like understanding. Computer programs are not doing what people are doing when they appear to show intelligence.
His example: someone is isolated in a room with a Chinese-English dictionary and a grammar of Chinese. To the outside world, the room is a black box.
You slide Chinese text through a hole into the room, and the person inside, who doesn’t understand Chinese, uses the dictionary and grammar to produce an English equivalent, which they write on a piece of paper and slide out through another hole.
Searle says this is equivalent to what machine translation does.
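To see how mechanical the procedure is, here is a minimal sketch in Python (my illustration, not Searle’s; the phrasebook entries are invented). The program produces plausible-looking output by pure lookup, and nothing in it could be called understanding.

```python
# A toy "Chinese room": input symbols are mapped to output symbols by
# pure lookup, the way the person in the room uses the dictionary and
# grammar book. Nothing here understands Chinese or English.

# Hypothetical phrasebook, invented for illustration.
PHRASEBOOK = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def room(chinese_text: str) -> str:
    """Slide text in through one hole; an 'English equivalent' comes out the other."""
    return PHRASEBOOK.get(chinese_text, "???")

if __name__ == "__main__":
    print(room("你好"))  # -> hello
    print(room("谢谢"))  # -> thank you
```

A real machine translation system is vastly more elaborate, but on Searle’s view the difference is one of scale, not of kind: it is still symbol manipulation all the way down.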
From the outside, you believe the black box, or the person in the room, understands Chinese, but they don’t. Searle says the arrangement, and in particular your ignorance of what is happening inside the room, merely makes it appear that understanding is taking place. Appearances can be deceiving.
Once it is explained that the person in the room doesn’t speak Chinese, perhaps you accept that but say it doesn’t matter. What counts for you is verisimilitude: if a machine translation from Chinese is just as good as a translation by a person, then you can say the machine understands Chinese.
Searle says that’s a mistake, and that believing AI can develop consciousness and understand things rests on the same mistake. A simulation is not the real thing.
I agree that confusing a simulation with the real thing is the real mistake. Read my take on the AI community’s claim that AI apps have agency at AgainstAgency.
But I think the Chinese Room is a better analogy for the second-language speaker than for AI. The person in the Chinese Room is the non-native speaker.
To native speakers, non-native speakers appear more or less able to communicate in their second language. What native speakers don’t realize is how much their own understanding of the language differs from that of non-native speakers. They read deeper language competence into non-native speakers’ heads than actually exists. That competence is often only superficial, at a level equivalent to looking up a word in a dictionary and struggling with a grammar book, and unconnected to other intellectual skills, just like the person in the Chinese Room.
The native speakers’ idea of what is going on in non-native speakers’ heads doesn’t correspond to what the non-native speakers are actually thinking. It is not just the non-native speakers who are making mistakes about language; the native speakers are making mistakes about the intentions, motivations, and feelings of non-native speakers, too. As the Tony Jones Principle of Intercultural Communication Non-Alignment puts it, it’s impossible for people from different cultures to understand each other, because their intentions, motivations, and feelings are never aligned.
Read about my own AI skepticism at AI.
Back to LetUsNowPraiseFamousMen