Consciousness, Thinking Computers, and Chinese Rooms: A Praxis Philosophy Conversation

Is it possible to create a thinking computer?

This is a hot question these days. Everyone from Elon Musk to your friend who just watched Terminator for the first time is gearing up for a world of super-intelligent, self-aware, thinking computers. And by gearing up, I mean freaking out.

Of course, as with everything in philosophy, this question is nothing new. And its implications reach well beyond artificial intelligence: if a computer could be capable of thought, it would transform how we think about what it means to be a conscious, thinking being.

In 1980, philosopher of mind John Searle formulated a thought experiment that he argued disproves the possibility that computers (even those capable of passing the famous Turing test) could *truly* think in the way humans think. The thought experiment, called the Chinese Room argument, goes something like this:

A man who speaks no Chinese is locked in a room with a set of English instructions for manipulating Chinese characters. He is passed a series of questions written in Chinese. By following the rule set, he is able to assemble answers in Chinese from the characters he was provided with.

We cannot say that the man is really *understanding* or thinking in Chinese. He is merely following instructions and *simulating* understanding of Chinese, enough to fool the people outside his “Chinese room.”

Similarly, a computer could conceivably produce answers which pass the Turing test, fooling us into thinking that it is a thinking computer. In reality, though, the computer is merely following programming and *simulating* understanding in the same way the man in the Chinese room simulated a knowledge of the Chinese language.
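To make the analogy concrete, here is a minimal sketch of the room as a program. The "rulebook" below is a hypothetical lookup table I've invented for illustration; the point is that the program maps input symbols to output symbols purely by rule-following, with no representation of meaning anywhere in the process.

```python
# A toy "Chinese room": answers come from mechanical rule lookup,
# never from understanding. The entries are invented placeholders.
RULEBOOK = {
    "你好吗？": "我很好。",      # "How are you?" -> "I am fine."
    "你会思考吗？": "当然会。",  # "Can you think?" -> "Of course."
}

def chinese_room(question: str) -> str:
    # Follow the instructions mechanically; no meaning is ever consulted.
    return RULEBOOK.get(question, "请再说一遍。")  # default: "Please say that again."

print(chinese_room("你好吗？"))  # -> 我很好。
```

From the outside, the answers look fluent; on the inside, it is pattern matching all the way down. Searle's claim is that even a vastly more sophisticated rulebook would differ from this one only in degree, not in kind.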

The Praxis philosophy night crew is a bunch of nerds (myself included). It’s natural that we picked a topic like the Chinese Room argument, and it’s natural that our conversation made so many sci-fi references and took deep dives driven by our own curiosity about AI.

Check out our full discussion of this thought experiment (hang around toward the end for some really cool rabbit trails on intelligence):

James Walpole

