Can AI truly ‘think’? What is the difference between human and AI ‘thinking’?


 

This article discusses whether AI can think like humans. It uses the Turing Test and the Chinese Room thought experiment to illustrate the difference between mere information processing and genuine thinking, and asks whether AI can develop through the dialectical process of thesis, antithesis, and synthesis.

 

A conversation scene from the films featuring JARVIS, the AI assistant

“Hello, sir.”

This is how JARVIS, Tony Stark’s artificial intelligence assistant, greets his master in the globally popular superhero movies Iron Man and The Avengers. In the movies, Tony Stark quips to JARVIS, “A little ostentatious, don’t you think?” and JARVIS tells an enemy who tries to take control of him, “I believe your intentions to be hostile.” In both film series, the AI JARVIS is portrayed as if it can “think” and communicate with humans on an equal footing.

 

About human and AI “thinking”

Long ago, Descartes said, “I think, therefore I am.” To the question “Can humans think?” most people (really, all people) would answer, “Yes, humans can think,” and you would be hard-pressed to find anyone who disagrees. As a human, I can ask myself the question “Can I think?” only because I am already thinking in the first place.
So, can AI think? Even if we reserve judgment on the sci-fi movie’s JARVIS, since we do not know the level of technology behind it or the limits of its abilities, it remains an open question whether today’s AI is capable of “thinking” in the sense above. What characteristics distinguish “thinking” from other, superficially similar behaviors, and how do humans and AI differ in this respect?
To get at these questions, I would like to introduce two famous arguments about machines and thinking: the Turing Test and the Chinese Room. The Turing Test, proposed by Alan Turing in 1950, tests how closely a computer’s responses resemble human responses, based on the belief that “if a computer’s response to a given input is indistinguishable from a human response, then the computer is intelligent and thinking.” (Turing himself predicted that machines would eventually fool interrogators roughly 30% of the time after a few minutes of questioning.)
When I first heard about Turing’s position, I had one question: if a computer (or an AI) has a large, high-quality database and simply compares and matches information according to an algorithm, without understanding the input, before submitting an answer, should that count as “thinking”? The Chinese Room is a thought experiment designed by John Searle, who had the same doubt, to refute Turing’s claim.
Here is how the thought experiment works. A person who speaks no Chinese, but can recognize the shapes of Chinese characters well enough, is placed in a room with two slots for receiving questions and passing out responses, along with a pre-made list of Chinese questions and their answers. An observer outside the room, who does not know that the person inside cannot speak Chinese, watches the person respond to the Chinese questions.
To the observer outside, it appears as if the person in the room understands every Chinese question and responds appropriately. But since the person inside is merely matching questions against a prepared list, not understanding the Chinese and going through a thought process to arrive at answers, Searle concluded that the Turing Test cannot determine whether an AI is thinking with real intelligence.
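To make the mechanism concrete, here is a minimal Python sketch of the kind of rule-following Searle describes: a lookup table maps question shapes to canned answers, so the “room” responds fluently without understanding a word. The specific questions, answers, and the respond function are invented for illustration; they are not part of Searle’s original argument.

```python
# A toy "Chinese Room": the responder matches symbol patterns to canned
# replies without any understanding of what the symbols mean.
# The question/answer pairs below are invented for illustration.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字，我只是一个房间。",   # "What is your name?"
    "你会说中文吗？": "当然，我说得很流利。",           # "Can you speak Chinese?"
    "今天天气怎么样？": "我在房间里，看不到天气。",     # "How is the weather today?"
}

def respond(question: str) -> str:
    """Return the pre-written answer for a recognized question shape.

    No parsing, no semantics: just symbol matching, exactly like the
    person in the room comparing characters against a list.
    """
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    for q in RULE_BOOK:
        print(q, "->", respond(q))
```

To the outside observer the answers look fluent, yet nothing in the program understands Chinese; it only matches shapes, which is exactly the distinction Searle is pointing at.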

 

What it means to “think” – centered on “thesis-antithesis-synthesis”

Thinking, too, is a kind of information processing, in the sense that it opens and explores a database of one’s own experience and learning to produce a response to an input, just as a calculator or a search engine does, or as the person in the Chinese Room did. The question of AI and thinking therefore boils down to what distinguishes ‘mere information processing’ from ‘thinking’. What must be included, and what must be possible, for something to be called ‘thinking’?
Of course, it is not easy to state a rigorous criterion; many philosophers and engineers have failed to find one that satisfies everyone. In this article, however, I would like to propose one condition that ‘information processing’ must satisfy in order to transcend itself and enter the stage of ‘thinking’. It is the following short question.

 

Can it perform the process of thesis-antithesis-synthesis?

Thesis, antithesis, and synthesis are the three stages of the Hegelian dialectic. A thesis is simply a proposition (or assertion) that exists alongside an opposing antithesis. The antithesis is another proposition that opposes, or contradicts, the preceding thesis. When a thesis and an antithesis meet, precisely because they are two contradictory propositions, they undergo a ‘productive logical process’ in which they collide and connect over a much longer period than when two similar or unrelated propositions meet, producing a multitude of subordinate propositions and secondary knowledge, which are finally integrated into a larger proposition, the ‘synthesis’, that runs deeper than the original thesis and antithesis. The resulting synthesis is qualitatively more advanced than the original thesis and antithesis, and it applies to all the situations covered by its subordinate propositions, the thesis and the antithesis. It does not end there: the synthesis becomes a new thesis and again goes through the logical process of facing a contradictory antithesis. Because it aims at an absolute truth applicable to ever more situations, thesis-antithesis-synthesis is a logical process with a single direction, one that leads toward the absolute.
In other words, to propose thesis-antithesis-synthesis as a criterion for ‘thinking’ means the following. When a thinking being already holds a proposition (here, a piece of information) and an opposing proposition (another piece of information) comes in, it should carry out the process of thesis-antithesis-synthesis between the existing proposition and the new one, not simply store them in a database, list them, and compare them. Moreover, by incorporating the newly created ‘synthesis’ into its own database, it should be able to build a qualitatively developed database, and explore that database to produce results whenever any input arrives.
To me, “thinking” is not about keeping a huge database of every incoming proposition (every piece of information), then for each input searching through it, computing over it, comparing and contrasting candidates, choosing the better one, and submitting an output (this is what a PC does when it stores everything in memory and selects an output). Thinking happens when the elements of the database being explored collide, connect, and integrate, so that the quality of the database itself improves and efficient output follows. Furthermore, thinking can only be said to occur when the “evolution of the output” comes from the evolution of the database itself, not from improvements in how outputs are “selected” and “submitted” (for example, the sheer speed of a computer’s elementary operations).
Here is a schematic explanation of why I believe thesis-antithesis-synthesis can improve the quality of a database. Suppose we have three mutually “contradictory” pieces of information, A, B, and C, as candidate outputs for an input P. If a computation on some input Q similar to P selects A from among A and B, there is no guarantee that A remains the appropriate output once C is added, or when an input is similar to P and Q but not identical. The computation must therefore be redone from scratch for every similar input, not just for P and Q. If, however, the process of thesis-antithesis-synthesis unifies A and B into D, and then D and C into E, then E becomes the appropriate output for P, for Q, and for similar inputs alike. In this way the quality of the database itself can be improved through thesis-antithesis-synthesis.
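To make this schematic concrete, here is a minimal Python sketch using the placeholders from the paragraph above. The Proposition class, the coverage sets, and the synthesize rule are invented stand-ins for the dialectical process, not a real learning algorithm; the point is only that merging contradictory candidates into one more general entry lets a single entry answer P, Q, and similar inputs, instead of re-running the comparison for each one.

```python
# Toy contrast between a flat database that re-computes a choice among
# contradictory candidates for every input, and a "synthesizing" database
# that merges them into one more general entry.  The merge rule is a
# stand-in for the dialectical process described above.

from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    name: str
    covers: frozenset          # the inputs this proposition answers well

# Three mutually contradictory candidate outputs, as in the example above.
A = Proposition("A", frozenset({"P"}))
B = Proposition("B", frozenset({"Q"}))
C = Proposition("C", frozenset({"P'"}))   # another input similar to P

def flat_answer(db, query):
    """Mere information processing: scan everything and pick a candidate
    whose coverage happens to contain the query; redone for every input."""
    return next((p.name for p in db if query in p.covers), None)

def synthesize(p1, p2):
    """Unify two contradictory propositions into a more general one
    (A and B become D; D and C become E)."""
    return Proposition(p1.name + p2.name, p1.covers | p2.covers)

flat_db = [A, B, C]
D = synthesize(A, B)
E = synthesize(D, C)
synthesized_db = [E]           # smaller, but qualitatively more general

for q in ("P", "Q", "P'"):
    print(q, "flat:", flat_answer(flat_db, q),
          "| synthesized:", flat_answer(synthesized_db, q))
```

In the flat database each input is answered by a different entry, and any new input forces the whole comparison to run again; in the synthesized database the single entry E covers P, Q, and P' at once, which is the sense in which the database itself, not just the selection step, has improved.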

 

Does AI ‘think’?

What happens if we connect my proposed criterion, the ability to perform thesis-antithesis-synthesis, to the question of whether AI thinks? Taking the current generation of AI as examples, such as Siri on the iPhone and AlphaGo, which beat Lee Sedol, it is clear that Siri, which answers a limited set of questions in fixed patterns, cannot perform thesis-antithesis-synthesis.
AlphaGo, which defied expectations in March 2016 by defeating Lee Sedol, one of the world’s top players, at Go, a game with an almost unimaginable number of possible positions, is often cited as the epitome of “deep learning” technology. However, the core of deep learning is not “advancement” through the logical process of thesis-antithesis-synthesis, but rather the “classification” of vast amounts of data. AlphaGo learned by clustering data on the strength of enormous computational power, making predictions through classification, and searching enormous numbers of candidate moves on the 19×19 board in each position to find the optimal output and defeat Lee Sedol; but it did not, in itself, progress toward anything absolute.
In other words, I do not think it is fair to say that AI is thinking at this point. For now, AI merely lists, compares, and contrasts data on the strength of its superior computational power to produce human-like answers that imitate a human. For an AI that does not develop through the logical process of thesis-antithesis-synthesis, calling what it does conscious thinking is unreasonable.

 

Conclusion

Science and technology are advancing even as you read this article. Thanks to this, the outputs that AI produces for its inputs are becoming more and more human-like. Indeed, one day we may be able to discuss Goethe and Nietzsche with an AI; but even if an AI can talk about Goethe and Nietzsche, for it to be “thinking” it must be capable of self-improvement (or, more narrowly, the improvement of its database) through the process of thesis-antithesis-synthesis.

 

About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.
