Why can’t advanced robots have personality, and what are the limitations?

Robots are not capable of the directed understanding humans have, and it is not possible to program emotions and will into them. Nor is it possible with current technology to implement anything like the human soul and unconscious in a robot.

 

The movie “Bicentennial Man” tells the story of a domestic robot named Andrew who holds personal conversations with humans, feels emotions, and engages in creative activities. Later in the film, Andrew, who has taken on a human-like appearance, falls in love with the granddaughter of the woman he once loved. He repeatedly petitions the court to be recognized as a human being so that he can marry her, but is rejected; only when he chooses mortality and is finally allowed to die is he recognized as human.
Watching this movie raised an important question: can a robot built by humans ever become a person with feelings, thoughts, and will? No matter how revolutionary the technological advances, robots cannot become persons, because the obstacle is not something that faster processors or larger memory can overcome. Robots are built around computers, and the fundamental problem lies in the computer's core mechanism: the program.
A computer is essentially a digital device: its memory and storage, such as a hard disk, hold long sequences of binary data, 0s and 1s. The computer's brain, the CPU, stores and processes information by performing operations on that binary data. Computers sometimes appear to handle analog data, which is continuous, but they do so only by converting between digital and analog forms; just as the pixels on a monitor get smaller as the resolution increases until the image looks like a natural, continuous picture, discrete data can only approximate the continuous.
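To make this concrete, here is a minimal Python sketch (an illustration added for this point, not part of the original argument; the function name and parameter values are made up) that samples a continuous signal into discrete 8-bit values, in the same way that finer monitor resolution only approximates a continuous image.

```python
import math

# A computer can only store discrete values. This sketch samples a
# continuous (analog) signal -- here a sine wave -- into 8-bit integers,
# showing that finer resolution merely approximates the continuous curve.
def quantize(signal, n_samples=16, bits=8):
    levels = 2 ** bits - 1                      # 255 discrete levels for 8 bits
    samples = []
    for i in range(n_samples):
        t = i / n_samples
        value = signal(t)                       # continuous value in [-1, 1]
        scaled = (value + 1) / 2                # map to [0, 1]
        samples.append(round(scaled * levels))  # snap to the nearest level
    return samples

print(quantize(lambda t: math.sin(2 * math.pi * t)))
# Every number printed here is ultimately stored as a string of 0s and 1s;
# raising `bits` shrinks the gap between levels, just as smaller pixels
# make a monitor image look continuous.
```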
The programs that drive a computer take input data, perform operations according to predetermined code, and produce output data. In other words, they are not engaging in creative activity; they are carrying out methods and procedures fixed in advance. A program cannot reproduce the thinking functions of the human brain, such as creativity, nor can it understand, which is another key function of the human mind.
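As a small illustration of "predetermined methods and procedures", consider the hypothetical Python function below: it applies one fixed rule, and the same input always yields the same output, with no interpretation or choice involved.

```python
# A program only carries out the procedure written into it in advance.
# This toy rule converts Fahrenheit to Celsius; nothing about the result
# is decided at run time beyond what the code already dictates.
def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5 / 9   # the predetermined rule

print(fahrenheit_to_celsius(212))  # 100.0, every single time
```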
To support this argument, there is the Chinese Room argument proposed by John Searle. It goes like this: First, put a person who speaks only English (Searle imagines himself) in a locked room. In the room, place a tool that lets him communicate with the outside world, along with a complete rulebook matching Chinese questions to Chinese answers. A Chinese examiner outside the room writes questions in Chinese, and the person inside, following the rulebook, writes back answers in Chinese and passes them out to the examiner. To a Chinese speaker who does not know who is inside, it appears that the person in the room speaks Chinese. But in reality, he does not understand Chinese at all. In other words, producing the correct answers to the questions does not mean the answerer understands Chinese. This argument shows that running a program does not amount to human comprehension.
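A toy version of the rulebook makes the point visible. The Python sketch below (with made-up entries standing in for the rulebook) produces plausible-looking Chinese answers by pure symbol matching, while the program, like the person in the room, understands nothing.

```python
# Searle's rulebook reduced to a lookup table: input symbols are matched
# to output symbols without any grasp of what either side means.
RULEBOOK = {
    "你叫什么名字?": "我没有名字。",  # "What is your name?" -> "I have no name."
    "你会说中文吗?": "会。",          # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(question: str) -> str:
    # Pure symbol manipulation: a fluent-looking answer comes out,
    # but no understanding of Chinese goes in.
    return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你会说中文吗?"))
```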
Human comprehension involves a concept we might call orientation. When a person looks at a picture of an apple, they do not merely register that it is an apple; they also grasp why they are looking at it in this particular context. Just as looking at an apple in art class means something different from looking at an apple in math class, humans make holistic, integrated judgments about things. Some people look at an apple and think, “That looks delicious,” others respond, “That's the kind of fruit you should eat for breakfast!” and still others think of Apple's iPhone or Steve Jobs. A program cannot have that kind of orientation.
Not only do robots lack the directed understanding that humans have; it is also impossible to program emotions and will. Suppose we try to implement emotions in a robot by quantifying them. We could use a random function to make it feel good on some days and bad on others, but does that capture the complexity of human emotions and the variety of our states of mind? More pointedly, how would we program a heart in love, or a broken heart? Can the human mind really be treated as random? These are not things that can be captured through learning either. Furthermore, the human mind is an interrelated system of thought, emotion, and will: thought tends to govern will, and will tends to govern emotion. We can change our will by changing our thinking, and we can lift our emotions from depressed to positive through an act of will. Can a robot realize this interrelated flow? It cannot.
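The naive approach questioned above, quantifying mood with a random function, can be sketched in a few lines of Python (a hypothetical model added for illustration); its thinness is exactly the point, since nothing in it can be in love, heartbroken, or steered by thought and will.

```python
import random

# The "random function" idea reduced to code: mood becomes a draw from a list.
MOODS = ["good", "neutral", "bad"]

def todays_mood(seed=None):
    rng = random.Random(seed)     # a different draw each day, nothing more
    return rng.choice(MOODS)      # a label in disguise, not a feeling

print(todays_mood())
```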
There are new kinds of computers, such as the recently developed quantum computers and DNA computers, but they only compute faster; they cannot solve problems that traditional digital computers cannot solve in principle. There are also theories that use heuristics to model how humans arrive at good choices, but heuristics cannot account for the fact that humans make irrational and contradictory choices.
On the other hand, humans are not only persons; they also have souls. Our primal longing for God, our fear of death, and our orientation toward an eternal world show that we have an unconscious mind. Modern science studies the unconscious as well, but it is not clear what it is or where it comes from. Even as science advances, it is far from certain that it will resolve these questions about the spiritual world and its origin. If science has not solved the problems of the human soul and the unconscious, how could we expect to build them into human-made robots? And since the soul and the unconscious clearly shape our conscious world and our personality, this is not a trivial issue.
Classical physicalists have argued that the human soul does not exist and that the mind is merely a product of the brain. According to physicalists, the mind and mental states are physical, nothing more than physical states. The contradiction in this physicalist position, however, is that it denies free will. Physical things are governed by natural laws, so on this view our minds are governed by physical causes; physicalists ultimately end up supporting determinism, which denies human free will. What is free will? It generally refers to a state in which behavior is not externally constrained: we are able to make our own decisions rather than being controlled or bound by outside forces. One of the natural laws governing the universe is the second law of thermodynamics, the law of entropy, which says that energy changes in a closed system are irreversible and tend toward disorder. Yet in human activity we raise tall buildings against gravity and sail ships against the movement of the ocean. It is wildly improbable that such things could have come about through a random assemblage of stones acting under natural laws over a long period of time. Humans are free to do things that run against the direction of entropy.
We have seen three reasons why robots, as developed so far, cannot have personality: they cannot have the directed understanding that humans have, emotions and will cannot be programmed, and the human soul and unconscious cannot be implemented. Movies portray this as possible in the future, but the critical problems standing in the way of robots with personality remain unsolved.

 

About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I love Coca-Cola, coffee, reading, and traveling. I hope everyone who visits my blog finds happiness through my writing.
