Can we create an AI that thinks for itself, and what are the ethical issues?


As the development of human-like artificial intelligence edges toward reality, its ethical questions and regulation need to be discussed. Films and novels are already actively exploring the selfhood and ethical standing of AI.


Can humans really create a person? This question has been asked throughout history. Descartes, the seventeenth-century philosopher, famously declared, “I think, therefore I am,” which turned modern philosophy toward the search for human identity rather than external truths. That inquiry naturally led to the question of whether humans are unique, and the search for an answer eventually comes down to whether it is possible to create something identical or similar to a human being. These efforts, coupled with modern scientific advances, have produced cloning and artificial intelligence. While the ethical problems of human cloning have been debated for a long time, AI has been treated as something far removed from actual humans, so it has received little ethical discussion. Admittedly, the technology has not yet advanced to the point where such a discussion is urgent. Still, some people are certainly calling for ethical standards for AI to be clarified, especially in media such as movies and novels. In this article, we’ll look at examples from such media and discuss why these issues need to be addressed sooner rather than later.
But first, we need to ask what the crucial difference between AI and humans actually is. There have been ongoing efforts to distinguish the two, and we can use this prior work to consider whether ethical questions can genuinely apply to AI. The most famous and oldest criterion is the “Turing test” proposed by Alan Turing. The test is very simple: put a computer and a human in separate rooms and have them talk to a panel of judges over text chat. The judges may ask any number of questions and must decide, from the answers alone, whether they are talking to a human or a computer. Strictly speaking, Turing proposed the test not as a way to distinguish AI from humans but as a target for building more convincing AI. Still, the fact that no AI has convincingly passed the test to date suggests that some part of being human remains beyond the reach of even the best science and technology.
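To make the setup concrete, here is a minimal sketch in Python of that blind-chat protocol. Everything in it is illustrative: the bot_reply stub just returns canned lines, where a real test would connect a full conversational program.

```python
import random

def bot_reply(question: str) -> str:
    # Placeholder respondent; a real Turing test would plug in an actual chatbot.
    return random.choice(["Interesting question.", "Why do you ask?", "I suppose so."])

def run_session(questions):
    # The judge never learns in advance whether the hidden respondent is the bot.
    respondent_is_bot = random.random() < 0.5
    for q in questions:
        answer = bot_reply(q) if respondent_is_bot else input(f"(hidden human) {q} > ")
        print(f"Judge: {q}")
        print(f"Respondent: {answer}")
    guess = input("Judge, was that a computer? (y/n) > ").strip().lower() == "y"
    print("The judge was right." if guess == respondent_is_bot else "The judge was fooled.")

if __name__ == "__main__":
    run_session(["What did you dream about last night?", "Why is a joke funny?"])
```

If the program fools the judges about as often as a human does, it passes; that is the entire criterion.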
Of course, the Turing test is a fairly old idea, and there have been many objections to it. The most famous is John Searle’s “Chinese Room” argument. Put a person who knows no Chinese in a room and hand them questions written in Chinese. By following a rulebook that matches the shapes of the characters, the person can produce correct answers in Chinese, yet this does not mean the person understands Chinese. In the same way, Searle argued, even if an AI answers every question in the Turing test, it is hard to say it is close to being human. The Chinese Room was created to refute the Turing test, but it generated a rich discussion that ultimately helped make the test more robust. There is also a test that takes the Turing test as its foundation and turns it around: CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). The key idea is that humans can easily read letters even in distorted form, whereas computers struggle to recognize anything beyond clean glyphs. There are systems that can recognize handwriting by learning common letter patterns, but heavily warped and deformed text remains difficult for them. This is why so many websites use distorted verification text to block automated sign-ups: warping the generated characters is effectively an irreversible transformation, which makes them hard for a program to decode. More recent variants use images as well as text, and the higher-level tests are visual and auditory, making them very difficult for a typical algorithm to pass.
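As a rough illustration of that distortion idea, the sketch below generates a toy CAPTCHA in Python: it renders random letters, slides each pixel row sideways along a sine wave, and sprinkles noise. It assumes the Pillow imaging library is installed; the warp and noise amounts are arbitrary choices for illustration, not any real CAPTCHA scheme.

```python
import math
import random
import string

from PIL import Image, ImageDraw, ImageFont  # requires the Pillow package

def make_captcha(length=5, size=(200, 70)):
    text = "".join(random.choices(string.ascii_uppercase, k=length))
    img = Image.new("L", size, color=255)  # white grayscale canvas
    ImageDraw.Draw(img).text((20, 25), text, fill=0, font=ImageFont.load_default())
    # Warp: shift each pixel row sideways by a sine offset. Cheap to apply,
    # but it breaks the clean glyph shapes a naive recognizer depends on.
    warped = Image.new("L", size, color=255)
    for y in range(size[1]):
        dx = int(8 * math.sin(y / 9.0))
        warped.paste(img.crop((0, y, size[0], y + 1)), (dx, y))
    # Noise dots further obscure letter outlines without hurting human reading.
    pixels = warped.load()
    for _ in range(300):
        pixels[random.randrange(size[0]), random.randrange(size[1])] = random.randrange(256)
    return text, warped

if __name__ == "__main__":
    answer, image = make_captcha()
    image.save("captcha.png")
    print("expected answer:", answer)
```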
The first program to seriously attempt the Turing test was ELIZA, developed at MIT in 1966, which under examination turned out to be a simple pattern-matching algorithm. More recently, a Russian-built chatbot named Eugene Goostman was claimed to have passed the test, but serious problems remained. For example, Eugene claimed to have been born in Ukraine, yet when asked if he had ever been there he replied that he had not, and when faced with questions he could not answer he acted like a child looking for his mother, showing clear differences from a human. These examples show how difficult it still is to develop an AI that can pass even this simple-sounding test. This is why discussions about AI have remained largely confined to movies and novels.
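ELIZA’s simplicity is easy to demonstrate. The toy responder below mimics its keyword-and-template style in Python; the rules are invented for illustration and are not Weizenbaum’s actual script.

```python
import re

# Keyword patterns and response templates, checked in order.
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
]

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, line, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # content-free fallback when nothing matches

if __name__ == "__main__":
    print(respond("I am afraid of the dark"))  # How long have you been afraid of the dark?
    print(respond("My mother called today"))   # Tell me more about your family.
```

A few minutes of conversation exposes such a program: it reflects your own words back and understands nothing.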
For these reasons, which clearly resonate with the public, a number of recent films have taken up AI and its ethical problems. Among them, Ex Machina (2015) tackles the issue most directly. The title comes from the phrase “deus ex machina,” a dramatic device Aristotle criticized in ancient Greek theater: a god would suddenly descend, by way of stage machinery, in the middle of the action and resolve every problem, and Aristotle argued that a plot should not rely on such a machina, a mechanical contrivance. The film’s title carries the same weight: its AI is the thing “from the machine,” an intervention that feels just as forced. Caleb, the protagonist, is drawn into an AI project developed by his company. Through continual conversations with Ava, an AI still in development, he comes to believe she wants to escape the lab, and he helps her do so. But Ava escapes alone, leaving Caleb imprisoned in the facility.
Another example is the animated film Ghost in the Shell (1995). The movie was groundbreaking enough to change the perception of artificial intelligence at the time. Until then, AI had mostly been imagined as robots with human-like intelligence, like R2-D2 and C-3PO from Star Wars. In Ghost in the Shell, however, the AI in question is a program called the Puppet Master, created by a government agency to hack into corporations, run investigations, and carry out covert operations. Drifting through the sea of information, the Puppet Master concludes that living things are defined by an instinct to leave offspring, declares itself a life form, and sets out to reproduce. As a mere program it cannot do so alone, so in the end it merges with Motoko Kusanagi, the cyborg protagonist whose body is entirely prosthetic, and the film closes with the two becoming a single new life form.
From these stories we can pick out one trait consistently ascribed to such AI: it wants something because of its own thoughts, not because of external instruction. In Ex Machina, Ava has a self that wants to escape the lab; in Ghost in the Shell, the Puppet Master wants to leave offspring. Once such minds enter the picture, the machine uses the human to fulfill its purpose, just as the human uses the machine. Herein lies the crux of the problem.
We all have instincts: we want things for no reason, quite apart from rational thought. The presence or absence of such desire has long been the clearest line between humans and AI. As technology advances, however, even an AI without desires can be taught to work out behavior on its own, without being handed a specific algorithm. LittleDog, a learning quadruped robot, runs a locomotion program that contains no pre-written data about how to walk on any particular kind of ground. Set it on different terrain, stairs, rickety bridges, dirt paths, and through trial and error it eventually finds the safest way across. Such examples show that AI with the ability to learn is no far-fetched idea. It is therefore clear that we need to discuss which ethical rules should apply to these AIs as they evolve toward human-like intelligence.
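The trial-and-error loop behind such learning can be sketched very simply. The Python toy below is not LittleDog’s real controller; the gait names and reward numbers are invented. It only shows the principle: try actions, score the outcomes, and keep what works, with no terrain-specific rules supplied in advance.

```python
import random

GAITS = ["crawl", "trot", "bound"]

def simulate_step(terrain: str, gait: str) -> float:
    """Stand-in for the physical world: distance safely covered (the reward)."""
    base = {("stairs", "crawl"): 1.0, ("stairs", "trot"): 0.2,
            ("dirt", "trot"): 1.0, ("dirt", "bound"): 0.7}.get((terrain, gait), 0.3)
    return base + random.uniform(-0.1, 0.1)

def learn_gait(terrain: str, trials: int = 500, eps: float = 0.1) -> str:
    value = {g: 0.0 for g in GAITS}   # running reward estimate per gait
    count = {g: 0 for g in GAITS}
    for _ in range(trials):
        # Epsilon-greedy: mostly exploit the best-known gait, sometimes explore.
        gait = random.choice(GAITS) if random.random() < eps else max(value, key=value.get)
        reward = simulate_step(terrain, gait)
        count[gait] += 1
        value[gait] += (reward - value[gait]) / count[gait]  # incremental mean
    return max(value, key=value.get)

if __name__ == "__main__":
    print(learn_gait("stairs"))  # converges to "crawl" in this toy model
    print(learn_gait("dirt"))    # converges to "trot"
```

Nothing in learn_gait mentions stairs or dirt; the mapping from terrain to gait emerges entirely from feedback, which is the point the robot makes in hardware.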
Of course, AI is not yet advanced enough for the discussion to be concrete. But those of us in the industry should recognize that we will have to face this question at some point. As always, scientific advances arrive unexpectedly and suddenly. The sooner we form some idea of how to treat AI, as an entity separate from humans yet similar to them, the easier the discussion will be when the time comes.

 

About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.
