When the lines between AI and humans blur, how will we define our humanity?

Drawing on Alan Turing's imitation game and the arguments of Brian Christian, this paper argues that once AI learns and socializes in human ways, the boundary between humans and AI will blur, and we will have to rethink what it means to be human.


British mathematical genius Alan Turing asked the question, "Can a machine think?" and proposed the imitation game as a criterion for answering it. In this game, a judge converses with both a human and a machine without knowing which is which, and the machine is deemed to have passed if the judge mistakes it for the human. This imitation game is what we now call the Turing Test. In 2009, Brian Christian, author of The Most Human Human, participated in one such Turing Test competition, the Loebner Prize contest. To beat the AI entrants there, he identified many of the qualities that make humans more human than AI, and he argues that we need to cultivate those qualities in order to distinguish ourselves from AI and lead human lives in the future.
Since the term "artificial intelligence" was coined in 1956, the field has continued to evolve. Early AI was used to solve complex but well-structured problems; then came expert systems, which encoded human expertise and experience as rules so that machines could offer the right knowledge at the right time. Later, machine learning emerged, and its techniques came to be divided into supervised learning, unsupervised learning, and reinforcement learning, depending on how data is given to the system. To overcome the limitations of earlier machine learning, deep learning, built on artificial neural networks modeled loosely on the human brain, was introduced, and AI began to learn features from data on its own. If AI learns in the same way humans do through deep learning, would we no longer be more human than AI? Rather, AI and humans would possess the same "humanity." Before we go further, let's define humanity.
The Turing Test is a test that answers the question of whether an AI can think. However, given that a machine can currently "pass" by fooling just 33% of judges, given the short five-minute time limit, and given the restrictive conditions of the conversation that Brian Christian points out, we define humanity more strictly: an interlocutor has humanity when every judge in an "unconstrained" Turing test finds no evidence that the interlocutor is an AI.
In the past, AIs that participated in Turing Tests either answered certain questions in a calculated way or consulted a database of human-to-human conversations: when asked a question that appeared in the database, they returned the stored answer. So being the "most human human" was the best way to win the Turing Test, and Brian Christian's approach was right for its time. But the AI we will see in the future will use deep learning algorithms. Since deep learning algorithms are, in a sense, reverse-engineered from the human brain, such an AI learns the way humans do. The AI that participated in Turing tests in the past and the AI that will participate in the future are therefore fundamentally different.
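To make the contrast concrete, here is a minimal, hypothetical sketch of the old database-driven approach described above. Every question is looked up verbatim, so the bot always gives the same stored answer and carries no state between turns; all questions and answers here are invented for illustration.

```python
# A toy sketch of a database-driven chatbot: stored human conversations are
# replayed verbatim, with a canned deflection for anything unrecognized.

conversation_db = {
    "how are you?": "I'm fine, thank you. And you?",
    "what is your name?": "My name is Alice.",
    "where do you live?": "I live in London.",
}

def reply(question: str) -> str:
    # Exact-match lookup: the same question always yields the same answer,
    # and the bot remembers nothing about earlier turns.
    return conversation_db.get(question.lower().strip(),
                               "That's interesting. Tell me more!")

print(reply("Where do you live?"))         # stored answer, every time
print(reply("What did I just tell you?"))  # canned deflection: no real state
```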
So what separates us from AI? According to Brian Christian, unlike AI, we have state-dependent and place-appropriate conversations. A state-dependent conversation is one that understands, remembers, and reflects on past situations; a place-appropriate conversation is one that understands and reflects the current situation. An interlocutor capable of both would pass the "unconstrained" Turing test. The database-driven AIs of the past had no basic understanding of where they lived, who they were, or what their current situation was, so they either parroted back what the other person said or simply output answers drawn at random from a large body of conversation data. Even an AI given a specific identity fails to reflect the context of the current conversation. For example, Eugene Goostman, an AI said to have passed the "constrained" Turing test, has the identity of a 13-year-old Ukrainian boy. Here is a conversation with Eugene Goostman that Ray Kurzweil, author of How to Create a Mind, posted on his blog:

Ray Kurzweil: I live in the capital of the United States. Do you know where that is?
Eugene: Even an idiot knows the capital of the United States is Washington.
Ray Kurzweil: I live in the capital of the country that put a man on the moon.
Eugene: Well, if you want an answer, the capital of the United States is Washington.
Ray Kurzweil: I live in the capital of the country that built the Great Wall of China.
Eugene: Tell me more about the capital, I love hearing about different places.
Ray Kurzweil: How old were you when you started wearing glasses?
Eugene: If I'm not mistaken, you still haven't told me where you live. Is it a secret?

Ray Kurzweil has told Eugene where he lives, yet Eugene Goostman keeps asking about it, which shows that Eugene Goostman does not understand the context of the conversation at all. But what about AI built on deep learning?
Consider AlphaGo, the most famous AI that uses deep learning. AlphaGo was taught to play Go using deep learning, and in March 2016 it defeated Go master Lee Sedol. This is very significant: Go was considered the last bastion against machines even after chess fell to AI, because Go has more possible board positions than there are atoms in the universe, so it demands not only calculation but human intuition. Intuition is necessary in Go because you cannot calculate every continuation; you have to play moves that are likely to win given the situation, and intuition is precisely the ability to see the whole picture in the moment. AlphaGo takes the same approach: in each situation, it chooses the move it judges most likely to win. In the end, AlphaGo surpassed humans in intuition, a trait that was thought to be uniquely human, which suggests that AI may surpass humans in other areas as well.
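To make that idea concrete, here is a toy sketch, not AlphaGo's actual architecture, of choosing a move by estimated value rather than exhaustive search. The candidate moves and win probabilities below are invented stand-ins for what AlphaGo's trained networks would output.

```python
# A hedged sketch of value-guided move selection: score each candidate move
# with a (here, fake) win-probability estimate and play the most promising
# one, with no exhaustive lookahead over every continuation.

fake_value_network = {
    "D4": 0.52,   # pretend win probability for each candidate move
    "Q16": 0.61,
    "C3": 0.44,
    "K10": 0.57,
}

def choose_move(legal_moves):
    # "Intuition" as the text describes it: judge the situation as a whole
    # and pick the move estimated most likely to win.
    return max(legal_moves, key=lambda move: fake_value_network[move])

print(choose_move(["D4", "Q16", "C3", "K10"]))  # -> Q16
```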
AI may do the same in conversation. To have "state-dependent" and "place-appropriate" conversations, an AI must not only be able to talk; it must also have "sociality," the ability, acquired while growing up as a member of a society, to internalize the community's mindset and regulate its relationships with other members. If an AI can learn both how to talk and how to socialize, it will be just as human. Humans learn language after birth: we start by repeating what our parents say, then we learn the meanings of words, and then, through real-life experience, we work out how to talk in particular situations. Deep learning works the same way. A chatbot built on deep learning has a huge database, but it does not use the data unfiltered; it recognizes patterns in conversations and reflects them. It judges whether a new exchange resembles existing human conversations and learns from that judgment. Therefore, AI can learn to hold conversations, as the sketch below illustrates.
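As a rough illustration of pattern-based response selection, here is a toy sketch using simple string similarity. Real deep-learning chatbots encode such patterns in network weights rather than an explicit table, and all utterances here are invented.

```python
import difflib

# A toy stand-in for pattern matching in conversation: rather than replaying
# exact database entries, compare a new utterance against known conversational
# patterns and reuse the reply of the closest one.

patterns = {
    "hello there": "Hi! How are you doing today?",
    "i had a terrible day at work": "I'm sorry to hear that. What happened?",
    "tell me about yourself": "I enjoy reading and long conversations.",
}

def respond(utterance: str) -> str:
    # Find the known pattern most similar to the new input.
    match = difflib.get_close_matches(utterance.lower(), patterns,
                                      n=1, cutoff=0.3)
    return patterns[match[0]] if match else "Could you say that another way?"

print(respond("Hello!"))
print(respond("I had a really terrible day at the office"))
```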
Humans also learn to socialize after birth. As children grow up, they learn social norms such as morals, absorbing what is acceptable and what is not. We can also see that a person's social behavior is greatly influenced by the environment they grow up in, so social behavior is learned. As mentioned earlier, deep learning learns the way humans do, so if humans can learn something, an AI can learn it too. Therefore, through deep learning, AI can acquire human-level sociality and hold conversations that reflect it.
Since deep learning learns the way humans do, let's look at how it learns to communicate in the context of big data. With the popularization of PCs, smartphones, and the internet, the amount of data has increased exponentially. This mass of largely unstructured data is called big data, and it becomes the training material for deep learning. A typical example is social media. As of September 2016, Facebook had 1.79 billion users; if each user writes one post per week, that is 1.79 billion posts a week. From these posts, big data techniques combined with deep learning can extract the sentiment of a specific audience. Therefore, in a given conversational situation, an AI can recognize the emotions of the moment.
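As a toy stand-in for the sentiment extraction described above, here is a minimal lexicon-based scorer over invented posts. Production systems would use deep learning over far larger corpora, but the input-to-label flow is the same idea.

```python
# A toy sentiment scorer: count hand-picked positive and negative words in
# hypothetical social media posts and label each post accordingly.

positive = {"love", "great", "happy", "wonderful"}
negative = {"hate", "awful", "sad", "terrible"}

posts = [
    "I love this new phone, the camera is great",
    "Awful service today, I am really sad",
    "What a wonderful, happy morning",
]

def sentiment(post: str) -> str:
    words = set(post.lower().replace(",", "").split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

for post in posts:
    print(f"{sentiment(post):>8}: {post}")
```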
So, if deep learning AI already exists, why is there still no AI that can hold a flawless conversation? Deep learning is modeled on how the human brain works, but we do not yet fully understand how the neocortex operates, and deep learning remains an unfinished field. More efficient algorithms keep emerging on the software side, and hardware advances such as quantum computing open further possibilities. One thing seems certain: just as Ray Kurzweil predicted that AI would surpass human-level intelligence in the 2030s, the enormous potential of deep learning gives us reason to believe that AI can reach human-level intelligence.
It is true that Brian Christian's attempt to distinguish humans from AI has given us questions, answers, and food for thought about whether we are living humanly. But for AI that learns through deep learning, in the end there will be no difference between AI and humans. The textbooks AI learns from will be human ones, so it will reach a point where it is indistinguishable from us. In the end, there will be no "humans more human than AI," only "AI as human as humans." Therefore, what we need to do now is not make ourselves more human in the face of AI, but prepare for a society in which AI reaches and surpasses human intelligence.


About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.
