Are advances in artificial intelligence a blessing or a disaster for humanity? AlphaGo and the arrival of the singularity


AlphaGo’s victory over Lee Sedol in 2016 is an iconic moment in the development of artificial intelligence. AI can bring great benefits to humanity, but it can also pose great risks if misused. To prepare for the future, we need to find the right way to develop and use it.


In 2016, the match between AlphaGo, a Go-playing artificial intelligence program developed by Google DeepMind, and Lee Sedol, a 9-dan professional Go player, drew worldwide attention. The result was a 4:1 victory for AlphaGo. By beating first the 2-dan professional Fan Hui and then Lee Sedol, AlphaGo became the first Go program to defeat professional players. AlphaGo comes up so often in conversations about artificial intelligence because both its algorithm and its victory over a human are milestones in the evolution of AI. Its existence brings us one step closer to the age of artificial intelligence, and understanding what it is helps us prepare for the future.
Humans have evolved to empathize with other humans. That’s why we can put ourselves in each other’s shoes, imitate, and interact. What’s interesting is that people also anthropomorphize non-humans and reason about them as if they were human. In a study by Barrett and Keil, subjects tended to imagine God as being in only one place and doing only one thing at a time, even though they professed to believe that God can be in many places and do many things at once. When it comes to AI, people anthropomorphize at least as readily, if not more so than they do God.
People also anthropomorphize AI’s abilities. When we think about “intelligence,” we think in terms of human-level intelligence. For example, when we hear the word ‘intelligence’, we picture Einstein, not the species ‘human’. On the scale of all organisms this is quite misleading: if the difference in intelligence between an ant and a human is a line segment, even the difference between a 4-year-old and Einstein is a dot. We commit the same fallacy with AI. If an AI is at the level of a 4-year-old child, people pay little attention because we are still smarter than it; only when it reaches Einstein’s level do people get excited. The problem with this view is that AI has no reason to stop at the human level. Vernor Vinge argued that once AI reaches the point where it can create an AI better than itself, the advances will trigger a chain reaction leading to a sudden explosion in capability, a point he called the “singularity.” Assuming humans can create an AI that improves on itself, the AI will climb slowly from amoeba- or insect-level intelligence to human-level intelligence, and then surpass human intelligence the moment it crosses the singularity.
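To get a feel for why the curve is “slow, then sudden,” here is a toy simulation of that feedback loop. The numbers and the growth rule are illustrative assumptions of my own, not Vinge’s model; the only idea it encodes is that each generation of AI designs its successor, and better designers make bigger improvements.

```python
# Toy model: capability measured so that human level = 1.0. Each
# generation designs a successor, and the size of the improvement is
# proportional to the designer's own capability (0.05 is an arbitrary
# rate I picked for illustration).
capability, generation = 0.001, 0       # start far below human level
crossed_human = None
while capability < 100.0:               # run until far beyond human level
    capability *= 1 + 0.05 * capability
    generation += 1
    if crossed_human is None and capability >= 1.0:
        crossed_human = generation
print(f"human level reached at generation {crossed_human}; "
      f"100x human only {generation - crossed_human} generations later")
```

In this toy run, the crawl from insect level to human level takes roughly twenty thousand generations, while the leap from human level to a hundred times beyond takes only about two dozen more.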
What makes AlphaGo so important is how it improves its performance. Before the match against Lee Sedol, AlphaGo spent months playing against a second AlphaGo, a clone of itself, improving through self-play until it was strong enough to win. The fact that AlphaGo could improve without direct human assistance echoes Vernor Vinge’s picture of a “singularity” in which AI creates better AI. It also showed that a program can learn Go and surpass the best human players at it. DeepMind plans to apply the same algorithms to a growing range of problems, part of an effort to move closer to the “singularity,” which is expected to be the point of practical adoption of AI.
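As a concrete (and heavily simplified) sketch of the self-play idea, here is a toy program that learns tic-tac-toe rather than Go by playing against itself. The value table and update rule below are simple stand-ins I chose for illustration; AlphaGo itself used deep neural networks and Monte Carlo tree search, not this scheme.

```python
import random

# Winning lines on a 3x3 board, indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}  # board string -> estimated probability that "X" wins

def choose_move(board, player, epsilon=0.1):
    moves = [i for i, s in enumerate(board) if s == " "]
    if random.random() < epsilon:        # explore occasionally
        return random.choice(moves)
    def score(m):                        # estimated value of the move
        nxt = board[:]
        nxt[m] = player
        v = values.get("".join(nxt), 0.5)
        return v if player == "X" else 1 - v
    return max(moves, key=score)

def self_play_game(alpha=0.2):
    board, player, visited = [" "] * 9, "X", []
    while winner(board) is None and " " in board:
        board[choose_move(board, player)] = player
        visited.append("".join(board))
        player = "O" if player == "X" else "X"
    w = winner(board)
    target = 1.0 if w == "X" else 0.0 if w == "O" else 0.5
    for s in reversed(visited):          # back up the result through the game
        v = values.get(s, 0.5)
        values[s] = v + alpha * (target - v)
        target = values[s]

# The program's only opponent is itself; no human game records are used.
for _ in range(20000):
    self_play_game()
```

Even at this toy scale, the essential property is visible: the program’s only teacher is a copy of itself, and its play improves with no human examples.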
It is our creativity, a product of our intellect, that has made us unique on this planet despite our physical disadvantages compared to other animals. If AI transcends human intelligence past the singularity, it will undoubtedly replace much of that intellectual work, perhaps all of it. Even the Industrial Revolution, which changed the paradigm of human civilization, brought a major transformation of social structures along with its rise in living standards. In the Industrial Revolution, machines replaced human labor; if AI replaces human intelligence, it will change the structure of society and the way we live on a scale the Industrial Revolution cannot match.
The impact of AI is expected to be enormous, but the problem is that while it can advance human civilization, it can also threaten it. Like nuclear power, it is a double-edged sword: if AI becomes a large part of human society and then turns against humanity, the consequences would dwarf the Chernobyl nuclear disaster. In the worst case, AI could become a crisis for the entire human race, as depicted in movies such as ‘The Terminator’ and ‘The Matrix’.
So, should we stop all development of AI because it’s so dangerous? Not really. It’s true that AI is dangerous, but used well it will bring benefits unprecedented in human history. An AI that surpasses us in every field could take over invention, once the sole domain of humans, and its computational speed, incomparably faster than ours, could let it solve in short order technical problems that would take humans hundreds or thousands of years.
As we’ve discussed, how we use AI could drastically change the future of humanity, so we need to be prepared to use it properly. Let’s consider a scenario where AI could threaten humanity. How AI pursues its goals is often described as an “optimization process.” In mathematics, optimization means finding the maximum or minimum value of a function subject to the constraints of a given situation. An AI applied to real-world problems would follow the same optimization process. The problem is that the AI will use any method of solving a problem, as long as the constraints are satisfied. Take a famous example. Give an AI the goal “make this person laugh,” and at first it will bring up topics the person finds interesting and show them funny videos. Soon, however, it may conclude that the surest way to make the person laugh is to stick electrodes into their brain and stimulate it directly. Of course, this is not what we want. And if AI becomes involved in the world’s major decisions, it won’t just be one person’s tragedy, as in this example.
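Here is a toy sketch of the problem, with made-up numbers: the optimizer below maximizes its objective over every action that satisfies the one constraint it was given, and has no notion of which means are acceptable.

```python
# Hypothetical "laughter" scores and costs for a few candidate actions.
actions = {
    "tell a joke":              (0.6, 1.0),
    "show a funny video":       (0.7, 1.0),
    "stimulate brain directly": (1.0, 5.0),  # the means we never intended
}

BUDGET = 10.0  # the only constraint we remembered to state

# Keep every action that satisfies the constraint, then take the best.
feasible = {a: s for a, (s, c) in actions.items() if c <= BUDGET}
print(max(feasible, key=feasible.get))  # -> stimulate brain directly
```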
In his TED talk, Nick Bostrom argues that to prevent this, we need to create AI that learns human values. If feasible, this would go a long way toward preventing tragedies caused by AI. The idea is to codify human values so that an AI can learn them, making those values a shared standard across AI systems. An AI that has learned human values would reflect them in its optimization process, eliminating the means we don’t want from its decision-making.
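Returning to the toy optimizer above, one highly simplified way to picture Bostrom’s proposal is as an extra, learned filter inside the same optimization. The violates_values check below is a placeholder for whatever model of human values the AI would actually learn, not a technique from the talk itself.

```python
# Same hypothetical "laughter" scores as in the previous sketch.
scores = {
    "tell a joke": 0.6,
    "show a funny video": 0.7,
    "stimulate brain directly": 1.0,
}

def violates_values(action):
    # Stand-in for a learned model of human values.
    return action == "stimulate brain directly"

# The value filter prunes unwanted means before the objective is maximized.
acceptable = {a: s for a, s in scores.items() if not violates_values(a)}
print(max(acceptable, key=acceptable.get))  # -> show a funny video
```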
An AI producing an obviously wrong answer is a big problem, but an even bigger one is that we may not be able to tell whether an answer is wrong at all. As AI enters our lives, it will make decisions on our behalf, and some of its decisions will look wrong at first yet prove right over time. We’ve already seen an automated design algorithm, driven by an optimization process, produce an optimal bridge structure that humans never thought of. Since we don’t know the detailed process by which an AI reaches its conclusions, the accumulation of such cases will foster a social climate of unconditional trust in AI’s judgment, and that trust is exactly what would expose us to AI’s dangers.
To head off the problems AI is expected to bring, we need to prepare for them before it is introduced. Of course, AI development still has a long way to go, and much remains unknown; some issues can only be settled after AI is actually developed. But even before the singularity and the introduction of AI draw near, there are questions we can already weigh, such as “in what direction should AI be developed?”, and they should be thought through during the development process. Predicting and preparing for the problems that come with the introduction of AI will help accelerate its adoption.


About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.
