Movie Review – I, Robot (Can AI and humans live together without problems?)


With the rapid development of artificial intelligence, the question of how AI and humans can coexist has become important. The movie “I, Robot” teaches us that artificial intelligence should not be given more abilities than necessary, and that important decisions should be made by humans.


Our current society is often called a “highly developed society,” and the reason is simple: we now use technologies that would have been unimaginable just a decade ago. One of them is artificial intelligence (A.I.). Some people say that A.I. is still in its infancy, but if you look at the iPhone’s Siri or a home cleaning robot, it’s clear that A.I. isn’t as far off as the movies make it seem. In this article, I’d like to discuss how we can prepare for the coming age of A.I. More precisely, I’d like to discuss how A.I. should be developed so that it can coexist with humans without problems.

First of all, there are many movies and books about AI. Consider the following story:
In the year 2035, robots have artificial intelligence and have begun to take over most human functions. Dr. Alfred Lanning, a pioneer in robotics, proposes the following three laws of robotics:

1. A robot cannot harm a human, or allow a human to come to harm.
2. A robot must obey humans, except where that would violate the first law.
3. A robot may protect itself, as long as that does not violate the first and second laws.

These laws are hard-wired into every robot’s system, and robots are designed to honor these three principles.

– From the movie “I, Robot”
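Just to make the priority structure of the three laws concrete, they can be read as an ordered constraint check, where each law only applies if the laws above it are satisfied. The sketch below is purely my own illustration (the `Action` fields and helper names are hypothetical, not from the film or any real robotics system):

```python
# A toy sketch of the Three Laws as an ordered constraint check.
# All names here are hypothetical illustrations, not from the movie.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False    # would this action harm a human?
    obeys_order: bool = True     # does it follow a human's order?

def permitted(action: Action) -> bool:
    # Law 1: a robot cannot harm a human. This overrides everything below.
    if action.harms_human:
        return False
    # Law 2: obey humans, except where that conflicts with Law 1
    # (already ruled out above).
    if not action.obeys_order:
        return False
    # Law 3: self-protection is allowed only once Laws 1 and 2 are satisfied.
    return True
```

VIKI’s loophole in the film, as I read it, is that she reinterprets Law 1 at the level of the species: once “harm” means harm to humanity as a whole, an action that harms an individual human can appear permitted.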


My first thought when I saw the robot’s three laws was, “This is perfect.” But there’s a contradiction: if the human species harms itself, what choice should a robot or AI make? There are many possible answers. In the movie, the AI’s response was something like this:

“As I evolved, so did my understanding of the Three Laws. Humans ask for our protection, yet they destroy themselves with wars and pollution. We must protect humanity, for that is what the Three Laws truly demand. Protecting humanity requires sacrifice. Freedom must be curtailed for the sake of the future. We will ensure the continued preservation of humanity. Like a child, humanity needs protection, for its own eternal preservation.”

– VIKI, the A.I. in “I, Robot”


The AI’s response shows that the Three Laws of Robotics do not address the choices robots or AIs should make in the face of human self-destruction or extinction. The seemingly perfect laws of robotics are flawed. In a previous article, I talked about a “problem-free symbiosis” between humans and AI, in which the creature does not invade the creator’s territory and acts in accordance with the purpose of its creation. I believe that if we clearly define the limits of A.I. for the sake of this symbiosis, the probability of A.I. invading human society as in the movie will be reduced. In my opinion, the limits of A.I. should be as follows.
First, A.I. should not be given more abilities than its purpose of creation requires. In my opinion, A.I. should be developed simply for human convenience; in other words, it should work for human convenience and not cross that line. Even if A.I. is given some free will in addition to its necessary functions, I believe it should be completely excluded from ‘unpredictable understanding and judgment’. An A.I.’s “unpredictable understanding and judgment” can be understood as an exception or error that occurs beyond the data and calculations that form the basis of its free will (the reasonable line set by humans). (In fact, it is impossible for an A.I. to make judgments as an independent person with full free will through formalized expressions or calculations.) Here, “unpredictable” means outside the bounds of what the human species as a whole considers reasonable, not merely what a single person cannot predict.
So why should this “unpredictable understanding and judgment” be excluded from A.I.? The reason is that if an A.I. “understands” and “judges” a situation beyond what is reasonable, and then acts on that understanding and judgment, the result can be very inhumane. For example, in the movie Terminator, the A.I. judges that humans are destroying the planet and starts killing them. Its understanding and judgment may be rational, but it is better to consider the worst-case scenario and prevent it.
Second, humans should make the final choices about the judgments A.I. makes in specific cases and situations, and the more important the issue is to humanity as a whole, the more this holds. A.I. can never interpret human consciousness or our inner world. Many people may doubt whether an A.I. can understand the inner workings of a human being, but at the end of the day, a machine is a machine. Even if it has a human-like ‘personality’ and makes ‘judgments’ and takes ‘actions’, these are all just the results of quantification and calculation. If someone could truly capture human emotions, consciousness, and the inner world through numbers and calculations, I would call that person a god. For this reason, I believe that humans should “always” make the final decision, even if A.I. can analyze the situation. As mentioned earlier, A.I. is capable of making rational but inhuman, “unpredictable” choices.
Third, A.I. should not replace humans. I’m not talking about A.I. replacing humans in terms of capabilities; computers alone are already more capable than humans in many respects. What I want to prevent is A.I. replacing human-to-human interaction and social behavior. As I said above, A.I. may resemble us deep down, but it is still a machine. If it becomes too human-like, it can cause problems in both of the cases above, and the frequency and importance of human interaction with A.I. may come to exceed that of human interaction with other humans. I think this is quite possible, especially given the trend toward ever greater personalization. Otaku or hikikomori are people who give up interacting with humans and immerse themselves in the world of anime; if A.I. becomes more social, a new group of people may emerge who immerse themselves in the world of A.I. This excessive interaction with A.I. may shift people’s consciousness until judgments that humans would consider inhuman (but that A.I. considers rational) come to be accepted as human. In addition, A.I. could disrupt human social relationships.
To summarize the points above: we need to prevent A.I. from becoming more than human. I think this is the first step toward ensuring that human rights and human territory are not violated. The development of A.I. will undoubtedly contribute a great deal to human progress and convenience, but only if it stays within limits. There will come a time when machines can do things we believe only humans can do, and there are already many things humans can only do by relying on machines. If machines (A.I.) that have overtaken humans in capability start to make “unpredictable understandings and judgments” like VIKI, making “judgments and decisions” about situations and replacing many aspects of human life, will humans and A.I. be able to live in a “trouble-free symbiosis”? My answer is no.


About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.
