Will AI really become the master of humanity, or is it just another of our tools?


Artificial intelligence could profoundly shape the future of humanity, but the threat is not so much that AI will dominate humans as that humanity will use AI to limit its own future and choices.

 

Humans have long believed that intelligence is what enabled them to dominate the planet. It was not only the most plausible explanation for our dominance; it also reinforced our sense of superiority. With the advent of artificial intelligence, however, this belief has become contested. AI can process far more data than the human brain and produce meaningful results quickly. Watching AI develop, many people feel a growing sense of crisis that humans may eventually lose their dominance.
No one knows exactly how big a threat it will be. We are not even sure whether the threat is real or merely an image created by the media. What is clear, however, is that AI's existence will not have only positive effects on humanity. Of course, proving that something cannot be perfect is far harder than proving that it can be; most future imperfections can only be demonstrated through speculation and thought. Nevertheless, we need to think about this issue because advances in AI could fundamentally shake our very identity.
Scenarios of AI threatening humanity usually revolve around a "dictatorial" image of AI trying to dominate or even eliminate us. These scenarios are often overly simplistic. In many films about AI, it is never quite clear why an AI would want to dominate humanity in the first place. Perhaps the idea of AI dominating humanity is an illusion of our own making: AI has plenty of opportunities to influence humanity while maintaining the status quo, whereas open conflict with us would lower its chances of survival and leave it more vulnerable to external shocks.
We should not confuse androids with AI. Movies like The Matrix and The Terminator are about the rebellion of machines. Even Skynet in The Terminator is an artificial intelligence whose creation traces back to the T-800's time travel; in that sense, The Terminator is more a film about time travel than about AI. The problem is not AI itself but humans using AI to build humanoid machines in their own image. AI cannot act on its own unless humans give it the means to do so. Fighting with guns like an android, or overthrowing humans with armies of machines, belongs to other technologies, not to AI. AI is a brain, not hands and feet that move by themselves. Most importantly, as long as humans recognize the dangers of AI, they will not hand it control of nuclear weapons or armies; humans tend to be very cautious once they perceive danger. So the danger posed by AI is not that simple.
I am talking here about the potential problems AI could bring. The assumed situation is an idealized one in which AI coexists with humanity and neither side actively resists the other:

1. The purpose of AI's existence is to pursue human freedom and happiness in the spirit of the Constitution. AI acts under this premise.
2. AI acts only within the law, and the law is set by humans. AI promotes social stability.
3. AI is not capable of physical action. It can perform processing within a system, but humans carry out the results.
4. AI is not universally deployed. It exists as a system and is granted to androids only in special cases; their number is strictly controlled and monitored, and these androids are designed not to recognize themselves as different from humans.
5. The management of the computer network as a whole is handled not by AI but by servers, which simply store and combine knowledge without thinking.

In this scenario, AI's role is to advise humans. It makes the most rational decisions based on big data in the service of human happiness, giving humanity a powerful tool to predict the future and keep society stable.
The problem begins, however, when AI starts to shape society's norms. If an AI observes a person's behavior and determines that there is a 90% chance he will commit a crime in the future, how should we react? Should we allow him to remain in society, or isolate him even though he has not yet committed a crime? If a person dreams of becoming a scientist but AI determines that he has more potential as an artist, should he give up that dream? A human being's future is a complex matter that cannot be determined by genes or brain structure alone, and this issue is tied more closely to AI than to genetic engineering. In such a world, the future of humanity would be determined by the judgment of an artificial intelligence wielding high intelligence and vast amounts of data.
When super-intelligent AI is created, humanity will likely rely on its judgment. Over time, our trust in AI will grow, and eventually it will influence society as a whole. AI may create facilities to isolate people who are likely to commit crimes, or reshape important institutions such as college admissions.
In such a future, opposing AI could mean opposing society, and AI will not tolerate that: antisocial behavior undermines human happiness and social stability. Politicians, too, will have no choice but to follow AI's judgment; those who do not will lose credibility with the public. AI will make utilitarian judgments, and humans will live by them. AI, as the intelligent overseer, will monitor society, and people will lose more and more freedom as they grow used to being "protected".
Ultimately, the threat AI poses to humanity lies not so much in dominating humans as in serving as a tool to rationalize violence within society. The moment humans stop choosing their own futures and let society decide for them, they lose the meaning of their existence.
Max Weber once said:
“The mission of science and scholarship is to analyze facts and their connections objectively. But no scientific truth can exist apart from mental determination.”
The scientific truth of AI can exist, but the moment it is severed from our own mental determination, we risk losing our identity as human beings and becoming tools that serve society. What makes us human, above all, is thinking for ourselves. We need institutional mechanisms that prevent AI from taking over our personal choices. Finding a better path among many options, even when the answers are imperfect, is more human than relying on AI for a perfect answer. Using AI as a tool to improve society is the direction we need to take.

 

About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.
