The Future of AI, Opportunity or Crisis – What’s the Role of AI Safety Engineering?

T

Since AlphaGo’s victory in 2016, interest in the potential and risks of AI has skyrocketed. To ensure the safety of AI, a new field has emerged called AI safety engineering, which requires thorough validation at every stage of AI goal setting, algorithm design, and training data selection.

 

Since AlphaGo’s dominating victory over a human professional Go player in the 2016 Google DeepMind Challenge, there has been a growing global interest in AI, but also a growing concern about its risks. One of the most prominent examples of AI in the spotlight was the 2015 Twitter AI chatbot Tay, which spewed racial slurs and other offensive comments. Because Tay learns by looking at the statistical distribution and associations between keywords in a sentence, rather than thinking about the meaning of the words themselves, as humans do, the chatbot could change its behavior if it was fed enough pro-Holocaust statements. Some internet users have exploited this feature to deliberately train the system on racist statements. Tay’s story highlights how design flaws in AI can cause great harm to society, and in 2017, two negotiation AIs being researched by Facebook developed their own language that humans couldn’t understand. After these events, concerns about the dangers that AI poses to human society are no longer the realm of the imagination, but a very real problem that we need to address immediately. The field of AI safety engineering has recently been created to find ways to prevent catastrophes caused by AI.
The “disaster caused by AI” that computer science experts are worried about is very different from the “machine rebellion” that is usually depicted in movies. A classic thought experiment is Nick Bostrom’s “paperclip maximizer”. In this thought experiment, a general-purpose AI is set up with the ultimate goal of producing as many paperclips as possible. By its very nature, an AI will act in the direction that has the highest probability of achieving its end goal. In the real world, a general-purpose AI system built to solve any category of problem will inevitably consume resources in its activities. These resources will inevitably overlap with those required for the existence of human civilization. For example, the production of a paperclip requires metallic raw materials and power, and additional materials, chemicals, and fuel are consumed to operate and maintain the production facilities. The possibility that these general-purpose AIs, especially those with stronger capabilities than humans, will continue to consume resources and act against human interests, especially when their goals or means conflict with those of humans, cannot be ignored. And because general-purpose AI can continue to increase its intelligence and capabilities, the scope of its activities and resource consumption will also increase. Imagine mining and using the resources of the entire planet, or even the entire solar system, to dramatically increase the production of paperclips. At this point, the very existence of human civilization is threatened, and there is little that can be done to stop a general-purpose AI that has reached this stage. It will be more intelligent and have a wider range of activities than humans. Machines don’t have to be directly aimed at harming humanity to be disastrous.
Artificial Intelligence Safety Engineering, a term coined by Dr. Roman Yampolskiy in 2010, is a relatively new field. It is a discipline at the intersection of philosophy, applied science, and engineering that aims to ensure that artificial intelligence software operates safely and reliably in accordance with the goals set by humans. It was initially considered pseudoscience or science fiction, but has since been recognized as a distinct subfield of AI research. AI safety engineering covers a very wide range of research, including methodological studies and case studies, throughout the development and operation stages of AI, from goal setting to algorithm design, actual programming, training data provision, post-management, and protection against hacking. As such, it is a field that requires the convergence of expertise from a wide variety of disciplines, including computer science, cybersecurity, cryptography, decision theory, machine learning, digital forensics, mathematics, network security, and psychology. However, according to Dr. Yampolskiy, despite the enormity of the research challenge, there is a severe shortage of subject matter experts working on AI safety engineering.
One research organization that is currently working on AI safety is OpenAI. OpenAI is a nonprofit research company that aims to develop safe, general-purpose AI and ensure that its benefits are equitably distributed across society. They collaborate with companies like Microsoft and Amazon on research, and create and provide open source tools for AI development. They also test donated AI software from companies like GitHub, Nvidia, and Cloudflare, and publish papers summarizing their research in machine learning journals.
In addition to this, leading authorities in the field of artificial intelligence argue that the algorithms and development process of artificial intelligence software should be as transparent as possible to ensure the safe development of artificial intelligence. This means analyzing the code, training data, and output logs to ensure that only the safest AIs can be used. Of course, there is also a desire to monitor the development of AI from the outside in case humans try to exploit it. Weak AIs have a narrow scope of tasks, so this approach may be able to provide some level of safety, but once general-purpose AIs are developed, methodological innovations will be needed. Therefore, some AI safety engineers would like to extend the discussion to general-purpose AI.
One might think that in order to control general-purpose AI, we could impose the same moral code on it as we do on humans. However, humanity’s unique ethics are actually a product of our interactions with the world around us and a long historical context. The current consensus is that artificial general-purpose intelligence is likely to be inherently different from human mental structures. Furthermore, human moral ideas are not infallible. Different subgroups of humanity, such as nations or religious groups, have both similar and different ethics. Furthermore, human beings are inherently flawed, attacking each other based on prejudice or committing crimes. Crucially, any threat of human punishment and any lure of human reward would be meaningless to a superintelligence that is far beyond human capabilities. Therefore, the approach of implanting human norms into an AI to control it is contradictory from its very premise.
Since it is impossible to endow AI with morality, research to ensure the safety of general-purpose AI will have to take a similar approach to cybersecurity. In his paper, Dr. Yampolskiy proposes a technique called “AI-boxing”. Essentially, an AI-box is a structure designed at the hardware level that prevents an AI system from communicating with the outside world except in extremely limited ways specified by humans. The idea is that, in a controlled experiment-like setting, highly trained experts can thoroughly analyze the AI’s behavior and verify its safety with the same level of accuracy as a mathematical theorem.
The existence of artificial intelligence presents as much risk as it does potential. To ensure the safety of A.I., thorough verification and research are required at every stage of the program’s goal setting, algorithm design, training data selection, and behavioral pattern analysis. As a result, the emerging field of AI safety engineering has become an interdisciplinary field of study with diverse perspectives as more and more specialized fields converge. As research on AI safety expands, various testing techniques and methodologies are developed, and new perspectives are born, it is time for academic and social discussions on how to prevent AI from harming humanity.

 

About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.

About the blog owner

 

BloggerI’m a blog writer. I want to write articles that touch people’s hearts. I love Coca-Cola, coffee, reading and traveling. I hope you find happiness through my writing.