COGNITIVE WORLD


Will There Be A ‘Kill Switch’ For AI?

Source: COGNITIVE WORLD on FORBES

AI systems are constantly evolving. Machine learning models learn from data and experience, and once they are released into the real world, they need to be monitored, tested, and retrained on an ongoing basis. They also need to be created with ethical and responsible frameworks in place. Just because an AI system is created with good intentions doesn’t mean its real-world application will go as planned. One poignant example is the embarrassing failure of Microsoft’s chatbot Tay, launched in 2016. Within a day of its release, people had taught the Twitter-based bot to be racist and misogynistic. Not only did this bad public behavior hurt Microsoft’s brand, but it also had a big social impact on how people view AI. This was clearly a case of unintended consequences.

A focus of organizations when developing or implementing AI needs to be on how to create an intelligence that is capable of doing what it was intended to do. In the Microsoft example, no one was injured or killed, and the bot could be shut down quickly with minimal consequences. But as we come to rely more and more on artificial intelligence, the potential for serious harm grows.

Dr. Mark van Rijmenam. Image: LinkedIn

On a recent AI Today podcast episode, Mark van Rijmenam, founder of Datafloq, faculty member of the Blockchain Research Institute, and public speaker, shared insights into how AI systems are developing in the context of responsibility and ethics. Mark is pursuing a Ph.D. at the University of Technology Sydney, where he researches how technologies such as AI affect organizations and businesses. His professional focus is on blockchain, big data, and AI. However, Mark’s biggest interest is in how technology affects society as well as organizations, and in particular in responsible artificial intelligence.

According to Mark, explainability is one way we can build AI systems that are accountable for their results and actions. The premise is that a responsible AI should record everything it does and the reasons for its decisions, so that when something unexpected happens, humans can go back and see why the system acted as it did. Mark uses the example of a self-driving car driving into a wall. After the crash, investigators should be able to ask the AI in the car why it drove into the wall instead of taking another action, and the AI should be able to respond in understandable terms that it did so because of one particular reason or because something specific happened.
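To make that premise concrete, here is a minimal sketch of the kind of decision audit log Mark describes: the system records each action it takes along with the inputs it saw and a human-readable reason, so investigators can ask it "why?" after the fact. The class names, fields, and the self-driving-car values are illustrative assumptions, not anything described on the podcast.

```python
# Illustrative sketch only: a minimal decision audit log, where an AI system
# records each action and the reason behind it so humans can reconstruct
# "why" after an incident. All names and values here are hypothetical.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class DecisionRecord:
    timestamp: float   # when the decision was made
    inputs: dict       # the sensor readings / features the system saw
    action: str        # what the system chose to do
    reason: str        # human-readable explanation for the choice
    confidence: float  # model confidence at decision time


@dataclass
class DecisionLogger:
    records: list = field(default_factory=list)

    def log(self, inputs: dict, action: str, reason: str, confidence: float) -> None:
        """Append a record of a single decision."""
        self.records.append(
            DecisionRecord(time.time(), inputs, action, reason, confidence)
        )

    def explain_last(self) -> str:
        """Answer the investigator's question: why did you do that?"""
        if not self.records:
            return "No decisions recorded."
        last = self.records[-1]
        return (f"Chose '{last.action}' because {last.reason} "
                f"(confidence {last.confidence:.2f}).")

    def export(self, path: str) -> None:
        """Persist the full audit trail for post-incident review."""
        with open(path, "w") as f:
            json.dump([asdict(r) for r in self.records], f, indent=2)


# Example: a self-driving car logs the decision that preceded a crash.
log = DecisionLogger()
log.log(
    inputs={"obstacle_ahead": True, "lane_left_clear": False},
    action="emergency_brake",
    reason="an obstacle was detected ahead and the adjacent lane was occupied",
    confidence=0.87,
)
print(log.explain_last())
```

The point of the sketch is simply that explainability requires the reason to be captured at decision time; a log written after the fact cannot answer the investigator's question.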

In today’s world, it is very important to know how and why decisions are being made by computers. We have cars, planes, and drones that are partially or fully controlled by AI. Yet despite the need for explainable AI, we are not at a point technologically where we can do this, especially for algorithms that lack transparency, such as deep learning. Right now, after an incident involving an AI, a human has to go through the code and try to work out what happened, which is challenging.

Ethical guidelines for AI

Another way to combat AI systems that fail unexpectedly, and to learn from past AI experiments, is to instill ethics into our artificial intelligence. The problem with teaching ethics to AI is that we first have to reach a consensus on what is “good” and what is “bad”. For many issues, such as theft or murder, we as a society can agree on what is good and what is bad, but not everyone agrees when it comes to more nuanced, philosophical questions. Furthermore, many aspects of morality and ethics are cultural, so things get more complicated as AI systems interact with different regions of the world and with people of different cultural or religious perspectives.

An important thing to note when we look at responsible and explainable AI is that we are entering a world where AI doesn’t have to be taught by humans how to do things; AI can now learn various tasks on its own. Google’s DeepMind developed AlphaGo, a system that learned to play the game Go through self-play and got so good it was able to beat Lee Sedol, one of the best human Go players in the world.

Can a bad AI system be controlled?

Many who worry about superintelligent systems or malicious use of AI consider that there should be control measures to shut off AI systems that rapidly spiral out of control, such as the Tay bot. Control measures come down to governing rules built into the AI, or even a kill switch that can quickly shut the system down. In one Facebook AI bot experiment, the company ended up terminating the bots because they started communicating in an unintelligible language. The new language was more efficient for communication between the bots, but humans were left unable to understand them. This was again a case of unintended consequences.
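As a rough illustration of what such control measures can look like in code, the sketch below wraps an agent in a guard that checks every proposed action against governing rules and halts the agent the moment a rule is violated. This is a hypothetical pattern, not the article's proposal or any vendor's actual system; all names and the banned-word rule are assumptions for the example.

```python
# Hypothetical "kill switch" sketch: governing rules built into the system,
# plus a hard stop when those rules are violated. Illustrative only.
class KillSwitchEngaged(Exception):
    """Raised when the agent is forcibly stopped."""


class GuardedAgent:
    def __init__(self, agent, rules):
        self.agent = agent    # the underlying AI system (any callable)
        self.rules = rules    # predicates over actions: True means allowed
        self.halted = False

    def act(self, observation):
        if self.halted:
            raise KillSwitchEngaged("Agent has already been shut down.")
        action = self.agent(observation)
        # Governing rules built into the AI: reject disallowed behavior.
        for rule in self.rules:
            if not rule(action):
                self.shutdown(reason=f"rule violated by action {action!r}")
        return action

    def shutdown(self, reason: str):
        """The kill switch: permanently halt the agent."""
        self.halted = True
        raise KillSwitchEngaged(f"Kill switch engaged: {reason}")


# Example: stop a chatbot-like agent the moment it emits flagged content.
banned = {"slur", "insult"}
rules = [lambda text: not any(word in text for word in banned)]
agent = GuardedAgent(lambda observation: "hello there", rules)
print(agent.act("greet the user"))  # allowed, returns the agent's reply
```

The catch, as the Tay and Facebook examples show, is that such rules only catch failures someone anticipated well enough to write down.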

In Mark’s opinion, very few companies review their code or make significant efforts to be responsible when it comes to AI systems. One company that is doing a good job, however, is Google’s DeepMind, which Mark considers one of the world’s leading AI organizations. DeepMind has started a research branch, a team of around 25 people whose primary focus is AI ethics.

Artificial intelligence responsibility is a big issue as we increasingly come to depend on these systems in our daily lives. More and more of our activities depend on machine learning systems that offer a troubling lack of accountability. These systems sometimes operate with little human control or oversight, and in those situations we should be even more concerned. As companies see AI, blockchain, big data, and other technologies as competitive advantages, they will increasingly make use of these transformative technologies with less regard for security and responsibility. For these reasons, it’s important that organizations consider the ethical and responsible use of AI to mitigate the risk of unintended consequences. Once these systems are deployed, it might not be as easy or feasible to simply pull the “kill switch”.


Kathleen Walch is Managing Partner & Principal Analyst at the AI-focused research and advisory firm Cognilytica, a leading analyst firm focused on the application and use of artificial intelligence (AI) in both the public and private sectors. She is also co-host of the popular AI Today podcast, a top AI-related podcast that highlights various AI use cases in both the public and private sectors and interviews guest experts on AI-related topics.

Follow Kathleen on Twitter. Check out her website.