Is AI Good or Evil?

Ethical Dilemmas in AI

I frequently get questions from people without a background in computer science or machine learning about whether AI will take over the world, a sort of doomsday prophecy. I almost always answer this question the same way: no, our current methods in machine learning will not take over the world; moreover, the "AI" systems we build today are designed by humans, and ultimately humans have the power to control what these systems can and cannot do. Then I usually go on to explain that the current state of the art in machine learning is specialized intelligence for specific tasks, such as playing chess, driving a car, or classifying images. Although these systems are capable of remarkable things, they do not have consciousness, and they are nowhere near general enough to be compared to human intelligence or called true "AI".

I have no data to back this up, and it might be due to bias in my social circles, but my impression is that a majority of people outside the AI community associate AI with danger and other negative aspects. I even sense some stigma around working with AI-related technologies for these reasons. Are the negative views of AI justified? In this blog post I give my two cents on the topic.

Can AI technology be misused for malicious purposes? Yes, without a doubt. Any technology can be misused if not regulated correctly. In recent years, cars have repeatedly been used in terrorist attacks (e.g., the 2017 attack in Stockholm). Does that mean the car as an invention is to blame, and that car-manufacturing engineers 100 years ago should have anticipated these attacks? Are the AI researchers of today responsible for an AI takeover 2000 years from now?

First of all, it is not certain that our current methods in machine learning will ever reach a level of intelligence comparable to human intelligence (which would be required for an AI takeover); at the very least, we are very far from it. Yet the possibility of future AI is nonetheless something that bears thinking about. If, or when, a true AI system is built, it will have an enormous impact on our society, an impact that could be both positive and negative. We have to make sure that when such systems are built, they are built in a way that ensures the AI will act to the benefit of humans, ruling out the possibility of an "AI takeover". However, and this is just my personal opinion, that is not to say that individual researchers in ICT and machine learning are the ones responsible for scrutinizing the possibility of an AI takeover, especially not at this stage, when an AI takeover is so distant that it resembles science fiction to many. In my view, a machine learning researcher is not necessarily the person best suited to weigh the possibility of an AI takeover 2000 years from now against the positive impact of using machine learning to make the world a better place right now. That would be to treat the problem too casually and also to put too much weight on the shoulders of technologists.

I think it is extremely presumptuous for scientists to think they can make ethical choices for society. Technology can always be used for good or bad. It is the role of society at large to decide how to use technology. It is not the role of scientists to decide unilaterally. - Yann LeCun

There are many philosophical questions regarding the possibility of future AI systems, and the ethical dilemmas that come with them deserve to be studied in their own right by philosophers and researchers around the world (a good example is the work of Nick Bostrom, whom I admire). This research is certainly important and has its place, but it should not be pitted against research that aspires to use and improve the AI technologies we have today in responsible ways. Even if our current technology is far from human intelligence, it has the potential to benefit our society immensely. As an example, a recent study has shown that AI can act as an enabler on 76% of the Sustainable Development Goals defined by the United Nations (https://arxiv.org/pdf/1905.00501.pdf). Application areas where AI can be used for good include healthcare, security, education, and the economy. AI technologies can also help in the fight against climate change.

I am not denying that there are dangers associated with AI; we are already seeing signs of misuse of technologies such as facial recognition by certain companies and governments. Ultimately, it is up to every individual to make their own moral judgment about the dangers and possibilities of intelligent machines. Personally, I believe that the many positive aspects of autonomous and intelligent systems far outweigh the unrealistic, though in theory possible, conjecture that an AI will one day take over the world.

My thesis in this blog post is that one has to put things into context before labeling a technology "bad and dangerous". Being a machine learning researcher myself, I do feel a responsibility to spend my time working on things that are worthwhile and that can benefit society at large. Moreover, I welcome the debate among futurists about the implications of a potential human-level AI, and I am glad that there are people much smarter than I am who are doing research on precisely this topic and thinking about the regulations that might be necessary in the future. However, I do not feel guilty for working with current AI technologies because of a potential AI takeover in the far future. The positive sides of the safe use of intelligent machines are simply too many to not do research on AI. Finally, to end this text I will quote Andrew Ng's answer when asked about the rise of killer AI robots:

Worrying about the rise of evil killer robots is like worrying about overpopulation and pollution on Mars before we’ve even set foot on it.