The Uppsala Code and AI Ethics

Ethics for research are not static. Rather, ethics evolve over time as new scientific questions are asked and new technologies are developed [3]. An example of an ethical dilemma for today’s researchers that was not a problem a couple of decades ago is how to use artificial intelligence (AI) responsibly. Today, AI is emerging as one of the most researched areas in science, driven by recent breakthroughs in the field as well as the many applications of AI. For instance, a recent study by researchers from the US, Sweden, New Zealand, Germany, and Spain, published in Nature [10], showed that AI could act as an enabler for 76% of the Sustainable Development Goals defined by the United Nations. Examples of application areas where AI can be used for societal good are healthcare, security, education, and the economy.

Despite the many societal benefits of AI, there is no doubt that AI technologies, like any other technology, can also be used in unethical ways. Consequently, “AI ethics” or “ethics in AI research” is a hot discussion topic [4]. As an example, Chalmers University of Technology in Sweden has a dedicated AI ethics committee to manage the ethical issues specifically surrounding AI research [9]. Moreover, as recently as last week, the European Commission published draft rules on how to use AI responsibly [2]. In this essay, I will discuss some of the ethical questions regarding AI research and examine an ethical dilemma that is relevant for my own research. In particular, I analyze the following question:

What research directions within the field of AI are ethical to pursue?

To answer this question, I analyze it from the viewpoint of the Uppsala code of ethics [3].

Discussion

In general, there are two types of risks related to AI applications. Firstly, there is the philosophical concern that future AI systems could become too powerful for humans to manage [1]. Secondly, there is the concern regarding the ethical use and development of current AI technologies (e.g. the misuse of facial recognition technology). In this essay I will focus on the latter, as it is more closely related to my own research.

Considering the many benefits that AI technologies can bring to society (e.g. to the healthcare sector), the key question to consider when conducting research on AI technologies is: in which research directions do the benefits outweigh the risks of harmful use of AI? Are there research directions within AI that simply should not be explored because the risks are too high? As an AI researcher, I have had to consider this ethical dilemma myself: is my research direction going to lead to societal benefits, harmful applications, or both?

A guideline that can be used when conducting AI research is the Uppsala code, developed by researchers at Uppsala University during the 1980s [3]. The Uppsala code was not developed specifically for AI but is a general code for weighing the pros and cons of potential research outcomes. As such, it can serve as a guideline for balancing the potentially good and the potentially harmful outcomes of AI research. The code is essentially an appeal to researchers to avoid research that can lead to ecological harm or the development of weapons, or that is in conflict with basic human rights.

The Uppsala code is intended for the individual scientist, who has to make his or her own assessment of the consequences of his or her research [5]; hence, the use of the code is subjective. My own view, which also reflects the consensus in the research community, is that the societal benefits of AI are too many to forgo research in this area; however, we still need to be wary of the risks. If we apply the Uppsala code to AI research, we can consider all AI research that does not lead to significant ecological damage or weapons to be ethical. This definition of ethical research encompasses most of the AI research conducted today. However, the Uppsala code also contains the following point:

Nor shall research be so directed that its consequences conflict with basic human rights as expressed in international agreements on civic, political, economic, social and cultural rights.

In light of the above point, is research on AI applications that can be used for mass surveillance (e.g. facial recognition) ethical? There are certainly potential conflicts with basic human rights if such applications are misused. However, it is also clear that facial recognition technology can be used for many good purposes, e.g. reducing crime and assisting blind people. How can we deal with this double-edged sword of AI technologies? Regulation could be the answer.

Another point in the Uppsala code that is relevant for AI ethics is that the scientist has a special responsibility to carefully assess the consequences of his or her research and to make them public [5]. This principle is not acknowledged enough by the AI research community today. For example, it is rare to find discussions of the potentially harmful consequences of the research results in AI research papers. Maybe journals should require authors to include a section that considers these dilemmas, or at least encourage it?

To answer the question that I posed at the beginning of this essay, I believe that, while general ethical guidelines for research, like the Uppsala code, can be used, we are in strong need of new and stronger research guidelines designed specifically for AI. Regarding the question of which research directions within the field of AI are ethical to pursue, I personally believe that there is no single answer. Rather, I believe that most AI research can be used for both good and bad purposes, and thus the research can be conducted in an ethical way as long as the right regulations and laws are in place. Hence, my conclusion is that, while there are certain gray areas where the individual researcher has to make a decision about ethical and unethical use of AI, and in such situations the Uppsala code can help, there are currently many applications of AI that could (and in my opinion should) be regulated by law. To achieve this, I believe that a key path forward in establishing guidelines for ethical AI research is cooperation between lawmakers and AI researchers.

As a final note, I also believe that an important factor in ethical AI research is transparency. As long as the research community discusses the issues of AI research and is transparent with its research, I am confident that we will be able to use this new and exciting technology to improve our society in many ways. A potential threat to this would be if companies are not transparent about how they apply AI, or if they conduct private AI research and are not open about the results that they obtain.

Conclusion

In this essay I have used the Uppsala code, a code for weighing the beneficial and harmful consequences of research against each other, to analyze AI research from an ethical perspective. My view is that the potential positive outcomes of AI research greatly outweigh the possible negative outcomes. To deal with the risks of AI, I first and foremost believe in regulation and cooperation with lawmakers, and only in the second instance should we rely on individual assessments and ethical policies (such as the Uppsala code). I think it would place too much weight on the shoulders of scientists and technologists to let AI usage be managed solely by ethical policies.

References