I woke up at 5am on Wednesday, November 14th, to participate from Copenhagen in the IGF Round Table on Artificial Intelligence for Human Rights and SDGs. Though it wasn’t the discussion I thought it was going to be, it was very enlightening.
The panelists highlighted many of the ways artificial intelligence (AI) could affect the future, both positively and negatively. The main predictor of whether the effects will benefit or harm society lies in the intentions of the programmers. As one of the panelists said, “Tech is neutral.” It is simply a tool with no agenda of its own; those who program and use the technology supply the intent, whether beneficent or harmful. With this in mind, AI could help solve many of the world’s problems, but it may also exacerbate existing issues or create new ones. Inequality could deepen if AI is used to replace manual labor forces, which are mostly concentrated in developing countries, or to otherwise affect populations that are already at a disadvantage.
While AI technology has been developing at an incredible rate since its recent explosion in 2010, technical progress has far outpaced ethical progress in the field. Many of the panelists expressed concern about this gap, mentioning ways it could undermine AI’s ability to positively influence society. The main issue here was trust. As AI has advanced technically, offline inequalities have followed it online: the inherent biases of programmers and users have bled into AI systems. When users encounter discrimination through AI, they lose trust in it and stop using it. This mistrust may hinder future attempts to use AI for good.
To combat these issues of trust and the potential for AI to be used in unethical ways, the panelists discussed many ways AI ethics is being pushed to the forefront. AI is not going to disappear; it is going to continue to expand and grow. Establishing ethical principles for AI now can help prevent the fear of AI that would only cause more problems in the future. To promote sustainable development and human rights, AI needs to be used responsibly and alongside data of high integrity. This cannot be done without government, business, and civil society coming together to establish the necessary ethical principles.