Entering the world of ethics and AI for the first time can be a daunting experience. The subject matter truly is a world of its own: a complex mosaic of many actors and tribes, each with its own language. It is an ecosystem that depends on, but also changes rapidly with, the fast-moving pace of technological innovation. It is not the first time in history that technological change has been difficult to understand and keep track of, yet artificial intelligence, and especially machine learning, brings unprecedented complexity to how future technology can work and what its consequences could be.
The absence of universal ethical principles or standards means there is no common language for addressing the ethical dimensions of AI; there are no common denominators across the various debates taking place. Each side in these often parallel discussions uses different interpretations, concepts and arguments to support a broad variety of interests or ideological viewpoints. Even within particular sectors or organizations, there is no fully shared understanding of what ethics is or means, even when they manage to agree on some basic guidelines or principles.
Yet the biggest divide is between different sectors. For example, military-tactical arguments will never fully coincide with political or commercial ones when determining what is ethical to do with AI and, let's say, its incorporation into weaponized Unmanned Aerial Vehicles (UAVs) or systems. The objectives and vested interests are often simply too far apart, especially, of course, when commercial interests are involved, but even in cases where military interests are supposed to align with those of their political masters.
There has always been at least some discrepancy between what bureaucrats or politicians want and what military commanders in the field think is needed. But AI is a game changer in this regard, as it opens a Pandora's box of complexity around moral and legal responsibility and accountability. The more autonomous weapons and systems become, the more limited human agency and control will be. That is particularly challenging in situations where lives are at risk. Former Defense Secretary Ash Carter has called this the application of AI in "grave sensitive matters", which in his view require that a human being always be involved at some point in the decision-making process.