Artificial Intelligence, Ethics, Compliance and Policing

Author: Gareth Martin, Data Science & Artificial Intelligence Director at Altius

Note: This post originally appeared on LinkedIn.

This is one of the really hot topics in the AI community. What was science fiction when Isaac Asimov came up with his three laws of robotics in 1942 is now very much a reality: the governance of AI is both relevant and important today. As we develop more intelligent solutions capable of making decisions that impact real people, the ethical and legal implications of those decisions must be considered. How do you audit a convolutional neural network? The more complex these systems get, the more difficult it will be to manage how they make decisions and the ethical implications of the process by which they do so. Will we use specific “policing AIs” to regulate other AIs in live systems? Will we develop a legislative framework written in a manner that AIs can refer to, so that they can police themselves and others? Maybe not yet.

Significant consideration has been given to this topic in both the public and private spheres, with efforts in the European Parliament to coordinate a consistent and considered approach through the Legal Affairs Committee and its recommendation to the Commission on Civil Law Rules on Robotics (Europarl 2015/2103(INL)). With the European Union’s General Data Protection Regulation (GDPR) looming, it is clear that the building blocks are arriving on site, but progress is still painfully slow in an industry that is changing so rapidly. There is consensus on the importance of the topic, but broad-ranging formal regulation will be slow to arrive, so my assumption is that industry will need to continue to lead from the front and allow the regulatory bodies to catch up.

I was at Big Data World last week (March 2018) and had the pleasure of attending two keynotes on this topic hosted by a gentleman by the name of Berndt (Bertie) Müller, who is the chair of an organisation called… get ready for it… The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB). Bertie had a summary slide that really stuck in my mind. Essentially, it took the perspective of a developer or organisation considering the use of AI for a task, and asked the following questions in the process of deciding whether a solution should be built:

  • Is it possible for AI to do this task?
    • If it’s possible, is it legal?
      • If it’s both possible and legal, is it ethical?
  • Is it cost effective to use AI to do this task?
    • Is it responsible based on the accuracy of data and the methods available within the cost limits?
      • If it’s both cost effective and responsible, is it accepted by those it will impact and is any risk exposure accepted by the legally responsible parties?

Based on an extract from Berndt Müller’s presentation on Ethical AI at Big Data World, London, 2018

If you get to the third point in each of these two streams and can answer yes to both, then you can give yourself a green light to move forward. Of course, the questions should be asked continuously as the AI evolves, both in development and in deployment.
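The two question streams can be sketched as a simple pair of boolean gates. This is just an illustrative sketch: the function and parameter names are my own, and every input is a judgment a human reviewer must supply, not something an algorithm can decide for itself.

```python
# Hypothetical sketch of the two-stream "green light" check.
# Every boolean input represents a human judgment call.

def feasibility_stream(possible: bool, legal: bool, ethical: bool) -> bool:
    """Stream 1: the solution must be possible, legal AND ethical."""
    return possible and legal and ethical

def viability_stream(cost_effective: bool, responsible: bool, accepted: bool) -> bool:
    """Stream 2: cost effective, responsible, and accepted by those impacted."""
    return cost_effective and responsible and accepted

def green_light(stream1: bool, stream2: bool) -> bool:
    """Proceed only when both streams answer yes at every step."""
    return stream1 and stream2

# Example: possible and legal, but not ethical -> no green light.
ok = green_light(
    feasibility_stream(possible=True, legal=True, ethical=False),
    viability_stream(cost_effective=True, responsible=True, accepted=True),
)
print(ok)  # False
```

The point of writing it this way is that a single "no" anywhere in either stream blocks the project, and the check is cheap enough to re-run at every stage of development and deployment.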

The risk with ethics is that it is all a matter of perspective, and there are a lot of grey areas. For now, governance through ethics, in conjunction with some limited specific laws and the application of existing non-specific laws to the first AI cases to develop a precedent, will have to suffice. In Europe, the application of liability principles in the Product Liability Directive (Directive 85/374/EEC) and the Machinery Directive (Directive 2006/42/EC) to AI scenarios will be the first step. These directives are already being assessed to determine their fitness for purpose when applied to AI scenarios.

The Institute of Electrical and Electronics Engineers (IEEE) is putting significant focus and effort into the development of its Model Process for Addressing Ethical Concerns During System Design (IEEE P7000) and is actively working on a series of related standards covering transparency, data privacy, bias and governance. This work, and related work by other institutes, is giving us the basis for a legal framework, but we still have a while to wait. However, for these standards to be widely usable they need to be publicly available, which many of them, including those from the IEEE, are not (yet).

For now, we will need to be smart and considerate in our development of intelligent systems and be supportive of the development of more robust and broad-ranging regulatory frameworks.

Enjoy AI development responsibly!