Tuomas Pöysti: Towards Human-Centric and Fair AI with the Rule of Law, 27.2.2019
Remarks by the Chancellor of Justice of the Government of Finland, Dr. Tuomas Pöysti, on the panel on AI and the Rule of Law on 27 February 2019 at the Council of Europe High-Level Conference on AI, Governing the Game Changer – Impacts of Artificial Intelligence Development on Human Rights, Democracy and the Rule of Law, Helsinki, 26–27 February 2019.
My main messages in the panel are the following:
(I) AI has many possible and promising applications in the justice field and can be used in all areas of justice. It is good at processing large volumes of data and recognizing patterns; it can help sharpen legal arguments and assist humans in focusing on the tasks where humans excel, that is, broad judgments of social and societal contexts. AI can also serve as a drafting aid, and predictive analytics has positive potential, except for the final determination of individual risks on an ex ante basis, which is strictly impermissible in a democratic society. Combining predictive analytics on individuals with a lazy decision-maker or judge is likewise a dangerous combination. AI is a valuable element of access-to-justice digital platforms, and AI-assisted digital procedures may improve both access to justice and the efficiency of justice.
(II) Securing fundamental rights by design and by default, and the rule of law, requires binding international instruments and member-state legislation: ethical codes are a good beginning but are not enough to protect the rule of law and fundamental rights.
(III) Ensuring full transparency of AI functions and human explicability is one of the core regulatory needs and principles to be secured: the methods used in a system, and the reasoning and operations it performs, shall be explicable in natural language and subject to review; its decisions shall be attributable to human beings for accountability purposes and remain under human control; and auditability shall be the default position, with reasonable limitations and exceptions for IPR (if needed, via a trusted third-party auditor). The official responsibility and liability of the responsible officials, and the rights to good administration and the proper administration of justice, including the right to be heard and to receive an intelligible, reasoned decision, shall always be guaranteed. Transparency is also a challenge for the courts and for legality-oversight authorities such as ombudsman institutions, whose access to review and necessary collaboration need to be established.
(IV) Humans are always accountable and responsible. Accountability rules and the attribution of responsibility must be made clearer for the various kinds of AI applications with different levels of autonomy. Liability rules should be stricter and should better support secure software development and the development of systems with human rights by design and by default.
(V) Internationally accepted principles on entrusting decision-making to automatic systems, going beyond the criteria established in data protection legislation and conventions, would be welcome.
In the following, more detailed arguments are used to deepen these main messages:
1. In the rule of law, the law is founded on fundamental rights and freedoms, non-discrimination, the prohibition of arbitrary use of power, legal certainty, access to justice and remedies, and the principle of legality and of legislation adopted through a democratic, open, transparent and participative procedure with accountability (Council of Europe, Venice Commission, Rule of Law Checklist, CDL-AD(2016)007; see also the Parliamentary Annual Report of the Chancellor of Justice of the Government of Finland, Parliamentary Reports 4/2018, and the English translation of the Comment by the Chancellor of Justice in that Report). These principles, together with the principle of giving full effect to human dignity and the unique value of the human being, shall apply fully to the development and use of AI. Ensuring such human-centric AI, and securing rule-of-law-based humanism and a fair place for humanity and humans, remains a considerable challenge requiring action far beyond the current debate on AI ethics. The Council of Europe European Commission for the Efficiency of Justice (CEPEJ) European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems (CEPEJ(2018)14) is an excellent ethical code and policy paper. From a wider, general-purpose perspective, the European Commission High-Level Expert Group on Artificial Intelligence's draft Ethics Guidelines for Trustworthy AI (European Commission, DG Communication, 18.12.2018) would complete the European ethical framework from another angle.
2. Ethics alone is ambiguous and cannot sufficiently secure the core principles of human-rights-centric AI and the maintenance of the material rule of law. I see a strong case for additional international conventions, EU legislative action, and legislation at the national level in each Contracting State. It is essential to establish a set of principles, rules and incentives for human rights by design and by default in AI. It is also important to ensure that law and justice remain common public goods and open, transparent instruments of freedom and of respect for human dignity and unique value. The task, more precisely, is to transfer and write the fundamental principles of fundamental rights into the infrastructure and digital architectures, and into the code and algorithms of AI-powered systems. Justice and the law are public languages and public governance tools that recognize the individual as an actor with full legal personality and capacity, whereas code and algorithms may be private and non-transparent elements. There is a real risk of an AI black box with a privately known law and a much stronger emphasis on private autonomy; this may have some good points, but it would also be less democratic. The privatisation of justice through automatic alternative dispute resolution has several systemic risks and weaknesses, albeit also some promise.
3. The use of AI offers several potentially very beneficial applications for justice systems in all fields of justice (criminal, civil, administrative and application matters). An AI-powered machine can perform many tasks better than humans: in particular, processing large volumes of data, including textual data; performing mundane and repetitive tasks reliably; and carrying out statistical and probability operations, and hence finding correlations, making probability predictions and discovering hidden patterns. Machines can produce and utilize open data with significant potential for quality and efficiency improvements in justice. In short, AI-powered machines can support a more coherent, well-researched and well-argued application and development of the law, and make it available and subject to professional and democratic control. A sufficient understanding of what AI can and cannot do requires that legal and justice policy-makers understand in sufficient depth the kinds of evidence, data and reasoning on which the various AI solutions are based. None of them so far really reproduces legal reasoning, but they may, if properly designed and trained with suitable feed-in and training data, improve legal information management and use and increase coherence. AI can enable and empower humans in the justice system to focus on what is authentically human. AI systems may well learn a limited form of emotional intelligence, in the sense of recognizing patterns for emotionally sound handling, but understanding multiple social and societal contexts will remain a human endeavor for the foreseeable future.
4. An AI-supported machine can perform well in law-drafting and contract drafting, and in multiple other tasks in the application of law: reading, processing and analyzing vast volumes of data, inferring patterns, disclosing contradictions and categorizing concepts and expressions from the textual mass would improve the quality of legal drafting. The potential for better application of the law starts with simple case management: even today, in the oversight of both courts and advocates, I have encountered problems of wrong deadlines and wrongly copy-pasted documents, so proper document and case management systems can help greatly in all legal practice. To quote Daniel Kahneman, the well-known cognitive psychologist: humans are noisy and humans are lazy. Legal research, predictive analyses of the prospects of cases and of the scale of sentencing and awards of damages, and the discovery of patterns in the case law will help judges and attorneys alike, provided the final touch genuinely remains a human task, the AI thereby compelling humans to argue their decisions better. I do not see a problem in predictive analytics (a very ambiguous term) as long as it does not amount to predictive risk decisions in which individual assessment would be replaced by statistical probability calculus. AI can be used in civil, criminal and administrative procedures alike, provided its limits are known and the systems are properly designed, implemented and trained. But it cannot replace a trained judge.
5. The law is also costly and slow. Unfortunately, access to civil procedure is increasingly out of reach for ordinary citizens. Digital legal advice and assistance, digital alternative dispute resolution, and digital or digitally assisted court proceedings would therefore be worth considering, though of course with due discretion.
6. Software and algorithms are often private, protected by intellectual property and trade and business secrets, whereas in the rule of law the law is public. In addition, the structure of probabilistic analyses is difficult to see and understand; this concerns in particular systems that use machine learning and are able to improve themselves. Legal and legally relevant decision-making may thus disappear into a black box of AI-powered analyses that no person is able to review and understand. Given the significant concentration of market power in the tech field, strict hard-law regulation will be needed.
7. Following constitutional practice from various countries, such as France (Constitutional Council Decision CC 2018-725, 12.7.2018) and Finland (Constitutional Committee Opinion PeVL 62/2018 vp and the Chancellor of Justice Opinion OKV 57/20/2018), there should be a strong legal requirement of transparency. There shall be the widest possible transparency and explicability of the methods used in the system, with reasonable limitations and exceptions for IPR (if needed, via a trusted third-party auditor). There shall be full transparency and explicability of the reasoning and operations performed by the AI system. Its decisions shall be subject to review in natural language, attributable to human beings for accountability purposes, and under human control. Auditability of the functions should be the default position. There shall be a strong requirement of explicability of the AI system's reasoning, meaning that the reasoning shall always be explicable in natural language and subject to legal review by courts and humans. The official responsibility and liability of the responsible officials, and the realization of the rights to good administration and the proper administration of justice, including the right to be heard and to receive an intelligible, reasoned decision, shall always be guaranteed. Transparency is also a challenge for the courts and for legality-oversight authorities such as ombudsman institutions, whose access to review and necessary collaboration need to be established.
8. There should be legal requirements of human rights by design and by default, proper system testing and maintenance, access to data of sufficient quality, and the realization of citizens' rights of access to data and analytical tools and to good administration and justice. Ensuring rights by design and by default shall be done not only from the perspective of the protection of personal data but from that of the overall balance of fundamental rights and freedoms. The conditions for good administration, access to justice, the impartiality of justice, and the prohibition and prevention of discrimination require more attention, with sufficient AI specifics.
9. There should be explicit rules on the attribution of responsibility and on establishing liability and accountability for the various parties involved in the design, realization and operation of AI systems, including the quality and use of data (attribution of liability and accountability rules). All such rules should ensure ultimate user controllability and human control. AI is a common name for many different kinds of systems with varying levels of autonomy, and the concept of AI itself is relatively open-ended.
The rules of attribution of responsibility could differ for different levels of AI. The tentatively sound reasoning of the European Commission High-Level Expert Group's draft Ethics Guidelines for Trustworthy AI (see the draft Guidelines, page 15 and footnote 24) provides a good point of departure for differentiated attributions of responsibility:
(I) a decision or analysis model written into the software, in which case the software developer and the controller of the use of the system shall bear responsibility;
(II) machine learning within the frames of a model – the software developer, as in case (I);
(III) machine learning within a model but with significant feed-in information – the software developer, the system controller and the parties responsible for the data;
(IV) machine learning with some deontic models and feed-in data – the controller of the system, but with a strong and strict liability rule for the adaptation of the model within the machine and for the availability of explicability of the machine's functioning; here, public and private sector use may need differentiated attribution and liability regimes; and finally,
(V) a machine operating a learning model with rules and rule-based judgment – strict liability for the consequences of such reasoning for the system developers, and strict liability for the use and explicability for the system controller.
The situations of liability and responsibility are quite often cumulative, so clearer rules on shared responsibilities are needed. Errors may also lie in the past: decision-making based on historically cumulative big data may prove to be biased or otherwise erroneous because of mistakes or weaknesses in the distant past, which should also be addressed as a problem.
The public administration and justice system will need separate attribution rules on accountability and on the role of third parties and service providers in liability and official responsibility. In addition, analogous principles on human control and the attribution of official responsibility are needed for public sector systems, concerning who has control of and responsibility for official action taken by partly autonomous systems. The fundamental idea that humans must be accountable is a sound point of departure, but what we need are precise rules attributing liability and accountability to the different stakeholders and to different levels of contribution to risk (intentional wrongdoing, gross negligence, negligence, division of risks).
10. A general problem in the software industry has been the lack of sufficient incentives for proper security and for design for quality and rights. A much more stringent product liability regime is therefore required, and the use of more advanced systems should be tied to tested, built-in human-rights-by-design features. For the rules on transparency, the attribution of responsibility and the establishment of liability, international conventions by the Council of Europe, with their rules adapted into Union law by the European Union, might be a good option. There could be one instrument generally for the human-centric design and use of AI systems, with rules on liability and accountability at its heart, and another instrument for the use of AI in public administration and justice systems.