ELI Webinar Series on the Conference on the Future of Europe: Artificial Intelligence (AI) and Public Administration
Public administration, as an emanation of the State's public functions, entails the processing of far more data than is typical of private entities. New technologies, such as artificial intelligence, can therefore play a significant role in the modernisation and overall improvement of the functioning of public administration. On the other hand, guaranteeing the transparency, correctness and security of the data processed is also fundamental. The possibilities for implementing AI in the operation of public administration are therefore limited by the principle of legality, the need to ensure a high degree of reliability of the technologies used, and the need to ensure respect for citizens' rights.
Public administration is, as a result, confronted with specific challenges in the deployment of AI and, more generally, algorithms. The use of these techniques poses specific problems related to the particular requirements associated with the principle of good administration. In addition, issues such as transparency, accountability, compliance and non-discrimination are particularly relevant in the context of public administration.
This event aims to discuss how to safeguard citizens' confidence in the use of the technology without hindering innovation. The topic will be introduced by the Reporters of the European Law Institute (ELI) project on Artificial Intelligence (AI) and Public Administration: Marc Clément, Paul Craig and Jens-Peter Schneider.
The webinar will provide ample opportunity for those present to contribute to the debate. After a brief introduction of the webinar's topic, registrants will be able to share their views by participating in polls, as well as by responding to and raising other key questions relevant to the webinar's theme. A report, outlining the discussions and featuring any collectively agreed proposals, will be drafted after the event and submitted to the European Parliament, the Council and the European Commission via the Conference on the Future of Europe platform for further reflection.
Event report
ELI Webinar Series on the Conference on the Future of Europe: Artificial Intelligence (AI) and Public Administration
25 November 2021
Event Report

1. Background

Founded in June 2011 as an entirely independent non-profit organisation, the European Law Institute (ELI) aims to improve the quality of European law, understood in the broadest sense, by initiating, conducting and facilitating research, making recommendations, and providing practical guidance in the field of European legal development. ELI is committed to the principles of comprehensiveness and collaborative working, thus striving to bridge the oft-perceived gap between different legal cultures, between public and private law, as well as between scholarship and practice. ELI undertook to contribute to the Conference on the Future of Europe by holding three lectures dedicated to the three pillars of its project portfolio:

Rule of Law in the 21st Century
• Business and Human Rights: Access to Justice and Effective Remedies (with input from the EU Agency for Fundamental Rights, FRA), 30 November 2021

Law and Governance for the Digital Age
• AI and Public Administration – Developing Impact Assessments and Public Participation for Digital Democracy, 25 November 2021

Sustainable Life and Society
• Climate Justice: New Challenges for the Law and Judges, 11 November 2021

Below is a brief report on the AI webinar.

2. Context, purpose, subject and structure/methodology of the event

Public administration, as an emanation of the State's public functions, entails the processing of far more data than is typical of private entities. New technologies, such as AI, can therefore play a significant role in the modernisation and overall improvement of the functioning of public administration. On the other hand, guaranteeing the transparency, correctness and security of any data processed is also fundamental. The use of AI in public administration should therefore be limited by the principle of legality, the need to ensure a high degree of reliability of any technologies used, and the need to ensure respect for citizens' rights.

Public administration is, as a result, confronted with specific challenges in the deployment of AI and, more generally, algorithms. The use of these techniques poses specific problems relating to the particular requirements associated with the principle of good administration. In addition, issues such as transparency, accountability, compliance and non-discrimination are particularly relevant in the context of public administration. In order to address how to safeguard citizens' confidence in the use of the technology without hindering innovation, ELI's Council gave a mandate in 2020 for a project to be conducted on AI and Public Administration – Developing Impact Assessments and Public Participation for Digital Democracy.

3. Number and type (general or specific public with details if possible) of participants present

90 participants from 43 different countries took part in the event, which was open to the public and advertised on the ELI website and social media, as well as on the Conference on the Future of Europe portal. The majority of participants (54%) worked in the legal field, followed by participants working in education and training (20%) and students (11%). Several individuals represented other occupations.

4. If available, demographic information about participants (eg age, gender, etc)

57% of participants were female, 38% male and 3% identified themselves as 'other'.
22 participants (24%) indicated their age as falling within the 40–49 category and a further 22 participants (24%) within the 50–59 category; 20 participants (22%) fell within the 20–29 range, 17 participants (19%) were between 30–39 years old, 7 participants (8%) were 60 or over and 2 participants (2%) were under 20 years old.

5. Main topics discussed during the workshops

The participants were welcomed by Teresa Rodríguez de las Heras Ballell, a member of the ELI Executive Committee, who briefly introduced ELI, the Reporters of the ELI project on AI and Public Administration – Judge Marc Clément, Prof Paul Craig and Prof Dr Jens-Peter Schneider – and the topic of the webinar.

Paul Craig put the nature of the issue the project is dealing with into stark perspective, pointing to the fact that administration has existed since time immemorial, and administrative law developed alongside it. The kinds of people and bodies that exercised administrative authority varied over time and from country to country, but the foundational and simple commonality was that a human being (eg the head of an agency, a Minister) made the decision, and it was traceable to that individual or organisation. With the advance of technology, decisions are in some cases no longer made by an identifiable individual. They are made wholly or partially by an algorithm or some sort of automated decision-making system, which poses significant problems. While an individual will be involved in designing the initial algorithm/automated decision-making system, the role of the individual may be limited, as some automated decision-making systems can learn by doing. This raises a range of issues about accountability and responsibility for the decisions made. As a result, several institutions and bodies (including the EU, the Council of Europe and academic bodies) are thinking about the best way to deal with this. ELI is close to finalising Model Rules, which have at their heart the idea of an impact assessment. As the heterogeneity of different types of automated decision making precludes a 'one-size-fits-all' approach, the Rules distinguish three kinds of systems: (a) systems whose use is always regarded as high risk and is subjected to an impact assessment (Annex 1); (b) trivial measures which do not need an impact assessment (Annex 2); and (c) an in-between category for which an impact assessment procedure may be needed (Annex 3). For high-risk instances, there are extra procedures (eg expert scrutiny bodies and public participation) which go beyond what happens in normal circumstances.

Judge Clément gave examples of real-life automated systems, eg the use of algorithms/automated decision-making systems in the Netherlands to detect social fraud, triggering automatic debt recovery and resulting in 26,000 families being affected. The damage and disillusion caused in the UK by the automatic downgrading of A Level results was a further example. This showed the need for the problems underlying such systems to be addressed and underlined the usefulness of ELI's Model Rules in this respect.

The floor was then given to the public, which was asked, via polls on open and closed questions, to provide its views on several questions including the following:

1. Have you had experience with Artificial Intelligence Technologies (AIT) or other Algorithmic Decision-Making Systems (ADMS) in general?
2. In particular, do you have experience with AIT or other ADMS used by public authorities?
3. What types of risks, in your view, are the most important ones caused by AIT/ADMS when used by public authorities?
4. What, in your view, are the most important potential benefits of AIT/ADMS when used by public authorities?
5. What, in your view, are the most important measures that could minimise risks associated with AIT/ADMS when used by public authorities?
6. What are the most important challenges for a trustworthy/reliable use of AIT/ADMS by public authorities?

6. Main ideas suggested by participants during the workshops and the shared or debated narratives and arguments that led to them

The debate started with the question of whether participants had prior general experience with AIT or other ADMS. A little more than half of the participants (23/42, 55%) indicated that they had some experience with AIT/ADMS. Most users with experience were lawyers and academics. Around 26% (11/42) were uncertain about whether they had ever experienced AI or ADMS, while 19% (8/42) responded that they had no experience at all. In reply to the question of whether a lack of experience with AIT and ADMS was a possible obstacle to advocating public participation in the Model Rules, Prof Schneider said that such people may nonetheless have questions which should be addressed. Further, the ELI Model Rules also require expert participation, meaning that the overall assessment and reflection will be comprehensive.

Returning to the statistics, Prof Craig explained that people sometimes become aware that decisions are being made by an ADMS only when something goes wrong. People then start unravelling backwards to determine what went wrong. Variables put into the complex system may end up being biased, which is a dangerous problem in the case of self-learning ADMS. Prof Craig also observed that ADMS may make administration more efficient. He emphasised, therefore, that ELI's Model Rules are not against AI/ADMS.

Fewer participants (16/36, 44%) had experienced AIT/ADMS in the context of public administration, with 19% (7/36) being uncertain and 36% (13/36) having no experience at all. The use of very basic AI tools by public administration was one potential reason for the low figures, as was the fact that, according to an eGovernment Benchmark study prepared for the European Commission and released in 2021, entitled 'Entering a New Digital Government Era', the use of AIT and ADMS varies widely across the Member States; as such, participants from the Baltic States and the Netherlands would have more experience using AIT/ADMS than those from Germany, for instance.

In response to a question on the scope of the Model Rules in relation to the EU's AI Act, Judge Clément said that the Model Rules focus on the risks posed by the technology and avoid the problem of defining AI/ADMS, etc, as this is not the issue that is really at stake. The risk-based approach avoids having to define some technologies and not others, and avoids the problem that some technologies change so rapidly that any definition would leave the law several paces behind.

In response to a question on the extent to which a citizen has the right to request human intervention to address risks inherent in an AI/ADMS system, Prof Schneider referred to the report in the Model Rules that requires the identification of options to minimise risks. He said that the right to ask for a decision made by a human being is one key element of these measures, though it may not be the only one or the best one.
Prof Craig added that, under English law at least, one would have to look to the source of the statutory authority to exercise power over topic X to ascertain whether one would ordinarily have a right to a hearing or to be heard in some form before a decision on topic X is made. If so, the question arises of whether it was lawful for the public administration to introduce a system of AI/ADMS on topic X without the right to human intervention, and this would be for the courts to decide in light of the relevant empowering legislation. If the right to be heard is important, it cannot be pushed off the 'edge of the cliff'.

The audience identified benefits to AI as well as risks. Participants identified discrimination against certain groups of persons, risks to privacy, and non-transparent and non-explainable decision making as key risks of AIT/ADMS when used by public authorities. Cybersecurity concerns and over-reliance by public authority staff on AIT/ADMS were lesser concerns. That discrimination and biased output were key concerns did not come as a surprise. The panellists explained that, on the one hand, society has become more aware of the dangers of discrimination in public decision making than it was a generation ago. On the other hand, the range of what constitutes discrimination has become more complex and has expanded. An example given was LGBTIQ+ rights, which are now at the forefront of people's minds but were not a generation or two ago. Therefore, many different variables must be considered when designing AIT/ADMS, making sure that they are fit for purpose by eliminating the various ways in which they might potentially discriminate. However, an issue that often appears is a disjunction between the technicians designing the system and policy people, owing to the latter's limited ability to communicate their wishes to the technicians or to test whether those wishes have been adequately taken into account. This is precisely why the ELI Model Rules provide for two steps: experts on the one hand and, on the other, public participation by individuals, non-governmental organisations that may specialise in the field, etc.

The overlap between the risks identified above was emphasised by a participant. These risks, coupled with a lack of transparency, give the impression that discrimination exists. This is exacerbated where there are self-learning systems that learn from the state of play. On the other hand, the fact that such systems can be tested means it is easier to detect discrimination than in the case of human decision making.

Discussion then turned to the benefits of AIT/ADMS. Participants identified faster decision making, increased equal treatment of similar cases/better control of biased decision making, and enhanced quality of decision making as advantages when AIT/ADMS is used by public authorities. The panellists opined that these benefits do not come as a surprise, as humans are fallible and may be irrational and slower in processing. It was highlighted that the more mechanistic or routinised an issue is, the more likely a decision is to be fair, and vice versa in the case of polycentric issues involving many variables, etc. An example was given from the US of Northpointe's risk-assessment tool, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), used to predict hot spots of violent crime or determine the types of supervision that inmates might need.
Lawyers had asked for the tool to be developed to avoid the potential bias of human evaluators, whose assessments would have been based on their own subjective experiences rather than on objective criteria. At the same time, as such tools are developed on the basis of the experience of human beings, biases can be copied into the way the tools are designed.

A fruitful discussion followed on how to mitigate or minimise risks without compromising the benefits of AIT/ADMS used by public authorities. The three most selected options were impact assessments by public authorities prior to the deployment of AIT/ADMS, independent expert participation in such impact assessments, and judicial review by courts of decisions taken or supported by AIT/ADMS, all of which are recommended by ELI's Project Team, depending on the context in which the AIT/ADMS is deployed. The panellists once again emphasised the need for external monitoring, first by experts and then by the public, to minimise unconscious bias. The imposition of strict liability rules in the case of damage was also considered a helpful recourse, as were oversight powers for independent data protection supervisors.

Participants identified scarce human resources in public authorities, scarce hardware and a lack of expertise in the field in public authorities in Europe, followed closely by dependence on non-European companies and a lack of data as a result of data protection rules, as the most important challenges to ensuring trustworthy/reliable use of AIT/ADMS by public authorities.

7. General atmosphere and expected follow-up

The collective conclusion was that communication between the various actors is essential. Information and accountability are also key. An impact assessment is a way to collect more information and to facilitate the authority's own reflection on the benefits and risks of the system it is using. This facilitates dialogue within the administration, between tech people on the one hand and policy makers on the other. A further way to reinforce this in high-risk systems is to have an independent expert scrutiny body look at the initial report of the implementing authority and evaluate it. In doing so, experts provide an external voice to internal reflections, thus facilitating the mitigation of risks. Ensuring public participation and access to courts to challenge the implementation of such systems is also of crucial importance in this respect.

Discussants were very keen to read the ELI Team's output, given its emphasis on key, timely ex ante procedures to mitigate risks and foster public confidence in AIT/ADMS on the one hand, while delivering the above-identified benefits on the other. They were informed that it would be uploaded onto ELI's website upon approval by ELI's Council and its Fellows.