Joint Conference on Ethical and Legal Aspects of AI for Law Enforcement

The European Commission-funded projects STARLIGHT, ALIGNER and popAI (coming from the same cluster of H2020 calls) and the AP4AI project hosted a joint conference on “Ethical and Legal Aspects of AI for Law Enforcement” on January 25th and 26th at the CEA premises in Brussels.

The event was co-organised with the European Commission (DG Home), jointly conducted with CENTRIC and Europol, and supported by the EU Agency for Fundamental Rights (FRA), Eurojust, the EU Agency for Asylum (EUAA) and the EU Agency for Law Enforcement Training (CEPOL) in the framework of the EU Innovation Hub for Internal Security.

The conference brought together a diverse range of stakeholders from the European civil security ecosystem to discuss the specific challenges and needs around the development, deployment and use of AI from the perspectives of different actors: national Law Enforcement Agencies, researchers, civil society, ethicists, legal and social experts, industry, policy makers and European Agencies.

During two full days of keynote speeches, panel discussions, and hands-on workshops, participants reached a better common understanding of the AI landscape in Europe and internationally. This was a key objective of the event, given the fast pace of change in AI regulation, development and use, as well as the complexity of the AI ecosystem.

Participants reached consensus that preparing all actors for the enforcement of the EU AI Act requires practical, effective, and cost-effective socio-technical tools and guidance, as well as certified processes for practitioners, researchers, and industry/market stakeholders to follow. To this end, the event provided insights into the challenges of the ethical assessment of AI and allowed participants to discuss biases and other issues across all phases of the AI lifecycle, including data collection, development, training and use. Three interactive workshops were held on the second day, drawing on the benefits of a diverse and experienced audience.

During the first workshop, the AP4AI project showcased its conformity assessment tool, and STARLIGHT partners provided insights into how an online self-assessment tool might address the need for practical, user-centric compliance support in the era of the EU AI Act. The AP4AI tool is specifically designed to guide internal security practitioners through a compliance evaluation of their internal processes when developing and deploying AI systems, based on 12 AI accountability principles.

In the second workshop, the ALIGNER project presented its methodology for ensuring compliance with the European legal and ethical frameworks governing the use of AI technologies in the security domain. The ALIGNER partners illustrated and discussed their Human Rights Impact Assessment template, which aims to assist Law Enforcement Agencies in deploying their AI systems in full respect of fundamental rights.

The final workshop of the day was chaired by popAI partners and showcased a novel methodology based on deconstructing controversial AI cases into knowledge items that can in turn be used to design more responsible AI technologies, policies, and procedures. The method aims to support diverse stakeholders in working jointly on complex sociotechnical issues through creative foresight-scenario approaches.

The event emphasized the importance of adopting a cross-disciplinary approach that unifies technical, social, ethical, and legal perspectives to ensure that AI in the civil security domain benefits society and does not infringe human rights. It highlighted cross-disciplinary approaches to compliance in both the operational and the R&D phase, analysing and discussing controversies, gaps and grey areas that may hinder EU Law Enforcement in the fight against crime. As a result, it emphasized the need for education, collaboration, and advocacy to maintain LEAs' technological capabilities while putting the safeguarding of human rights and the strengthening of trust between actors in the AI ecosystem at the forefront, in order to strengthen freedom, justice and security in the EU.

Furthermore, the event addressed ethics and trustworthy AI, topics that are gaining ever greater relevance in EU-funded research. These aspects cut across the entire life-cycle of AI technologies and are of utmost importance for the social acceptance of such technologies. The four projects tackle ethics and trustworthiness from different angles, ensuring a more comprehensive and exhaustive understanding of the topic. The synergies among ALIGNER, STARLIGHT, popAI, and AP4AI have been much appreciated and could offer new opportunities for collaboration in the future.

ALIGNER, popAI and STARLIGHT have received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreements no. 101020574, 101022001 and 101021797.

Download the press release below to find out more about future events.