Law enforcement agencies (LEAs) have access to more data than ever before. Recent years have seen unprecedented growth in technologies that create, store and distribute vast quantities of data, which has intensified investigatory challenges but also represents an opportunity for AI adoption. LEAs and security practitioners face these issues every day, creating an unmet demand for new AI-powered operational capabilities in the fight against (cyber)crime and terrorism.
AI offers LEAs opportunities to streamline operations, enhance productivity and efficiency, spot patterns, and make decisions with greater speed and accuracy. To maximise the benefit of AI, LEAs must take a critical and human-centric approach to implementing AI technologies for the safety and security of society.
AI requires legal and ethical safeguards. LEAs are legally required to justify their conclusions, decisions and acts, including in their application of AI. Therefore, AI tools should be transparent and able to explain the decision-making processes underlying their outputs.
Advanced technology can also be exploited for criminal ends. Off-the-shelf AI software may allow criminals to amplify existing threats, transform them and introduce new ones. LEAs must be equipped to respond to these threats through stronger cybersecurity and awareness of adversarial AI.
Close collaboration with civil society to understand the societal implications of increased AI adoption will enable LEAs to build trust. Working with policymakers will support the construction of legislative and ethical instruments that pave the way for long-term standardisation mechanisms supporting AI research.
Therefore, a dedicated hub is needed that unifies multidisciplinary AI knowledge, resources and data for LEAs across Europe, reducing fragmentation and promoting a harmonised environment for long-term AI innovation. STARLIGHT will drive effective innovation, adoption and uptake of AI for security in Europe.
STARLIGHT aims to create a community that brings together LEAs, researchers, industry and practitioners in the security ecosystem under a coordinated and strategic effort to bring AI into operational practices.
STARLIGHT will achieve its strategic goals by capturing the key challenges in the following guiding principles:
- Follow a human-centric approach to AI to design and develop AI tools - carefully addressing potential ethical and legal implications - which are responsible (meeting society's needs, eliminating or mitigating negative impacts, encoding ethical values), explainable (transparent and understandable decision-making processes, reinforcing the admissibility of any resulting evidence in court), trustworthy (upholding all applicable laws and regulations, respecting ethical principles and values, technologically robust), and accountable (enabling the assessment of algorithms, data and design processes), in accordance with existing ethical frameworks and guidelines compatible with EU principles and regulations.
- Build on relevant past and current projects to accelerate project execution. These projects will bring best practices, systems to leverage and interoperate, AI components to exploit and evolve, adopted ethical and legal frameworks, and detailed and documented use cases that provide a benchmark for the project.
- Employ co-design and co-creation utilising short research, development and testing cycles. Having LEAs and technology providers collaborate to identify and update gaps, requirements and challenges supports the flexibility and adaptability demanded by rapidly evolving technologies such as AI and by the changing priorities of LEAs.
- Take an open-source first approach to technologies and solutions, to facilitate their adoption, adaptation and scale up at national and EU levels.
- Adopt a “privacy and data protection by design and by default” continuous integration approach for the development of the AI tools, services and the STARLIGHT framework, in accordance with the GDPR, the Law Enforcement Directive (LED) and any future AI regulation.
Core domain areas
STARLIGHT addresses a wide range of core domain areas for LEAs, aiming to demonstrate the applicability and viability of the developed solutions across a number of high-priority threats:
- Child Sexual Exploitation
- Border and External Security
- Cybersecurity and Cybercrime
- Serious and Organised Crime
- Protection of Public Spaces
STARLIGHT’s strategic goals will be achieved through delivery against 11 core objectives.
- Define, analyse and extract LEAs' use case scenarios, capabilities and gaps, and requirements for AI
- Promote collaboration and agency to enhance uptake of novel technologies for existing capability gaps
- Embed acceptance, and assess the impact, of the use of AI technologies by law enforcement
- Observe and implement mechanisms for the socially, legally and ethically sound deployment of AI by LEAs
- Provide LEAs with a rich set of advanced, easy-to-integrate and interoperable AI tools and solutions that improve prevention, detection and investigation capabilities across multiple security domains
- Deliver AI-based cybersecurity tools that protect LEAs' AI solutions against cyber threats, including adversarial AI
- Continuously pilot test, validate and evaluate AI tools for operational maturity
- Create realistic and representative data for training and testing AI tools
- Build a sustainable AI European Community of LEAs, researchers and industry for security
- Realise the STARLIGHT framework as an environment for multi-stakeholder, collaborative AI analytics
- Create a compelling and sustainable ecosystem that continuously informs LEAs and provides them with cutting-edge AI tools tailored to their needs