Based on a literature review, this article examines the ways in which AI can be used for criminal activities. The literature can be divided into three categories: (1) crimes with AI, (2) crimes on AI, and (3) crimes by AI. The first category consists of the use of AI to commit traditional forms of crime. A well-known example is the deepfake, in which people are deceived with fabricated images. The second category is crime directed against AI, for example attempts to hack an AI system itself or to deliberately manipulate a decision model. In the third category, AI commits crimes without any human action: the AI makes decisions that are punishable under the letter of the law, such as discriminatory practices. Finally, the article covers legal debates on the liability, responsibility and criminalization of AI crime.
Justitiële verkenningen
Introduction
Authors | Marc Schuilenburg and Wim Bernasco
AI crime: an exploration of current manifestations
Keywords | artificial intelligence (AI), deepfakes, phishing, botnets, responsibility
Authors | Marc Schuilenburg and Melvin Soudijn
Working together with police machines: Police craftsmanship in the age of artificial intelligence
Keywords | digital skills, data skills, software use, learning processes
Authors | Wouter Landman
Artificial intelligence (AI) is slowly but fundamentally changing police practice. Processes of human sensemaking are increasingly augmented, and sometimes automated, by AI. As a consequence, police officers have to work together with police machines, which requires new knowledge and skills. The foundation of these competences is a digital mindset. In addition, data skills, software skills, and legal and ethical knowledge and skills are crucial parts of the required craftsmanship. Police organizations face the task of facilitating learning processes, which preferably take place in operational teams by supporting the use of AI applications for the tasks at hand.
Exploring the digital front line: possible applications of artificial intelligence at the Royal Netherlands Marechaussee
Keywords | information-driven operations, sensor technology, Dutch Ministry of Defence, bias, automation
Authors | Jorrit Bootsma and Mariel van Staveren
The security domain is currently facing major challenges, both internationally and nationally. The Royal Netherlands Marechaussee is therefore committed to the development of artificial intelligence (AI) to improve its effectiveness. This article describes three possible applications of AI within the Marechaussee: the virtual border guard, smart analysis of sensor data, and autonomous robotics for security tasks. The article then discusses a number of potential problems related to the use of AI: the effectiveness, bias and transparency of algorithms, and the lack of a legal framework. Examples and mitigating measures are given for each of the potential problems. |
What can artificial intelligence mean for the quality of forensic advisory reports?
Keywords | indications for forensic care, advisory processes, artificial intelligence
Authors | Margriet Mutsaers and Maaike Kempes
In the criminal justice process and during sentencing, there are several moments at which crucial advice needs to be provided. Because these moments have significant consequences for those involved and for society, it is essential that these processes are executed with care and that continuous efforts are made to improve and safeguard their quality. With the rise of AI, the question has emerged whether this technology can help enhance considerations in the context of forensic diagnostics. On the one hand, automation is considered, allowing human intervention to be partially removed from the process. On the other hand, the strengthening of human capabilities is considered, by deploying them more efficiently or effectively. This article delves into the experiences gained from research on how AI can contribute to complex decisions in forensic diagnostics, particularly in the assessment for clinical forensic care by the Netherlands Institute of Forensic Psychiatry and Psychology (NIFP).
Automated recognition and prediction of emotions for crowd control: opportunities and risks
Keywords | group behavior, emotion recognition, emotion contagion, artificial intelligence
Authors | Emmeke Veltmeijer, Erik van Haeringen and Charlotte Gerritsen
With an ever-growing world population, mass incidents like those at the Astroworld Festival in 2021 and Halloween in Seoul in 2022 may occur more often. Crowd management, with artificial intelligence (AI) as a promising tool, can offer solutions. AI can analyze crowds, predict escalations, and enable early interventions. Detecting group emotion plays a crucial role, which can be achieved through various data modalities, each with its own advantages and disadvantages. Multi-agent simulation models then predict emotional contagion effects in crowds. Various considerations are discussed, with privacy, validity, and applicability being central. |
Democratic challenges of AI applications in the Living Lab Scheveningen
Keywords | inclusive AI, Quintuple Helix, ELSA, living lab, public safety
Authors | Professor Gabriele Jacobs, Dr. Friso van Houdt, Dr. ginger coons et al.
In this article, the authors explore the application of artificial intelligence (AI) in the public safety domain, using the example of the Living Lab Scheveningen (LLS). The Scheveningen boulevard serves as a testing ground for experimenting with AI applications, including maritime vessel detection, nitrous oxide detection, crowd density measurement, and the prediction of crowd levels. Not all experiments prove successful, but the knowledge gained from them remains valuable. Studying innovative AI applications also requires innovative research concepts and methods (such as inclusive AI, ELSA, and the Quintuple Helix). The authors discuss the ethical, legal, and social aspects and the democratic challenges of AI applications in the public safety domain.
Ethical aspects of developing and applying AI: A method for reflection and deliberation
Keywords | consequentialist ethics, duty ethics, relational ethics, virtue ethics, iterative
Authors | Marc Steen
This article discusses some ethical aspects involved in developing and applying AI systems: positive contribution, human autonomy, privacy, justice, and transparency. It would be good if the people involved in development and application consider such ethical aspects explicitly and carefully. To this end, they can organize a process of reflection and deliberation: a series of meetings in which, for example, data scientists, software engineers, lawyers, civil servants, human rights experts and citizens discuss such ethical aspects. Rapid Ethical Deliberation is discussed as an example of such an approach, with some examples from the justice and security domain. Finally, a brief discussion follows on the added value of transdisciplinary approaches (ELSA) and virtue ethics for organizing such meetings. |
Is more really better? Technical, organizational and legal considerations when scaling AI applications in the security domain
Keywords | artificial intelligence, violence detection, police, public values
Authors | Martijn Wessels, Jeroen van Rest, Liisa Janssens et al.
Decentralized components of national public safety organizations, such as regional police units, experiment with and develop AI applications. The police often aim to make successful local AI applications more widely available through scaling. Based on the application of violence detection technology within the police force, this article discusses the technical, organizational, and legal aspects that are relevant during scaling. The article argues that fully centralizing AI applications does not necessarily enhance the efficiency and effectiveness of the police, or their accountability. Nuances around scaling AI applications are discussed through various variants of scaling: scaling up, scaling out, scaling down, and scaling deep. Regulatory sandboxes make it possible to examine the effects of local AI applications; they provide the opportunity to investigate the organizational, technical, and legal aspects of an AI application and to inform choices regarding (forms of) scaling.