Artificial intelligence is increasingly discussed as a possible solution to challenges in forensic science and the criminal justice system, particularly workload pressures and the summarisation of complex data. Within the criminal justice system especially, where decisions carry high stakes for individuals’ lives, the transparency and explainability of such systems must be clearly established to ensure that their use is fair and just.
However, in a forensic context, I am not always convinced that these terms are being used consistently, or that the difference between them is clearly understood.
In discussions about artificial intelligence in forensic science, it is also important to recognise that not all AI systems function in the same way. Broadly, models used in forensic and criminal justice contexts may be predictive, such as rule-based or statistical models that classify, estimate risk, or support decision-making, or generative, producing new outputs such as text, images, or synthetic data. These model types raise distinct challenges. Predictive systems may be complex yet bounded by defined inputs and outputs, whereas generative systems introduce additional uncertainty because their outputs are not direct mappings from known variables. As a result, expectations around transparency, explainability, and evidential reliability cannot be applied uniformly across all AI systems.
As AI systems progress, they are becoming increasingly complex and thereby more opaque, behaviour often described as a ‘black box’ (Ali, 2024). ‘Black box’ behaviour is a lack of transparency and explainability: it is unclear how specific inputs lead to particular outputs. From a forensic perspective, the distinction between transparency and explainability matters. The two are related, but they are not the same.
Transparency refers to openness about how an AI system is designed, used, and governed. This includes details of its purpose, its limits, how it was trained and validated, version control, audit trails, and the extent of human oversight. Transparency supports accountability and procedural fairness because it allows systems to be scrutinised, reviewed, and challenged (Cheong, 2024; OECD, 2025).
Explainability, by contrast, is the understanding of why a system produced a specific output. In practice, this often involves post-hoc techniques such as feature importance scores, saliency maps, or local explanations. These approaches can be helpful, but they are often partial and sometimes unstable, particularly for complex models (Ali, 2024).
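To make this concrete, the sketch below illustrates the logic behind one common post-hoc technique, permutation feature importance: shuffle one feature at a time and measure how much the model’s accuracy drops. The data and the stand-in “model” here are entirely synthetic and illustrative, not drawn from any forensic system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 is informative, feature 1 is pure noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    # Stand-in for a trained classifier: it happens to rely on feature 0 only.
    return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def permutation_importance(predict, X, y, n_repeats=10):
    """Mean drop in accuracy when each feature is shuffled in turn."""
    baseline = accuracy(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
            drops.append(baseline - accuracy(y, predict(Xp)))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(model, X, y)
# Shuffling feature 0 degrades accuracy sharply; shuffling feature 1 changes nothing.
```

Even this simple example only measures how the model’s outputs respond to perturbed inputs; it says nothing about the model’s internal reasoning, and for complex models such scores can be unstable across runs, which is precisely why these explanations are partial.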
A system can therefore be transparent without being meaningfully explainable, and explainable without being properly governed.
In criminal justice discussions, explainability is often presented as the answer to the “black box” problem: if an AI output can be explained, it is assumed to be acceptable for use. In forensic contexts, this assumption is not sufficient. Many explanation methods generate outputs that are plausible rather than faithful representations of how a model actually works.
In forensic contexts, the challenge is therefore not simply whether an AI system can produce an explanation, but whether its use can be justified, scrutinised, and defended within an evidential framework. Transparency supports this process by enabling accountability and challenge, while explainability may assist in the interpretation of outputs.
Confusing the two risks misplaced confidence in AI systems whose operation remains poorly understood in forensic practice.
References:
Ali, I. (2024) AI Transparency and Explainability. Frankfurt: Frankfurt University of Applied Sciences.
Cheong, B.C. (2024) ‘Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making’, Frontiers in Human Dynamics, 6, Article 1421273. doi:10.3389/fhumd.2024.1421273.
OECD (2025) Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions. Paris: OECD Publishing. doi:10.1787/795de142-en.
