Artificial intelligence is increasingly integrated into the criminal justice system, offering new capabilities for data analysis, investigation, and decision support. At the same time, evidence derived from artificial intelligence (AI evidence) poses significant challenges to traditional evidentiary frameworks built on the principles of authenticity, relevance, and legality. Unlike conventional evidence, which derives from direct human observation or passive data recording, AI evidence is typically generated through algorithmic processing, analytical modeling, or automated content generation, interposing a technological intermediary into the evidentiary process.
As discussions of AI governance evolve, increasing attention is being paid to algorithmic transparency, data integrity, and system accountability. Within the criminal justice context, however, widely accepted technical standards and procedural mechanisms for evaluating AI-generated evidence remain underdeveloped. This gap between rapid technological adoption and established evidentiary practice may heighten the tension between innovation and the requirements of procedural fairness. Addressing these challenges requires systematic examination from three perspectives: the classification of AI evidence, the analysis of its admissibility challenges, and the development of appropriate regulatory and procedural frameworks.
