| 000 | 01856nam a22001337a 4500 |
|---|---|
| 999 | _c532890 _d532890 |
| 008 | 260330b \|\|\|\|\| \|\|\|\| 00\| 0 eng d |
| 100 | _aKamlakar, Tanay Pramod and Pretana, Sanjay _959893 |
| 245 | _aAlgorithmic bias in forensic AI and the legal standards for admissibility in India |
| 260 | _aJournal of the Indian Law Institute |
| 300 | _a67(1), Jan-Mar, 2025: p.18-48 |
| 520 | _aThe integration of Artificial Intelligence (hereinafter, "AI") in forensic practices, particularly facial recognition, predictive policing, and gait analysis, has begun to reshape the Indian criminal justice landscape. While these tools offer operational efficiency, they also pose significant risks of algorithmic bias and evidentiary unreliability. This paper critically evaluates the admissibility of AI-generated forensic evidence under the Bharatiya Sakshya Adhiniyam, 2023 and the Bharatiya Nagarik Suraksha Sanhita, 2023. It examines how caste, gender, religion, and socio-economic bias may become structurally encoded within algorithms, thereby violating constitutional protections under articles 14, 20(3), and 21. Drawing on jurisprudential developments from the United States, the United Kingdom, and the European Union, the paper analyses global benchmarks on reliability, transparency, and due process in AI-enabled evidence. It concludes by proposing detailed statutory and procedural reforms to ensure algorithmic accountability, evidentiary integrity, and judicial scrutiny, thereby aligning India's evidentiary framework with constitutional mandates and international best practices. Reproduced from http://14.139.60.116:8080/jspui/bitstream/123456789/48451/1/04_Algorithmic%20Bias%20in%20Forensic%20AI%20and%20the%20Legal%20Standards%20for%20Admissibility%20in%20India.pdf |
| 773 | _aJournal of the Indian Law Institute |
| 942 | _cAR |