Introduction
Facial recognition technology (FRT) is rapidly transforming various sectors, including law enforcement and governance. In India, the government's push for digitalization has driven increased FRT adoption, raising concerns about its legal and safety implications. This essay examines FRT's functionality, its legal basis, its potential for misuse, data-breach risks, and privacy implications. Finally, it emphasizes the need for a robust legal and regulatory framework to ensure responsible FRT deployment.
FRT and the challenges it raises
Facial Recognition Technology (FRT) is an automated process that compares two facial images to determine whether they belong to the same person. Initially, an image is uploaded to FRT software, which uses a feature analysis algorithm to measure distinct facial features such as the nose, eyes, and lips, and their spatial relationships. These measurements are converted into a mathematical face template for comparison with a known template. The software then generates a score or percentage indicating the likelihood of a match between the captured image and the template. The accuracy of FRT results is influenced by factors such as photo quality, makeup, lighting conditions, and the angles and distances at which images are captured.
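To make this pipeline concrete, the following minimal sketch shows the capture, template, and score steps, assuming the open-source face_recognition Python library (a wrapper around dlib). The file names and the 0.6 threshold are illustrative only, not a description of any system deployed in India.

import face_recognition

# Step 1: load the probe image and the known (enrolled) image.
# These file names are hypothetical placeholders.
probe_image = face_recognition.load_image_file("probe.jpg")
known_image = face_recognition.load_image_file("enrolled.jpg")

# Step 2: convert each face into a mathematical template
# (here, a 128-dimensional embedding vector). This assumes
# exactly one face is visible in each image.
probe_template = face_recognition.face_encodings(probe_image)[0]
known_template = face_recognition.face_encodings(known_image)[0]

# Step 3: compute a distance score; lower means more similar.
distance = face_recognition.face_distance([known_template], probe_template)[0]

# Step 4: apply a threshold to declare a match. 0.6 is this
# library's conventional default; real deployments tune it,
# trading false matches against false non-matches.
THRESHOLD = 0.6
print(f"distance={distance:.3f}, match={distance < THRESHOLD}")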
FRT systems deployed by Indian agencies
Indian agencies have implemented Facial Recognition Technology (FRT) systems for various purposes. The Punjab Police utilizes the Punjab Artificial Intelligence system (PAIS), allowing officers to capture images with smartphones and cross-reference them with a database of convicted offenders in Punjab's jails. Additionally, technologies like Face Tagr in Chennai and NEC's Neoface in Surat are employed for real-time data matching and monitoring of individuals of interest.
The National Crime Records Bureau (NCRB) initiated the National Automated Facial Recognition System (AFRS) in July 2019 to modernize law enforcement by enabling information recording, analysis, retrieval, and sharing among organizations. The AFRS aims to create a centralized database capable of identifying individuals using existing digital databases such as the Crime and Criminal Tracking Network & Systems (CCTNS), the Immigration, Visa and Foreigners Registration & Tracking (IVFRT) system, and the Interoperable Criminal Justice System (ICJS), among others.
Furthermore, the Indian government has launched surveillance projects such as the National Intelligence Grid (NATGRID) and the Centralized Monitoring System (CMS) to enhance intelligence-gathering capabilities. However, these initiatives, along with the Lawful Intercept and Monitoring (LIM) project, the Network Traffic Analysis System (NETRA), and the Crime and Criminal Tracking Network & Systems (CCTNS), have faced criticism for enabling potential mass surveillance, lacking transparency, and operating without adequate legal oversight mechanisms.
Biased AI
The use of poorly designed and poorly trained Facial Recognition Technology (FRT) systems can lead to discriminatory, inaccurate, and biased outcomes. Concerns about the misuse of FRT in the UK prompted the Science and Technology Committee of the House of Commons to recommend delaying its deployment until issues of effectiveness and bias are fully addressed.
Potential biases in FRT could stem from factors like skin color, geography, religion, and caste. AI models often exhibit discriminatory tendencies due to biased data collection or labeling methods.
For instance, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used by judges in the US to predict whether defendants should be detained or released on bail pending trial, was found to be biased against African-Americans. The bias was traced to skewed training data that perpetuated historical racism, disproportionate surveillance, and other inequalities in police practices.
In India, the introduction of FRT could exacerbate discrimination against marginalized communities such as Muslims, Scheduled Castes, and Scheduled Tribes, who already face disproportionate targeting by law enforcement. These groups make up approximately 39% of India's population but account for a much higher share of the prison population: 55% of the undertrial population, according to the 2015 NCRB Report on Prison Statistics.
This reflects systemic biases in policing and criminal justice. Implementing FRT in such contexts could further entrench bias within the technology from its inception.
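To see how such bias can be detected before deployment, here is an illustrative Python sketch that audits a matcher's false match rate across two demographic groups. All data here is synthetic and the error rates are invented for demonstration; in practice the predictions would come from an FRT system evaluated on a labelled benchmark.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)        # demographic group label
is_same_person = rng.random(n) < 0.5          # ground truth for each pair

# Hypothetical matcher that errs four times as often on group B:
error_rate = np.where(group == "A", 0.02, 0.08)
predicted_match = np.where(rng.random(n) < error_rate,
                           ~is_same_person, is_same_person)

for g in ["A", "B"]:
    mask = (group == g) & ~is_same_person     # true non-match pairs only
    fmr = predicted_match[mask].mean()        # false match rate
    print(f"group {g}: false match rate = {fmr:.3f}")

A disparity between the two printed rates is exactly the kind of outcome auditors found in systems like COMPAS: the model is systematically less reliable for one group than another.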
Data security
In December 2022, RailYatri, a train ticketing platform, disclosed a data breach. Subsequently, reports emerged of a potential leak in the CoWIN portal, where personal data of Indian citizens was allegedly exposed via a Telegram bot. The leaked data included names, passport details, and Aadhaar numbers of individuals registered on the COVID-19 vaccination network.
In October 2023, Resecurity, an American cybersecurity firm, revealed that the personal information of 815 million Indian citizens, including passport details and Aadhaar numbers, was being traded on the dark web. Although it remains unclear how the attackers acquired this data, they claimed access to a 1.8-terabyte leak from an undisclosed "India internal law enforcement agency."
The history of data breaches in the Indian government, notably concerning Aadhaar, raises concerns about the security of potential Facial Recognition Technology (FRT) systems in the country.
Right to Privacy
Unlike India, EU member states have the GDPR as a first line of defence against the misuse of FRT. Article 9 of the GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, subject to narrow exceptions.
In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court of India ruled that the right to privacy is a fundamental right under the Constitution. Although this right is subject to reasonable restrictions, any restriction must satisfy a threefold requirement: (i) proportionality, (ii) a legitimate state aim, and (iii) the existence of a law.
The requirement for the existence of a law flows from Article 21 of the Constitution, which stipulates that no person shall be "deprived of his life or personal liberty except according to procedure established by law."
Legal Framework
Under the Digital Personal Data Protection Bill, 2022, both public- and private-sector data fiduciaries would be required to conduct a data protection impact assessment before deploying facial recognition technology.
However, the Bill also empowers the Central Government to exempt any government agency from its provisions. The Government could therefore exempt law enforcement agencies from the requirement to undertake a data protection impact assessment. This would threaten privacy and data security, and would allow authorities to avoid accountability in the event of a data breach.
Sections 69 and 69B of the Information Technology Act, 2000, inserted by the 2008 amendment, also lay down provisions for the interception, monitoring, and decryption of digital information and data by the State.
Conclusions
The use of facial recognition technology in India by law enforcement and the state is nascent but growing. There are a few critical points to consider before implementing FRT in the country.
Need for a legal and regulatory framework
Presently, no legal or regulatory framework governs the use of FRT in India, and existing surveillance laws do not clearly extend to it.
Unbiased training data for FRT AI
An AI system is only as good as the quality of its input data: training a model on a biased dataset reproduces and amplifies that bias at scale. Cleaning training datasets of conscious and unconscious biases is a step toward building unbiased AI. In practice, however, complete eradication of bias is challenging; humans create the data, and as long as humans are involved, biases may persist. What we can do is minimize bias by auditing data before training, as sketched below.
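The following minimal Python sketch illustrates one such pre-training audit: checking whether the demographic composition of a face dataset is skewed before any model is trained on it. The records, group labels, and the 10% threshold are hypothetical.

from collections import Counter

# Hypothetical face dataset: each record carries an image path
# and a demographic group label assigned during data collection.
dataset = [
    {"image": "img_001.jpg", "group": "A"},
    {"image": "img_002.jpg", "group": "A"},
    {"image": "img_003.jpg", "group": "B"},
    # ... thousands more records in a real dataset
]

counts = Counter(record["group"] for record in dataset)
total = sum(counts.values())
for group, count in counts.items():
    share = count / total
    print(f"group {group}: {count} images ({share:.1%})")
    # Flag heavy skew so it can be corrected by re-sampling or
    # targeted data collection before training begins.
    if share < 0.10:
        print(f"  warning: group {group} is under-represented")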
Data security and accountability
Data security is paramount for facial recognition technology because of the sensitive nature of biometric data. Unencrypted facial templates are vulnerable to data breaches, increasing the risk of identity theft, stalking, and harassment. Proper data storage, security measures, transparency, and individual control over biometric data are essential to prevent unauthorized access and misuse. Authorities must ensure accountability, accuracy, and compliance with legal and privacy regulations to protect individuals' privacy rights and mitigate the risks associated with breaches and misuse of facial recognition data.
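As a minimal sketch of what encryption at rest could look like, the example below encrypts a biometric template using the widely used Python cryptography library (symmetric Fernet encryption). The template bytes are a placeholder; a real system would also need key management, access control, and audit logging.

from cryptography.fernet import Fernet

# In production, the key would live in a hardware security module
# or a dedicated key-management service, never alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Placeholder for a serialized face template.
face_template = b"\x01\x02\x03..."
encrypted = cipher.encrypt(face_template)

# Only holders of the key can recover the template; a breach of the
# encrypted store alone does not expose the biometric data.
restored = cipher.decrypt(encrypted)
assert restored == face_template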
~Authored by Akash Yadav, 4th-year B.A. LL.B. (Hons.) student at Dr Ram Manohar Lohia National Law University, Lucknow