This blogpost is inspired by a seminar on police use of facial recognition technology in India that took place on April 21st, 2022, and implicitly quotes and paraphrases speaker Shivangi Narayan.
Amid a broader wave of nationwide protest, communal riots broke out in northeastern Delhi in February 2020. The protests were directed against the Citizenship Amendment Act (CAA) proposed by the ruling Bharatiya Janata Party (BJP) and its accompanying call for a National Register of Citizens (NRC). Officially, 53 people were killed, but unofficial estimates run into the thousands. Over a thousand inhabitants were also displaced, deepening residential segregation. Northeast Delhi lies on the border of the city and is predominantly populated by Hindus and Muslims from low castes, most of them migrants to the city from the neighboring states of Uttar Pradesh, Rajasthan, and Bihar. The area has the second highest Muslim population in Delhi. These protests, led by Muslim women in predominantly Muslim and impoverished neighborhoods, can be seen as part of a longer cycle of conflict in the creation of an authoritarian state built on a single Hindu identity. Since the Gujarat pogrom of 2002, Hindu nationalism has increasingly shaped state discourse in the world's largest democracy, with Prime Minister Narendra Modi (BJP) as its personification.
During the Delhi riots of 2020, the police identified 1,100 'rioters' through facial recognition technology. Although it was not new for the Indian police to use this technology to identify protesters, it was the first time the government admitted to its use. Facial recognition technology refers to a collection of tools for identifying or verifying human beings from photographs, videos, or in real time. It relies on biometric analysis that identifies or verifies an individual based on patterns derived from their facial characteristics and features. While there is a range of facial recognition techniques, prevalent models use an image to create a mathematical representation of a person's face, which can then be compared against the representations captured in an existing database or gallery of photographs to find likely matches. Since many of these technologies are produced in the West, they are often not trained on Indian faces. To tackle this bias, the Indian police started using images of Bollywood actors to train their algorithms.
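To make the matching step concrete, the following is a minimal, hypothetical sketch of comparing a face embedding against a gallery of stored representations. The vectors, names, and threshold here are illustrative assumptions; this is not the software used by the Delhi police, whose internal workings are not public.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Compare a probe embedding against every gallery embedding.

    Returns (name, score) for the highest-scoring gallery entry if it
    clears the threshold, or (None, score) when nothing is similar enough.
    """
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

# Toy example: 2-D vectors stand in for the high-dimensional
# embeddings a real face-recognition model would produce.
gallery = {
    "person_a": np.array([1.0, 0.0]),
    "person_b": np.array([0.0, 1.0]),
}
probe = np.array([0.9, 0.1])
match, score = best_match(probe, gallery)
```

Even in this toy form, the sketch shows where discretion enters: the threshold is a policy choice that trades false positives against false negatives, which is precisely why 'accuracy' alone cannot settle the debates discussed below.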
In this specific case, footage procured from CCTV and other media was fed into the software and analyzed. The data was matched against photographs from the e-Vahan vehicle registration database and data from the Election Commission. Through this technology, members of the Muslim minority living in Northeast Delhi were identified and arrested. Shivangi Narayan illustrated this with two cases. In the first, a bystander was officially accused of having 'the intention to harm'. Without evidence, the police decided on the suspect's intentions and charged him with murder and organizing riots; this was only possible after he had been identified by facial recognition software. In the second, a shop owner filmed rioters destroying his shop. In the tumult of the riots, someone was killed inside, while the owner himself stood outside filming the destruction. He is now in jail for murder, having been identified by facial recognition technology as present in the shop's vicinity.
Most critical research on facial recognition technology focuses on misidentification, accuracy problems, and misrepresentations in datasets. Yet in several individual cases, as demonstrated above, the software flawlessly identified people who were present at the riots. Accuracy alone, in other words, no longer captures the problem. Social scientists have long urged us to treat technical systems as "not just" technological: they are part of socio-technical systems embedded in fields of power. The Home Minister of India, who directly commands the Delhi police, publicly claimed that the software does not discriminate by gender, religion, or caste. This argument about the neutrality and objectivity of technology is often repeated by powerful actors and needs to be heavily criticized. Seeing what we know and knowing what we see are political acts and need to be addressed as such, especially in the use of visual technologies. These technologies are entangled in bureaucracy, in the social, and, in this specific case, in the political situation of the country. Following these events, the question of how technology intersects with power demands a more central place in current academic and public debate.
At this moment, facial recognition technology is a potent tool to marginalize the already marginalized. As the Indian state increasingly shows authoritarian characteristics backed by a Hindu nationalist ideology, this technology is used to overpolice its Muslim minorities. To study this phenomenon, we need to perceive facial recognition technology as an assemblage embedded within political and social power relations. Besides important research on the built-in biases of this technology, its embeddedness in a broader field of power must be included to grasp a more complete picture.
Shivangi Narayan uses The Lord of the Rings as a metaphor for the state of facial recognition in India. As Frodo crawls towards the fire of Mount Doom, the Ring of Power exerts a will of its own and dissuades him from his earlier intention to destroy it. Facial recognition technology is such a Ring of Power, with a push and pull of its own. Because it is biased by design and pushed by certain agendas through its interaction with the state, a Sam is needed to remind us where to find the entrance to the volcano before we turn around and leave the world in darkness. The question, then, is who or what is 'the Sam' of facial recognition, and does 'Sam' even exist at all?
This post was written by Lander Govaerts, PhD student at the VUB Chair in Surveillance Studies.
Das, M., De, O. & Kahn, Z. (March 17, 2022). The Long Shadow of the 2020 Delhi Riots. The Indian Express, https://indianexpress.com/article/opinion/columns/the-long-shadow-of-the-2020-northeast-delhi-riots-7821523/
George, C. (2016). Hate Spin: The Manufacture of Religious Offence and its Threat to Democracy – The Rise of Hindu Nationalism. MIT Press, https://thereader.mitpress.mit.edu/the-rise-of-hindu-nationalism/
Definition used by Parsheera, S. (December 5, 2019). Adoption and Regulation of Facial Recognition Technologies in India: Why and Why Not? Data Governance Network Working Paper 05, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3525324; inspired by Moosa, A. (2019). A comprehensive guide to facial recognition algorithms; Introna, L.D. & Nissenbaum, H. (2010). Facial recognition technology: A survey of policy and implementation issues.
Introna, L. & Murakami Wood, D. (2004). Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems. Surveillance & Society, 2(2/3), 177-198. https://www.researchgate.net/publication/47666221_Picturing_Algorithmic_Surveillance_The_Politics_of_Facial_Recognition_Systems
For assemblage theory, see Deleuze, G. & Guattari, F. (1988). A Thousand Plateaus: Capitalism and Schizophrenia. London: Athlone Press; and DeLanda, M. (2006). A New Philosophy of Society: Assemblage Theory and Social Complexity. London: Continuum.