Introduction
Policing has had, within its folds, the essential element of crime prevention since antiquity. This was not confined to deterrent effects emanating from deified morality or the fear of coarse and arbitrary punishment, but was in many cases linked to the ideas of surveillance and pre-emptive action as well.
Back in the days of Ashoka, surveillance is said to have been performed through an elaborate system of spies. Indeed, a police state. The information gathered was assessed, and probable criminals, rebels, insurgents and the like were promptly taken into custody.
Which brings us to an important question: what about new-age predictive policing? Is it any better? As appealing as it sounds, the fanfare behind the supposedly “objective” algorithmic prediction drawn from datasets, crime records and criminal profiles may not be truly fair and just. Centuries later, it is disturbing to ponder whether, in an era of lofty International Human Rights ideals, we still seek solutions that are no more refined and no less draconian.
Predictive Policing
Predictive policing today is essentially the use of algorithms to analyse massive amounts of information in order to predict and possibly prevent future crimes. Its theoretical justification comes from a few relevant and established theories in criminology, such as rational choice theory and crime pattern theory. Taken together as the so-called blended model, these propose that criminals and victims follow common and overlapping patterns of behaviour, and that the presence of certain traits indicates a higher likelihood of crime. Geographical and temporal features point towards when and where such crimes might occur; criminals, too, take many of these factors into account while contemplating crimes, to reduce the risk of getting caught. With legitimacy derived from these theories, data is collected to compute possible hotspots within places, communities, ethnic, racial and religious settlements and even households, so that pre-emptive action can be taken against crime.
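To make these mechanics concrete, consider a minimal illustrative sketch, in Python, of how a hotspot score might be computed from historical incident records. This is a toy model under assumed parameters (the grid size, decay rate and use of raw incident counts are all assumptions), not PredPol’s or any deployed system’s actual algorithm.

```python
from collections import defaultdict
from datetime import datetime

# Toy model for exposition only: not PredPol's or any deployed
# system's actual algorithm. The grid size, decay rate and the use
# of raw incident counts are all assumptions.

CELL_SIZE = 0.005        # roughly 500 m grid cells, in degrees (assumed)
DECAY_PER_DAY = 0.98     # older incidents weigh less (assumed)

def cell_of(lat, lon):
    """Map a coordinate onto a coarse spatial grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hotspot_scores(incidents, today):
    """Score each grid cell by recency-weighted incident counts.

    `incidents` is a list of (lat, lon, date) records, typically past
    arrests or reports. This is where bias enters: the score reflects
    where police looked, not where crime actually occurred.
    """
    scores = defaultdict(float)
    for lat, lon, date in incidents:
        age_days = (today - date).days
        scores[cell_of(lat, lon)] += DECAY_PER_DAY ** age_days
    return scores

# Patrols are then directed to the highest-scoring cells.
incidents = [(28.61, 77.21, datetime(2022, 1, 5)),
             (28.61, 77.21, datetime(2022, 2, 1)),
             (28.70, 77.10, datetime(2021, 6, 1))]
ranked = sorted(hotspot_scores(incidents, datetime(2022, 3, 1)).items(),
                key=lambda kv: -kv[1])
print(ranked[0])  # the cell flagged for pre-emptive attention
```

Note that nothing in such a model asks whether the underlying records are representative; it simply extrapolates from wherever incidents were previously logged.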
The most infamous examples of algorithm-driven predictive policing come from the city of Los Angeles and its police department, the LAPD. From LASER, a gun-violence likelihood predictor, to PredPol, a software-based assessment of crime “hotspots”, these Bureau of Justice Assistance-funded programs have been the subject of scathing political, ethical and even constitutional scrutiny and criticism. Similar is the case with the New York Police Department’s predictive policing software, started in 2012, which was meant to help identify several offences, from felony assaults and burglaries to shootings and motor vehicle accidents. The Chicago Police Department has gone several steps further and drawn up a “heat list” of the people considered most likely to commit gun violence.
Faultlines
Behind this theorised marvel lies a tale of disastrous biases and perilous consequences. Most of these methods were not just ineffective but have also been criticised for targeting communities of colour. One of several reports on the subject showed that the predictor relied heavily on previous arrest records, not even limited to arrests that ended in convictions. This meant that all the biases that went into the original arrest of a person were further magnified by being repeatedly fed into the software; the algorithms then keep marking the same people, communities and areas as high risk, and the chain continues. This feeds into age-old evils as we know them, including systemic racism and selective oppression.
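The feedback loop described above can be illustrated with a toy simulation (all numbers here are assumed): two areas with identical true crime rates, where one starts with more recorded arrests owing to historical over-policing. Since predicted risk is computed from past arrests, patrols follow the record, and the record follows the patrols.

```python
import random

# Toy simulation of the feedback loop, with all numbers assumed:
# areas A and B have identical true crime rates, but A starts with
# more recorded arrests owing to historical over-policing.

random.seed(0)
TRUE_CRIME_RATE = 0.3            # the same in both areas (assumed)
arrests = {"A": 20, "B": 10}     # the biased historical record (assumed)

for year in range(10):
    snapshot = dict(arrests)
    total = sum(snapshot.values())
    for area, past in snapshot.items():
        # "Risk" is computed from past arrests, so patrols follow them.
        patrol_share = past / total
        # An arrest needs both a crime and an officer present to record
        # it, so new arrests track patrols, not underlying crime.
        new_arrests = sum(1 for _ in range(100)
                          if random.random() < TRUE_CRIME_RATE * patrol_share)
        arrests[area] += new_arrests

# The recorded gap between A and B keeps widening each year, even
# though the two areas are, by construction, equally crime-prone.
print(arrests)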
Additionally, the documentation of these algorithms and of the data fed into them was (and still is) questionable. Audit records are not maintained systematically, which renders checks and balances by civil society a distant dream. More alarmingly, studies reveal that the very collection of these datasets is at times corrupt and, more often than not, unethical in its means. As one study of “dirty data” in policing put it: “Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed or unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system.”
Indian Snapshot
The Indian stage, if anything, is set against an even more dilapidated background drenched in biases: a police force influenced by casteism and communalism, and an even larger society validating these prejudices. Lower castes and religious minorities are known to have been at the receiving end of constant systemic discrimination and violations of fundamental rights. Feeding humongous datasets that scream discrimination into unguarded algorithms will only perpetuate these horrendous practices and incriminate ever larger swaths of innocents from minority communities. Delhi’s Crime Mapping Analytics and Predictive System (CMAPS) and the Integrated People Information Hub software employed by the Hyderabad Police gather data from sources as diverse as the Indian Space Research Organisation, bank transactions, family backgrounds, biometrics and even passport details.
The Integrated People Information Hub, which claimed to offer a ‘360 degree view’ of every citizen, is an intrusive database that stores not just information on arrested persons, offender lists, missing person reports, FIRs, case diaries and the like, but also personal details such as phone numbers, power and water connections, tax payments, voter IDs, RTA licences and relationship profiles. In short, this database collects and stores all such data as an individual would ordinarily prefer to shield from public disclosure.
It is also pertinent in this context to invoke the Principle of Least Privilege, which holds that for any access to or use of private information, (i) there is a prerequisite of a legitimate process coupled with a legitimate purpose, and (ii) only such minimal data as is absolutely necessary may be accessed and utilised. In the case at hand, the Telangana government intends to open this information to all departments without sound safeguards; the scheme thus suffers from acute incongruence with these principles, as the sketch below illustrates.
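By way of illustration, here is a hypothetical sketch of what least-privilege access to such a database could look like; the purpose table and field names are invented for exposition, not drawn from any actual system. Each query must state a legitimate purpose, and only the minimal fields that purpose justifies are released.

```python
# Hypothetical sketch of least-privilege access to such a database;
# the purpose table and field names are invented for illustration.

ALLOWED = {
    # legitimate purpose -> the minimal fields it justifies (assumed)
    "verify_identity": {"name", "voter_id"},
    "locate_missing_person": {"name", "last_known_address"},
}

def fetch_citizen_data(record, purpose, requested_fields):
    """Release only those fields that the stated purpose justifies."""
    if purpose not in ALLOWED:
        raise PermissionError(f"no legitimate purpose: {purpose}")
    permitted = ALLOWED[purpose] & set(requested_fields)
    return {field: record[field] for field in permitted if field in record}

record = {"name": "A. Citizen", "voter_id": "XYZ123",
          "bank_txns": "...", "relationship_profile": "..."}
# Even a blanket request yields only the minimal permitted fields:
print(fetch_citizen_data(record, "verify_identity", record.keys()))
```

A database thrown open to every department, offering a ‘360 degree view’ by design, is the negation of both limbs of this principle.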
Making matters worse, it is unsettling that this systemic bias remains glaring even after judicial scrutiny. Punishments adjudged by the courts are disproportionately severe, and rather discriminatory, where the marginalised are concerned. Unsurprisingly, and much to our dismay, the same biases that predictive systems displayed in the United States’ NYPD and LAPD speckle the Indian terrain.
Further, the newly tabled Criminal Procedure (Identification) Bill, 2022 cannot be overlooked; the proposed legislation intends to collect fingerprints, palm prints, physical and biological samples, and even signatures and handwriting, of ‘prisoners’, defined therein as ‘any person arrested under any law’. The Bill’s draconian ramifications for the various facets of liberty and privacy remain to be observed; however, explicit legislative intentions such as “establishing the crime of the accused”, read along with a clause that pegs the data retention period at 75 years, make fearful scepticism about its implications hard to avoid.
Conclusion
The case here is one of conflict between (a) the fortunate possibility of predicting crime and thereby preventing it; and (b) an Orwellian, blatant invasion of privacy, compounded by aggravated incrimination of, and discrimination against, historically oppressed sections of society. Harmoniously reaping technological benefits while discarding biases would be the ideal way forward, and reforms will have to come from numerous directions. Legislatively, civil society can push for laws that introduce algorithmic accountability, of the kind seen in New York City’s Local Law 49. Tech developers can introduce mechanisms such as step-by-step algorithm auditing, of the sort sketched below, to scrupulously monitor such code and ascertain fairness. Citizens must also challenge the existing predictive systems with legal rigour, invoking the numerous precedents that promise privacy and calling for neutrality in policies and laws that protect against discrimination.
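As one concrete example of what such an audit could check, consider a small sketch that compares the rate at which a system flags members of different communities. The data is invented, and the four-fifths threshold is borrowed from disparate-impact analysis in US employment law, not mandated by Local Law 49.

```python
# A sketch of one check an algorithm audit could run: the rate at
# which the system flags members of each community, and the ratio
# between them. The data is invented; the four-fifths threshold is
# borrowed from disparate-impact analysis in US employment law.

def flag_rates(records):
    """records: a list of (group, flagged) pairs from an audit log."""
    counts, flags = {}, {}
    for group, flagged in records:
        counts[group] = counts.get(group, 0) + 1
        flags[group] = flags.get(group, 0) + int(flagged)
    return {g: flags[g] / counts[g] for g in counts}

def disparate_impact_ratio(records):
    """Lowest flag rate divided by the highest; values below 0.8
    fail the commonly used four-fifths rule."""
    rates = flag_rates(records)
    return min(rates.values()) / max(rates.values())

audit_log = ([("majority", False)] * 90 + [("majority", True)] * 10
             + [("minority", False)] * 70 + [("minority", True)] * 30)
print(disparate_impact_ratio(audit_log))  # 0.33, well below 0.8
```

Checks of this kind are only possible, of course, if audit records are systematically maintained in the first place.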
Little need be said about how these methods of mass surveillance are prima facie infringements of the fundamental rights embodied in the Indian Constitution. Law and order practices that disregard privacy, if left unchallenged, will pave the way for the arbitrary and unbridled demolition of the individual’s privacy and liberties.