Justice and Due Process

The Potential Perils and Promise of Predictive Policing


Researchers at the University of Chicago claim that a new algorithm can reliably predict patterns in violent crime and property crime, as well as police responses, a week or more in advance. The algorithm was tested in Chicago, Philadelphia, San Francisco, Austin, Los Angeles, Detroit, and Atlanta and, according to its creators, produced highly accurate results. 

Predictive modeling could be a valuable tool to help law enforcement objectively and fairly allocate resources, deter and prevent crime, and focus on property crimes and crimes of violence rather than drug offenses.

However, like all powerful tools, it carries potential risks.

Police could come to view individuals as suspicious simply because they are present in a predicted high-crime area, rather than because of any individualized suspicion. 

And, as with any model, the predictions are only as robust and accurate as the data the model is given. The algorithm relies on reported crime data to make its predictions; if those reports are inaccurate, the resulting outputs will be inaccurate as well. In areas where crime is underreported, such modeling could create a feedback loop that reinforces patterns of inadequate policing. 
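To make the feedback-loop concern concrete, here is a minimal, hypothetical Python sketch. Everything in it is assumed for illustration: the two neighborhoods, the identical true crime levels, the reporting rates, and the rule that patrols go wherever reported crime is highest. It is not the University of Chicago model.

```python
import random

# Hypothetical toy model: both neighborhoods have the SAME true crime level,
# but crime in neighborhood B is reported only half as often as in A.
TRUE_WEEKLY_INCIDENTS = {"A": 10, "B": 10}   # assumed actual incidents/week
BASE_REPORT_RATE = {"A": 0.9, "B": 0.45}     # assumed reporting probabilities

reported_totals = {"A": 0, "B": 0}
random.seed(0)

for week in range(52):
    # Stand-in "predictor": send the patrol wherever reported crime is highest.
    patrol_target = max(reported_totals, key=reported_totals.get)

    for hood, incidents in TRUE_WEEKLY_INCIDENTS.items():
        # Patrol presence makes reporting slightly more likely; absence, less.
        rate = BASE_REPORT_RATE[hood] * (1.1 if hood == patrol_target else 0.9)
        reported_totals[hood] += sum(
            random.random() < min(rate, 1.0) for _ in range(incidents)
        )

print(reported_totals)
# Despite identical true crime, A accumulates roughly twice B's reported
# total, so the "predictor" keeps sending patrols to A and B's
# underreporting is never corrected.
```

Even this crude sketch shows the core problem: the model never observes true crime, only reports, so it has no way to discover that its own allocation decisions are suppressing the data it learns from.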

Police departments have not yet contracted with companies to use this algorithm, but there are dangers associated with overreliance on machine learning. An algorithm may be useful in certain narrow situations, such as determining how to allocate a finite number of patrol cars, but ill-suited to determining how crime patterns may be affected by use of the algorithm itself. If enforcement patterns change, for instance, crime that normally occurs on one block may simply shift to the next block over, resulting in no net change in the total amount of crime committed.
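The displacement point can be sketched in a few lines as well, again with entirely assumed numbers: each week the busiest block is patrolled, and every deterred incident is assumed to relocate to the adjacent block.

```python
# Hypothetical displacement sketch: patrolling the "hottest" block deters
# crime there, but all of it is assumed to reappear on the next block over.
blocks = [8, 3, 3, 2]   # assumed weekly incidents on four adjacent blocks

for week in range(4):
    hot = blocks.index(max(blocks))   # patrol the busiest block
    displaced = blocks[hot]           # assume full displacement
    blocks[hot] = 0
    blocks[(hot + 1) % len(blocks)] += displaced
    print(f"week {week}: blocks={blocks}, total={sum(blocks)}")

# The patrolled block changes each week, but the citywide total never does:
# the algorithm reallocates crime rather than reducing it.
```

Under these assumptions the model looks successful block by block while accomplishing nothing citywide, which is exactly the kind of effect an agency cannot detect by measuring the algorithm's predictions alone.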

Additionally, once a company contracts with a law enforcement agency, monetary incentives become part of the equation. Algorithms may be built out to serve purposes unforeseen at the time of contract signing. Once a financial tie exists, companies are incentivized to maintain a strong relationship with the agency, which could compromise the independence of future algorithmic research. In short, whether these algorithms are used effectively is tightly tied to the good faith of the law enforcement agency itself; a poorly managed agency is likely to yield a poorly built and poorly applied algorithm.

Given the already troubling trend of ignoring individual privacy rights in favor of investigative ease, privacy advocates have reason to worry that machine learning could be used to further this trend. 

These types of algorithms may prove valuable in helping law enforcement better allocate limited enforcement resources to the communities they serve. It is crucial, however, that law enforcement remain mindful that such tools supplement, rather than replace, sound human judgment, and that every tool has limitations.

This article was co-written by Leslie Corbly, a Privacy Policy Analyst at Libertas Institute.