
Episode 2: Conversation with Simon Egbert and Matthias Leese on Criminal Futures: Predictive Policing and Everyday Police Work

1:24:16
 
Joining me today are Simon Egbert, Postdoctoral Fellow at Bielefeld University working on the ERC research project The Future of Prediction, and Matthias Leese, Senior Researcher at the Center for Security Studies (CSS) in Zürich, to discuss their recent book Criminal Futures: Predictive Policing and Everyday Police Work, published by Routledge in 2021. The book is available to download open access here.

Today we discuss predictive policing and the ways in which it is transforming police work. Police departments across the globe are embracing algorithmic techniques to support decision-making through risk assessments and predictions based on big data and real-time analytics, utilizing tools such as facial recognition. Silicon Valley’s ‘technological solutionism’, to use Evgeny Morozov’s concept, has been making its way into law enforcement agencies worldwide, promising to smoothly, efficiently and effortlessly anticipate, predict, and control (future) criminal behaviour and deviance.

But predictive policing has met with resistance from civil society and academics alike. Even though data-driven predictions and algorithmic risk assessments are sold by tech developers as ‘neutral’ and ‘objective’ forms of ‘evidence’ and ‘intelligence’ – precisely because they are technological – as something ‘solid’ and ‘hard’ in ‘liquid times’, critical social scientists tend to know better. What counts as data and how it is collected, what is included and what is excluded – all of this reflects historical, representational, cultural, gendered, and other inequalities and biases. Prejudices about the criminality of certain groups can be built into crime data, reinforcing those prejudices rather than dispelling them. We increasingly read about systems trained on biased and ‘dirty’ data, about ‘rogue algorithms’, ‘algorithmic injustice’, and violations of human rights and civil liberties. As Cathy O’Neil put it, algorithms can create ‘a pernicious feedback loop’, where ‘policing itself spawns new data, which justifies more policing’ (O’Neil 2016: 87). Last year, acting on these insights, the city of Santa Cruz in California, one of the earliest adopters of predictive policing, became the first US city to ban the use of predictive technologies in policing. Calls for ethical, transparent and explainable AI are emerging from within computer science, law and the social sciences, as well as from policymakers and civil society.

It is clear that the development and adoption of these technologies do not happen in a cultural, political or economic vacuum. In many countries, for instance, police forces are experiencing budget cuts and growing pressure to outsource certain tasks to private actors, often accompanied by organizational reform. Demands on response time, results, performance, and efficiency are increasing while resources may be shrinking, structurally creating a market for a wide range of optimization tools for police work. Simon Egbert and Matthias Leese have studied predictive policing, the datafication of security and the transformation of police work ethnographically in Germany and Switzerland. In this podcast, we discuss in detail the reality behind the sleek commercials for predictive policing software tools that promise to forecast crime and control futures. Are we headed towards a dystopian society of total surveillance, social sorting, and control, or a utopia of a perfectly optimized police force? What futures lie ahead for predictive policing, and what will the police force of the future look like?
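To make O’Neil’s feedback loop concrete, here is a minimal toy simulation in Python – my own illustration with hypothetical numbers, not any system discussed in the episode. Two districts are assumed to have identical underlying offence rates, but one starts out over-represented in the historical records, so it attracts more patrols, which record more incidents, which in turn justify more patrols.

```python
# Toy illustration (hypothetical numbers) of O'Neil's 'pernicious
# feedback loop': patrols go where the records say crime is, patrols
# generate more records, and the new records justify more patrols.
import random

random.seed(0)

# District A starts over-represented in the data only because it was
# patrolled more heavily in the past; the true offence rate is assumed
# identical in both districts.
recorded = {"A": 120, "B": 100}
TRUE_RATE = 0.5        # chance a patrol observation yields an incident
OBS_PER_PATROL = 5     # observation opportunities per patrol

for year in range(1, 6):
    hot = max(recorded, key=recorded.get)  # the 'predicted' hotspot
    for district in recorded:
        patrols = 10 if district == hot else 2
        # More patrols mean more *recorded* incidents, regardless of
        # any real difference between the districts.
        recorded[district] += sum(
            random.random() < TRUE_RATE
            for _ in range(patrols * OBS_PER_PATROL)
        )
    print(f"year {year}: recorded incidents = {recorded}")
```

Running the sketch, the recorded gap between the districts widens every year even though nothing about actual behaviour differs – the ‘prediction’ simply reproduces its own input.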

Text © Tereza Østbø Kuldova, 2021

Produced with the financial support of The Research Council of Norway under project no. 313626 – Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa (AGOPOL).
