Part 3

Chapter 10

Risk Scores and Redlining

This chapter argues that algorithmic welfare systems do not replace historical patterns of racial and spatial exclusion but encode and automate them — the risk score is redlining made computable.

Drafting

Synopsis

The automated welfare systems of the early twenty-first century are not a technological break from the traditions this book has traced. They are their densest crystallisation. The eligibility thresholds are Orshansky’s matrices stripped of material grounding and re-encoded as decision-tree classifiers. The fraud-risk scores are RAND’s objective functions running in real time on claimant data. The conditionality algorithms are the Skinnerian feedback loop of Chapter 5, automated at scale. And the whole infrastructure was funded, designed, and in many cases operated by the VC-backed companies of Chapter 9. What is new is not the logic but the speed, the opacity, and the near-impossibility of appeal.

The case studies bear this out. Indiana contracted IBM to automate its welfare eligibility system in 2006 and produced 1.16 million wrongful denials in three years. Michigan’s MIDAS system generated 40,000 false fraud accusations in 18 months. Australia’s Robodebt scheme ran an illegal income-averaging algorithm for three years, producing 470,000 false debt notices and at least two directly linked suicides, before being ruled unlawful. Universal Credit’s standard allowances are not derived from any assessment of what households need; they are set by fiscal targets and enforced by a digital compliance architecture that the claimant faces without a caseworker who knows their circumstances. Virginia Eubanks called the resulting system the “digital poorhouse.” This chapter argues that her term is precise: the poorhouse is not a metaphor but a description of the operating logic, now executed at machine speed and behind machine opacity.
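The income-averaging flaw at the heart of Robodebt is simple enough to sketch. The following is a minimal illustration, not the actual Centrelink code: the income-free area, taper rate, and all dollar figures are invented for the example. The point it demonstrates is structural: smearing annual income evenly across 26 fortnights manufactures a “debt” for anyone whose earnings were lumpy, even if every fortnightly report was accurate.

```python
# Illustrative sketch of the Robodebt income-averaging flaw (hypothetical
# parameters throughout, not the real payment rules). Annual income from
# tax records is divided evenly across 26 fortnights and compared with
# the income the claimant actually reported fortnight by fortnight.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """The flawed assumption: income was earned evenly across the year."""
    return annual_income / FORTNIGHTS_PER_YEAR

def false_debt(annual_income: float,
               reported: list[float],
               income_free_area: float = 300.0,   # invented figure
               taper: float = 0.5) -> float:       # invented figure
    """Recompute the benefit reduction under averaged income and treat any
    difference from the reduction under actual income as a 'debt'."""
    avg = averaged_fortnightly_income(annual_income)
    debt = 0.0
    for actual in reported:
        # Benefit reduction given what the claimant actually earned...
        actual_reduction = max(0.0, actual - income_free_area) * taper
        # ...versus given the (wrong) averaged figure.
        averaged_reduction = max(0.0, avg - income_free_area) * taper
        debt += max(0.0, averaged_reduction - actual_reduction)
    return round(debt, 2)

# A claimant who worked half the year ($1,200/fortnight for 13 fortnights)
# and claimed benefits only in the 13 fortnights they earned nothing:
reported = [0.0] * 13      # the fortnights spent on benefits
annual = 1_200.0 * 13      # income earned entirely outside the claim period
print(false_debt(annual, reported))  # prints 1950.0
```

A claimant with perfectly even income generates no debt under the same calculation; the error falls entirely on seasonal, casual, and precarious workers, which is why the scheme’s victims were concentrated where they were.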

In This Chapter

  • How Orshansky’s 124 household matrices became Universal Credit’s standard allowances: the genealogy from material grounding to fiscal target, and what is lost at each step of that conversion
  • How the Indiana IBM disaster, Michigan’s MIDAS system, and Australian Robodebt each demonstrate the same structural failure: a classifier trained on historically biased investigation records reproduces prior discrimination as an apparently objective probability score
  • How the Robodebt income-averaging algorithm shows how mathematical precision can lend legally unjustifiable outcomes an air of objectivity that lets them survive unchallenged for years
  • How Universal Credit’s claimant commitment and sanctions architecture automates the conditionality logic of Chapters 5 and 6, with compliance scoring that cannot be contested by the person it governs
  • How Noble’s “algorithms of oppression” and Benjamin’s “New Jim Code” name the mechanism by which racialised historical data is laundered through algorithmic processing and returned as objective assessment

Connection Forward

Chapter 11 examines Palantir Technologies — the data-integration layer that joins welfare, policing, healthcare, and immigration enforcement into a single mathematical space — as the most complete realisation of the systems described in this chapter once their data flows are connected.

Key Claims