There is a number attached to your life. You may not know it, but the state does. It has been calculated from your postcode, your payment history, your medical records, your compliance with conditions you may not have understood, your bank transactions from the last three years, the number of children you have and when you had them. The number is not fixed. It updates in real time. It determines whether you receive help, whether you are investigated, whether the safety net opens or closes beneath you. Nobody elected the people who designed it. Nobody can appeal to it directly. In most cases, you are not even told it exists.
This book is about how we got here. It is about the long and largely hidden history of mathematics as an instrument of state power over the poor — a history that runs from Victorian drawing rooms to Silicon Valley server farms, from the skull measurements of eugenicists to the neural networks of contemporary welfare administration. It is a history of counting, and of what counting costs.
But it is also a story with a counterpoint. Because running alongside that history of mathematical domination is another, quieter history: of people who used numbers differently. Who counted carefully, from the ground up, in order to make suffering visible rather than to manage it away. Who understood that how you measure need is a political act — and that getting it right is a form of justice.
The State’s Mathematical Gaze
A state cannot govern what it cannot see. And it cannot see what it cannot count.
This is not a modern insight. The census, the tax roll, the parish register — states have always needed to quantify their populations to administer them, to tax them, to conscript them, to feed them in crisis. But from the mid-nineteenth century onward, something changed. Mathematics became not just an administrative convenience but a legitimating language — the idiom through which political choices about who deserves what could be expressed as scientific facts about how society actually works.
The sociologist Theodore Porter has called this “trust in numbers”: the way that quantification acquires authority precisely because it appears to be impersonal, objective, beyond the reach of political contestation. When a government sets a benefits threshold, it is making a choice about what constitutes an acceptable minimum life. When it deploys a fraud-detection algorithm, it is making a choice about which populations are presumptively dishonest. But if these choices are expressed mathematically — as poverty lines, risk scores, compliance indices — they cease to look like choices at all. They look like facts. And facts, unlike policies, cannot easily be argued with.
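The point can be made concrete with a deliberately tiny sketch (all figures hypothetical, not drawn from any real benefits system): a poverty line is a single parameter chosen by someone, yet once fixed it reads like a neutral count of the poor.

```python
# Toy illustration (hypothetical figures): a "poverty line" is a policy
# parameter, but once fixed it presents itself as a neutral fact.
weekly_incomes = [95, 110, 128, 140, 152, 170, 210]  # hypothetical households

def count_poor(incomes, poverty_line):
    """Count households falling below the chosen threshold."""
    return sum(income < poverty_line for income in incomes)

# The same population, two different political choices:
print(count_poor(weekly_incomes, 130))  # 3 households "in poverty"
print(count_poor(weekly_incomes, 155))  # 5 households "in poverty"
```

Nothing in the second line of output announces that a choice was made; the number simply changes, and with it who is seen, helped, or investigated.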
The philosopher Ian Hacking described something related when he wrote about “making up people”: the way that statistical categories — the pauper, the delinquent, the at-risk individual — do not simply describe pre-existing populations but actively produce them. Once the state has a category, it can count the people in it. Once it can count them, it can administer them. Once it can administer them, the category becomes real in ways that shape not just policy but identity, self-understanding, social life. The mathematical gaze does not just see the world; it makes it.
This book traces how that power was constructed, deployed, resisted, and — in our own moment — accelerated beyond anything its Victorian architects could have imagined.
The Five Transitions
The argument runs through five transitions, each of which is both a change in mathematical technique and a change in the state’s relationship to the poor.
The first transition is the invention of social statistics in the mid-nineteenth century. Francis Galton, Karl Pearson, and their collaborators built the foundational toolkit of modern statistics — correlation, regression, the normal distribution — to answer questions about hereditary hierarchy. They wanted to know whether talent was inherited, whether the “unfit” were breeding too fast, whether empire could be managed through the selective cultivation of populations. The answers they produced were political in origin and eugenic in implication, but the tools themselves were laundered into scientific neutrality and handed, largely unchanged, to the welfare economists and poverty researchers of the twentieth century. The mathematics we use to count the poor was built to classify them.
The second transition is the post-Second World War social insurance consensus: the moment when several European and settler states decided that mass poverty was no longer an inevitable feature of industrial society but a technical problem to be managed. In Britain, the Beveridge Report of 1942 proposed a system of “social insurance for all from the cradle to the grave,” built on actuarial calculations of risk, with flat-rate contributions and flat-rate benefits. Across these cases, the poor ceased to be primarily an object of charity or moral concern and became a category inside large-scale mathematical models of society.
The third transition is the computational turn: from the post-war think tanks through the spread of databases, means-testing software, and management information systems in welfare administration. This is where the opacity began. As welfare states expanded and then, from the 1970s onward, contracted, the administrative machinery that determined eligibility became increasingly computerised and increasingly invisible to the people it governed. The state’s mathematical gaze was no longer mediated by a caseworker who might know your face — it was processed by systems that reduced your circumstances to variables and your needs to flags.
The fourth transition is the ideological project of the late twentieth and early twenty-first centuries: the deliberate construction, by a network of venture capitalists, libertarian intellectuals, and effective altruists, of a philosophical framework in which mathematical governance is not just expedient but morally correct. In this worldview, the market algorithm is simply better at allocating resources than democratic deliberation; the population that cannot compete is not oppressed but simply optimised away; and the state’s role is to step aside and let the numbers decide. This is not a neutral technical position. It is Victorian eugenics in a hoodie.
The fifth transition is the one happening now: the deployment of machine learning — systems that learn correlations from historical data without being explicitly programmed — in the core functions of the welfare state, the justice system, and the security apparatus. These systems inherit all the biases of the data they are trained on, which is to say all the biases of every prior act of mathematical governance. They operate at a scale and speed that makes human oversight nearly impossible. And crucially, they produce outputs without producing reasons — scores without explanations, decisions without legible logic. The state has acquired a mathematical gaze so powerful it can no longer see itself.
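How a learned score can inherit bias while offering no reasons can be shown in miniature. The sketch below uses synthetic data and a toy scoring rule — it describes no real system — but the mechanism is the one at issue: a model "trained" on historically skewed investigation records hands the skew back as an authoritative-looking number.

```python
# Toy sketch (synthetic data, not any real system): a scoring rule
# "learned" from historical investigation records reproduces the bias
# that was baked into those records.
historical_cases = [
    # (group, was_flagged) -- group A was historically over-investigated
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_risk_scores(cases):
    """'Train' by memorising each group's historical flag rate."""
    totals, flags = {}, {}
    for group, flagged in cases:
        totals[group] = totals.get(group, 0) + 1
        flags[group] = flags.get(group, 0) + flagged
    return {group: flags[group] / totals[group] for group in totals}

scores = learn_risk_scores(historical_cases)
print(scores)  # {'A': 0.75, 'B': 0.25} -- the old skew, returned as a "score"
# Note what is absent: no reason a claimant could read, contest, or appeal.
```

Real systems are vastly more complex, but the complexity compounds rather than cures the problem: more variables, more historical data, and less legible logic.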
The Arithmetic of Power
Running through all five transitions is a single underlying mechanism, which this book calls the arithmetic of power.
The arithmetic of power describes the process by which mathematical representation becomes a form of governance: first, a population is counted and categorised; then the categories encode political judgements about normality, risk, and desert; next, those judgements are legitimated by the appearance of objectivity; and finally, accountability is progressively evacuated as the machinery becomes more complex, more distributed, and more opaque. At each stage, the people being counted have less access to the mathematics that governs them, and the people controlling the mathematics have more.
This is not a conspiracy. No single person designed the system. It is the result of thousands of technical and political decisions, each of which seemed locally reasonable, accumulating into an architecture of exclusion. Understanding it requires tracing the history — which is Part I of this book — and understanding the ideology — which is Part II — before we can confront the machine itself, which is Part III.
Why Now
The stakes are not abstract. In April 2026, the UK’s Department for Work and Pensions began requiring banks to share real-time transaction data on millions of benefit claimants, feeding an AI risk-scoring system whose own fairness analysis acknowledges that it disproportionately flags disabled people, single parents, and ethnic minorities. In the United States, more than twenty states deploy machine learning in unemployment and Medicaid administration, building on a lineage of fraud-detection systems that in Michigan produced a ninety-three percent false positive rate and drove innocent families to eviction and, in some cases, to suicide.
At the same time, the tools of algorithmic governance are converging with those of policing and border control. Welfare data feeds risk models; risk models generate investigative targets; investigative targets feed back into welfare eligibility assessments. The arithmetic of power is becoming a closed loop.
The Counterpoint
But this book is not a counsel of despair.
Because at every stage in this history, there have been people who understood that the politics of counting runs in both directions. Mollie Orshansky insisted on the grocery list when economists wanted abstractions. The home economists and settlement house workers of the Progressive era put bodies and calories and rent prices into their surveys when social Darwinists wanted bell curves. Community data projects from Detroit to Barcelona to Kerala are today building participatory measurement systems that make need visible on communities’ own terms. Legal advocates are fighting, case by case, for the right to explanation — the principle that anyone affected by an algorithmic decision has the right to understand the reasoning behind it.
Democratic politics requires democratic mathematics. The struggle over who controls the arithmetic of power is not separate from the struggle for justice, welfare, and representation. It is the same struggle, conducted in a different register.
A Note on Method
This book is not a technical manual, though it does not shy away from technical explanation. Each time a mathematical concept does significant work in the argument — the normal distribution, correlation and regression, the threshold, the black box — it is explained in plain terms in a short section called a Mathematical Interlude, placed at the transition between parts. Readers who want to go deeper will find references in the notes; readers who are happy to take the argument on trust can skip those sections without losing the thread.
The book draws on archival research, investigative journalism, Freedom of Information requests, published technical documentation, and interviews with claimants, caseworkers, researchers, and policy insiders.
Mollie Orshansky kept her notebooks in a kitchen drawer. She believed that the measure of a poverty line was whether it reflected what real people actually needed to survive. That standard — transparent, contestable, grounded in material reality — is the one this book applies, as best it can, to the systems that now govern millions of lives.