Synopsis
In March 1946, a group of men who had spent the previous six years solving problems of killing reconvened in a conference room in New York to discuss problems of living. The first Macy Conference on Feedback Mechanisms and Circular Causal Systems launched what would become ten meetings across seven years: the institutional genesis of cybernetics. The central question was whether the same formal framework that described an anti-aircraft gun tracking an evading aircraft could also describe a brain, a family, an economy, or a government. The answer the participants arrived at was yes. And that answer, translated from the conference room into social science, economics, and social policy, reframed what a welfare model was for. Where the question had been “what do these people need?” — Orshansky’s question, Rowntree’s question, Tillmon’s question — it became “how does the system correct its errors?” The poor were no longer persons with material requirements; they were a deviation from a target state, noise in a feedback loop, a variable to be brought back to equilibrium. Norbert Wiener warned against exactly this, in 1947 and at length in 1950. He was heard and not acted on. This chapter traces the transfer.
1. From the Bomber to the Boardroom
Wiener volunteered for war research in autumn 1940, and the problem he was given is the chapter’s origin story. Given a radar return from an enemy aircraft flying an evasive course, and given that a shell would take up to twenty seconds to reach the target, design a system that predicts where the aircraft will be when the shell arrives. The difficulty was that the radar return was noisy and the pilot was taking evasive action — following a trajectory no formula could predict in advance because it was the product of human intention under stress. Wiener’s solution was to model the aircraft’s trajectory as a stationary stochastic process and derive a linear filter that produced the minimum mean-squared-error estimate of the trajectory’s future value. The filter required an assumption that would prove consequential far beyond the weapons laboratory: the enemy pilot must be treated, for the purposes of the system, as a stochastic process. Not as a person deliberating under conditions of uncertainty, but as a mechanism characterised entirely by the statistical properties of its historical trajectory.
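The structure of that solution can be made concrete. What follows is a minimal discrete-time sketch, not Wiener’s continuous-time derivation: it assumes an AR(1) trajectory observed in noise, a finite filter window, and sample statistics in place of known correlation functions, all of which are illustrative choices. The essential move survives the simplification: the pilot enters the mathematics only as the correlation structure of a signal.

```python
import numpy as np

# Minimal sketch: linear MMSE prediction of a stationary process from
# noisy observations, via the normal (Wiener-Hopf) equations. The AR(1)
# trajectory and all parameters are assumptions for illustration.
rng = np.random.default_rng(0)
N, taps, lead = 20000, 8, 5            # samples, filter length, horizon

x = np.zeros(N)                         # the "evasive course" (hidden)
for t in range(1, N):
    x[t] = 0.95 * x[t - 1] + rng.normal(scale=0.3)
y = x + rng.normal(scale=0.2, size=N)   # the noisy radar return

def corr(a, b, lag):
    """Sample estimate of E[a[t] * b[t - lag]] for lag >= 0."""
    return float(np.mean(a[lag:] * b[:len(b) - lag]))

# Solve R w = r, where R[i, j] = E[y[t-i] y[t-j]] and
# r[i] = E[x[t+lead] y[t-i]]: the filter sees only statistics.
R = np.array([[corr(y, y, abs(i - j)) for j in range(taps)] for i in range(taps)])
r = np.array([corr(x, y, lead + i) for i in range(taps)])
w = np.linalg.solve(R, r)

t = 15000                               # predict the future from the past
prediction = w @ y[t - np.arange(taps)]
print(f"predicted x[t+{lead}] = {prediction:.3f}, actual = {x[t + lead]:.3f}")
```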
Peter Galison’s analysis of the intellectual origins of cybernetics, in his 1994 essay “The Ontology of the Enemy,” identifies precisely what was happening. To build the anti-aircraft predictor, Wiener was required to treat the enemy pilot as a servomechanism — a feedback-controlled device characterised entirely by its input-output transfer function — because that was the only kind of entity for which stochastic prediction of the relevant form was possible. The black-box method — treat the system as defined by its observable inputs and outputs, without reference to whatever is happening inside — was not a philosophical choice. It was a technical necessity imposed by the absence of access to the pilot’s interior. The technical necessity became, via the Macy Conferences, a general theory of mind. And the general theory of mind became a framework for governing the poor.
Claude Shannon’s 1948 mathematical theory of communication defined information as the reduction of uncertainty in a message source — and explicitly set aside the question of what messages meant. The semantic aspects of communication were “irrelevant” to the engineering problem. Shannon’s bracket was not innocent: by defining information as pattern rather than presence, separable from any material substrate, it produced a framework that applied to anything because it was about nothing in particular. At the ninth Macy Conference in 1952, the British physicist Donald MacKay pressed the objection that meaning could not be separated from the observer who received it — that the same statistical signal could produce entirely different effects in receivers with different knowledge and circumstances. MacKay’s argument was heard and left unresolved. The framework that flourished was Shannon’s.
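The bracket is easy to state formally. A short sketch with invented distributions: entropy is computed from symbol probabilities alone, so two sources with identical statistics carry identical information whatever their messages are about, which is precisely the property MacKay objected to.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical sources: four equiprobable symbols yield 2 bits per
# symbol whether they encode weather reports or eviction notices.
# The distribution is all the theory sees.
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits: a skewed source
```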
2. Wiener’s Warning and Its Suppression
In January 1947, Wiener published “A Scientist Rebels” in the Atlantic Monthly: a public refusal to accept any further military contracts, and an explanation of why. The mathematical tools he had built for anti-aircraft prediction could be turned against civilian workers as readily as against enemy aircraft. The Human Use of Human Beings (1950) was the systematic version of the warning. Its argument was specific: the application of cybernetic feedback logic to the management of human labour was not a neutral technique that might be handled responsibly but was inherently a form of domination. A cybernetic control system applied to a factory measures worker outputs against a target, identifies deviations as error signals, and adjusts inputs — wages, penalties, conditions, information — to bring behaviour back toward the target. The worker occupies the position the enemy pilot occupied in the predictor: an entity whose future behaviour is to be estimated from past performance and whose deviation is to be corrected. What the worker needs — adequate pay, rest, dignity, security — does not appear in the system’s model, any more than the enemy pilot’s fear appeared in Wiener’s radar equations.
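The loop the warning describes has a standard form. The sketch below is schematic: the target, the gain, and the response function are invented, and the single “pressure” variable stands in for wages, penalties, conditions, and information. What matters is the inventory of the loop: a target, an error signal, a correction, and no variable for what the worker needs.

```python
# Assumed numbers throughout; "pressure" is a hypothetical input lever.
target, gain, pressure = 100.0, 0.1, 0.0

def worker_output(pressure):
    # Illustrative response: output rises with pressure but saturates.
    # Pay, rest, dignity, and security appear nowhere in the model.
    return 80.0 + 15.0 * pressure / (1.0 + pressure)

for step in range(8):
    output = worker_output(pressure)
    error = target - output        # deviation from target = error signal
    pressure += gain * error       # correction applied to the worker
    print(f"step {step}: output {output:5.1f}, error {error:5.1f}")

# The response saturates below the target, so the loop escalates
# pressure indefinitely: correction is the only term the system has.
```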
Wiener put the comparison directly: factory automation and feedback-controlled labour management amounted to a new form of slavery — not metaphorically but functionally, in the specific sense that the worker’s behaviour was controlled by a feedback mechanism whose target state was set by someone else and whose deviation from that state was met with correction. The fact that the mechanism was mathematical rather than physical did not change its logic. His colleagues at MIT and RAND proceeded anyway. The Macy Conference framework that flourished was not Wiener’s semantically troubled, politically anxious version of cybernetics but Shannon’s clean, technically rigorous, militarily applicable version. When the British NHS adopted systems analysis in the 1960s and when HEW began applying PPBS budgeting to social programmes, they were applying Shannon’s bracket, not Wiener’s attempt to break it.
3. The System Dynamics of Poverty
In 1969, Jay Forrester published Urban Dynamics. He was a professor at MIT’s Sloan School, the inventor of system dynamics as a formal discipline, and a man with no prior expertise in urban poverty or housing policy. He brought to the problem a simulation language, a set of differential equations representing population stocks and flows in a generic urban system, and the conviction — which the Macy Conferences had made intellectually available to his generation — that any complex self-regulating system could be modelled by specifying its feedback loops.
The book’s central argument was that public housing construction and welfare transfers functioned as positive feedback loops that made urban poverty worse: by attracting more low-income migrants than the economy could absorb, they perpetuated the conditions they were intended to relieve. The model’s simulations showed that demolishing low-income housing stock and replacing it with housing for managerial and professional workers would generate economic revival. Daniel Patrick Moynihan, then Nixon’s urban affairs advisor, called it “a remarkable work.” The Boston Redevelopment Authority engaged with its conclusions. The model’s political utility is precisely what makes it a useful case study: its conclusions were not findings from an empirical investigation. They were the model’s assumptions made visible through simulation.
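The circularity is visible even in a toy version of such a model. The stock-and-flow sketch below is illustrative, not a reproduction of Forrester’s equations; every rate and functional form in it is invented. Its only purpose is to show that once in-migration is assumed to scale with the housing stock while job absorption is held fixed, the simulation cannot conclude anything except that construction worsens underemployment.

```python
# Toy stocks, invented values: housing units and underemployed residents.
housing, underemployed = 1000.0, 900.0

for year in range(30):
    # The decisive assumption: attractiveness to low-income migrants
    # rises with housing, while the economy absorbs a fixed number.
    attractiveness = housing / underemployed
    in_migration = 50.0 * attractiveness    # inflow to the population stock
    job_absorption = 40.0                   # outflow, fixed by assumption
    underemployed += in_migration - job_absorption
    housing += 20.0                         # the construction programme

print(f"after 30 years: underemployed = {underemployed:.0f}")
# The "finding" that construction increases underemployment is the
# in-migration assumption, handed back by the simulation as a result.
```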
Forrester’s model had no variable for racial discrimination in housing markets. By 1969, the evidence that racial discrimination was a structural determinant of American urban poverty was not contested — the 1968 Kerner Commission had stated directly that “white institutions created” the ghetto. Redlining had been documented since the 1930s. None of this appeared in the model. The variable “underemployed” aggregated Black and white residents in low-income employment without any feedback relationship representing differential access to housing credit, employment, or upward mobility. What the model could not see — racial discrimination, employer wage suppression, differential school quality — was not absent from the city. It was absent from the model. Having been rendered absent from the model, it became, for the purposes of policy, absent altogether.
4. The Behavioural Loop
In 1961, Teodoro Ayllon, working with Nathan Azrin, replaced the therapeutic arrangements of a psychiatric ward at Anna State Hospital in Illinois with a system of plastic tokens. Patients earned tokens for specified behaviours — dressing appropriately, attending group sessions, completing cleaning duties. Tokens could be exchanged for privileges. Ward staff shifted from carers to administrators of reinforcement. The evaluation of the token economy reported significant improvements in the target behaviours; it included no measures of patient wellbeing, psychological distress, or capacity to function outside the institution. The programme’s success criteria were its compliance criteria.
B.F. Skinner’s behaviourism performed the same displacement as Wiener’s anti-aircraft predictor: the human subject was modelled as a transformation function — inputs in the form of reinforcement histories, outputs in the form of behavioural responses, nothing in between that the science was required to attend to. The token economy scaled this architecture to an institutional setting, applying it to people who had not consented to being experimental subjects. Alan Kazdin’s 1982 review found that behavioural gains in institutional token economy settings did not generalise to non-institutional contexts. Patients who performed the required behaviours when tokens were at stake reverted to previous patterns when the token contingency was removed. The programme had produced compliance. It had not produced capability.
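The architecture fits in a few lines. A schematic sketch with invented parameters: behaviour is modelled as a function of the current contingency and nothing else, which makes Kazdin’s generalisation finding a built-in consequence rather than an empirical surprise.

```python
def response_rate(token_value, baseline=0.2, sensitivity=0.6):
    # Hypothetical input-output transformation: reinforcement in,
    # behaviour out, no state in between for skill or wellbeing.
    return baseline + sensitivity * token_value

print(response_rate(token_value=1.0))  # tokens at stake on the ward: 0.8
print(response_rate(token_value=0.0))  # contingency removed: back to 0.2

# Nothing in the model persists once the tokens stop, so compliance
# cannot accumulate into capability.
```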
The Work Incentive Program, enacted in the 1967 Social Security Amendments, was the first large-scale statutory application of this logic to welfare recipients. WIN required able-bodied AFDC recipients to register for work or training as a condition of benefit receipt, failing which benefit was reduced. WIN’s evaluations found consistently that it did not increase employment rates among participants. The compliance behaviours it rewarded — registration, attendance at appointments — were not the same as the capabilities that would have enabled participants to find and keep jobs. Each subsequent reform — WIN II, JOBS, TANF, and the Universal Credit claimant commitment — intensified the conditionality architecture. None resolved the fundamental problem, because the framework had no variable for structural conditions: the absence of jobs, the cost of childcare, the geography of labour markets. It had reinforcement schedules and compliance targets.
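The gap between compliance and capability can be written down directly. The rule below is schematic, with invented benefit levels and thresholds rather than the WIN statute: the sanction responds only to compliance signals, while employment depends on structural variables the rule has no way to reference.

```python
def benefit(base, registered, attended):
    # The sanction rule sees compliance acts and nothing else.
    sanction = 0.0 if (registered and attended) else 0.4
    return base * (1.0 - sanction)

def finds_job(vacancies_per_seeker, childcare_affordable):
    # Structural conditions with no representation in the sanction rule.
    return vacancies_per_seeker > 1.0 and childcare_affordable

print(benefit(100.0, registered=True, attended=True))   # 100.0: compliant
print(benefit(100.0, registered=True, attended=False))  # 60.0: sanctioned
print(finds_job(0.6, childcare_affordable=False))       # False either way
```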
5. Social Physics Redux
In 2014, Alex Pentland published Social Physics: How Good Ideas Spread, opening with an explicit citation of Adolphe Quetelet as his intellectual precursor. Quetelet had proposed in 1835 that the same normal distribution describing astronomical measurement error also described human social behaviour. Pentland claimed that what Quetelet lacked — continuous behavioural data at population scale — the MIT Media Lab now possessed through mobile phone logs, wearable sociometer badges, and real-time interaction tracking. The project that Quetelet had initiated but could not complete was now completable. Pentland connected himself, explicitly, to a tradition that his critics identified as a tradition of political violence dressed in statistical clothing.
The framework Pentland built applied idea-flow network analysis to poverty in low-income communities: if poor communities have suboptimal social networks — too little exploration, too much insularity — then improving their idea flow will enable them to adopt more productive behaviours. The Oxfam review of Social Physics identified the assumption precisely: Pentland’s poverty analysis takes it for granted that the barrier to productive behaviour is informational rather than material — that poor communities lack good ideas rather than lacking income, assets, infrastructure, or access to credit. The structural determinants of poverty — low wages, inadequate housing, restricted labour markets — have no representation in the idea-flow model, just as they had none in Quetelet’s error-theoretic account of moral deviation. Pentland’s social physics reinstalls the behavioural explanation with better sensors.
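What the idea-flow variables can and cannot express shows up in even a toy version of the metric. The network, the community, and the “exploration” score below are invented for illustration and are not Pentland’s sociometric pipeline; the point is that every quantity in the model is informational, so the diagnosis it produces can only ever be about ties and flows.

```python
# Hypothetical contact network: person -> set of contacts.
ties = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "x"}}
community = {"a", "b", "c"}

# "Exploration": the share of the community's ties that leave it.
outside = sum(len(ties[p] - community) for p in community)
total = sum(len(ties[p]) for p in community)
print(f"exploration = {outside / total:.2f}")  # 0.14: one tie in seven

# A low score diagnoses "insularity" and prescribes better idea flow.
# Wages, rent, and credit access cannot appear in the diagnosis,
# because they cannot appear in the variables.
```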
Connection Forward
Chapter 6 follows cybernetics’ most consequential institutional vehicle into welfare governance — the RAND Corporation’s transplant of operations research from the Pentagon to the social policy arena, where the cost-effectiveness objective function replaced the question of material need with the question of programme efficiency.