The American left is now waking up to the fact that it slumbered while a historic opportunity to aid the poor passed it by. When Bill Clinton signed the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, he changed the moral debate over poverty. No longer could Ronald Reagan’s heirs wave help-wanted ads and ask sarcastically why those welfare queens couldn’t find a job when there were so many to be had. Now everyone was working, training for a job, or looking for one. It was the law, after all.
While many pundits would credit the “workfare” revolution to conservative think tanks and their proponents, the underlying force propelling Republican-led welfare reform lay at the intersection of demography and economics. When Aid to Families with Dependent Children was created under Franklin D. Roosevelt’s Social Security Act of 1935 (originally as Aid to Dependent Children), it was meant to serve as insurance for (white) widows with children—specifically, to keep them home as caregivers. As non-marital childbearing rates rose, it quickly transformed into the quintessential welfare program as we came to envision welfare in the late 20th century: financial support for young, non-widowed single mothers.
Meanwhile, the labor force was drastically changing: female labor-force participation rates doubled between World War II and 1990. Looking back, it is no wonder that workfare replaced welfare. Social norms had shifted radically as most women began working and a reasonable standard of living increasingly required two incomes. Under these changed conditions, politicians—and the public—were simply not going to pay poor women to stay home with their children when everyone else, male and female, was working longer and longer hours and seeing their children less and less. So with workfare, the system was turned on its head: mothers were paid to leave the home when once they were paid to stay there.
With benefits linked to work, Democrats could have argued for a much more generous safety net. But they missed the opportunity to rethink the social compact in this new moral universe. The federal minimum wage has not been raised since 1997, and the last significant expansion of the federal Earned Income Tax Credit came in 1993. So we have failed to deliver on two work-linked policies with broad political support. Never mind rethinking the entire welfare state.
But while most left-leaning politicians have been nowhere on the issue for the last decade, significant advances have been made in researching the effect of income, wealth, and neighborhood on the opportunities of the poorest Americans. So before we move forward to revise the social safety net for the 21st century, we should consider what we have learned from poverty research since the time of President Lyndon B. Johnson’s Great Society initiative.
In particular, what answers does this research suggest to the most basic questions concerning the roots of poverty and its remedies? For example, how much does income itself matter? Would raising the incomes of the least well off be enough to break the intergenerational cycle of poverty? Put another way, is poverty cause (as the left would like to believe) or effect (as the conservatives tell us)?
Or is poverty perhaps neither cause nor effect but rather a reflection of an underlying social disease—namely, economic segregation and the time bind that low-wage parents face? Significant new research suggests the truth of this third possibility, which may augur a completely new form of social policy—addressed to time and place, not just money—and in turn, a way for the left to reconceptualize the safety net for the postindustrial economy. But in order to think freshly about issues of opportunity, we first have to look back and figure out how we got here.
In 1968—one year before the start of the rise in income inequality that continues unabated today—the anthropologist Oscar Lewis wrote an article entitled “The Culture of Poverty.” This phrase, which Lewis used to describe a supposedly self-defeating set of practices of poor Mexican peasants, took root in the American public consciousness. The argument was that poor people adopt certain practices that differ from those of “mainstream” society in order to adapt and survive. In the U.S. context these might include illegal work, multigenerational living arrangements, multi-family households, serial relationships in place of marriage, and pooling of community resources as a form of informal social insurance (otherwise known as “swapping”). Each of these cultural practices is taken to be a rational response to a tenuous financial situation. But once these survival adaptations are in place, they take on a life of their own and end up in the long run holding poor people back.
The culture-of-poverty thesis resonated with Daniel Patrick Moynihan’s controversial 1965 report on the “Negro family,” in which he argued that a tangle of family pathology held back African-Americans. By analogy, the cultural arrangement of the black family—matrifocal and multigenerational—was taken to be the cause, not the effect, of African-American economic problems. The tangle-of-pathology and culture-of-poverty arguments engulfed much of sociology at the end of the 1960s, at the same time as fires spread across riotous urban America. This intellectual revolution, if it was that, peaked in 1970, when the Harvard political scientist Edward Banfield wrote a book applying it explicitly to the U.S. context, The Unheavenly City.
It seems that Lyndon Johnson’s war on poverty had—in short order—engendered a sizable backlash. A considerable research effort over the next few years went into rebutting the claim—implicit in the culture-of-poverty thesis—that poverty is not the cause of the poor’s ills, but rather the effect. And the core of that effort was the largest social experiment in our nation’s history: the negative income tax.
At several sites around the country—in New Jersey, Seattle, and Denver—scientists enrolled poor people in treatment and control groups. Members of the control groups got their welfare checks or their wages as before. Members of the treatment groups received a guaranteed check, which the government “reclaimed” through taxation as earnings rose, up to a crossover point beyond which households would start paying “positive” taxes—hence the name. The results confirmed the left’s worst fears: women left marriages in droves, and unemployment spells increased in duration. For example, the Stanford Research Institute found that in Seattle and Denver the treatment groups reduced their work effort by an average of 9 percent for husbands and 18 percent for wives. This suggested, according to Jodie T. Allen, an analyst who wrote about the experiments in Designing Income Maintenance Systems, that as much as 50 to 60 percent of the money paid to two-parent families under a negative income tax would serve to replace earnings. Liberals were burned by their own data.
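To see why the work-incentive worries loomed so large, it helps to spell out the stylized arithmetic of a negative income tax. (The guarantee and tax rate below are hypothetical round numbers chosen for illustration, not the actual parameters of the New Jersey, Seattle, or Denver experiments.) A household with no earnings receives a guaranteed payment G; as earnings E rise, the payment is taxed away at a benefit-reduction rate t until it reaches zero at the break-even point, beyond which the household pays ordinary, positive taxes:

\[
\text{payment}(E) \;=\; \max\bigl(G - tE,\ 0\bigr),
\qquad
E^{*} \;=\; \frac{G}{t} .
\]

With, say, G = $4,000 and t = 0.5, a family earning $3,000 would receive $4,000 − (0.5 × $3,000) = $2,500, and the payment would phase out entirely at $8,000 of earnings. Under such a scheme every dollar earned below the break-even point costs the family 50 cents in benefits, which is exactly the kind of implicit tax on work the experiments were designed to measure.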
Then, for the next decade or so, there was radio silence. Rather than risk blaming the victims (or proving the right wing’s case for them), the generally left-leaning community of poverty researchers avoided the issue of cause and effect. But the right was not comparably quiet. Writing in The New Yorker in 1981, the journalist Ken Auletta inaugurated the concept of the “underclass”—the culture of poverty super-sized. Not only were the poor different in their inability to take advantage of what mainstream society had to offer, the theory went; they were increasingly deviant and even dangerous to the rest of us.
As left-leaning academics were trying to debunk the underclass thesis, they were once again outflanked on the right. The conservative social critic Charles Murray made the left’s argument for them: the poor were no different from the rest of us; they responded rationally to economic incentives. According to Murray, poverty per se was not the culprit, nor even was the culture of poverty; the poor, he argued in his 1984 book Losing Ground, were the victims of an ever-expanding welfare state that provided the wrong long-term incentives. To make his case, Murray pointed to the expansion of welfare over the same period in which crime, the proportion of unwed births, and the duration of unemployment all rose. And then, to bolster his claim, he rolled out the results of the old negative-income-tax experiments.
Progressives found themselves trapped. In order to argue against the underclass concept, they had to show that the poor were no different from the rest of us but were merely responding to a lack of opportunity. But that played right into Murray’s thesis about the perverse incentives of welfare. In order to stake out an alternative to both positions, prominent sociologists such as William Julius Wilson argued that welfare was really a minor consideration in shaping the labor and marriage markets in inner cities. The real problem was a lack of jobs and of marriageable, employed men. Deindustrialization, globalization, suburbanization, discrimination, gentrification, and a host of other ills were the real culprits, according to Wilson and his intellectual coterie.
The new analysis came with a new strategy: to fight poverty, “make work pay.” Hence the abandonment of dreams of a Swedish-style welfare state in favor of targeted, work-friendly policies like the expansion of the Earned Income Tax Credit. The left pushed to raise the income limits on Medicaid so that the poor would not lose their health insurance as they left welfare for the low-wage labor market. (In fact, that was one of the rationales behind pushing for universal health coverage in 1993—before welfare reform.) And, of course, there was the Personal Responsibility and Work Opportunity Reconciliation Act, otherwise known as the end of welfare as we know it.
Ultimately, poor adults were all but forsaken in favor of the “deserving poor”—which meant children, a focus that allowed the left to blunt conservative critiques about perverse incentives. Instead of universal health insurance, progressives would push (successfully) for the Children’s Health Insurance Program. Instead of more income support for families, third-way Democrats would propose Children’s Savings Accounts, which parents couldn’t touch (a proposal that has recently been reborn as ASPIRE accounts, with bipartisan support in Congress). Even as late as the 2000 presidential campaign, Al Gore was arguing for universal day care. After all, should children suffer for their parents’ poor choices?
About the same time welfare reform went into effect, a third, almost fatal blow was dealt to the left’s arguments for an expanded safety net and a renewed focus on the well-being of poor children. It came from an unlikely source: Susan Mayer, a sociologist who wrote a book called What Money Can’t Buy: Family Income and Children’s Life Chances. In it, she argued that the effects of income poverty on children have been vastly overstated. Sure, study after study has shown that being poor as a child is associated with poor health, behavioral problems, bad grades, teenage pregnancy, dropping out of school, and, ultimately, being poor as an adult. But documenting an association is quite different from proving cause and effect. Mayer used a series of clever devices to make the case that the real impact of income on children is somewhere between trivial and minor. For example, she examined the relationship between parental income and the investments that are supposed to matter for young children. She found a relatively weak relationship between income and the standard measures that psychologists have used to rate the educational environment of the home—such as the presence of books and educational toys. Meanwhile, the household conditions that are highly responsive to income—such as spending on food, the size and value of the home, and car ownership—were only weakly related to children’s outcomes.
A similar policy conclusion had already been pressed with greater polemical energy and much less nuance by Charles Murray and his new coauthor, Richard Herrnstein, in their 1994 book The Bell Curve: the same traits that make adults economically successful make them good parents. While Mayer focused on the likelihood that the skills that lead to higher incomes also make for good parenting, Murray and Herrnstein went a step further, arguing that what really matters in good parenting is high IQ, which, of course, comes from genes. According to Herrnstein and Murray, successful parents had fortunate genes and passed those genes on to their kids. Thus the poor are increasingly beyond help, since it is their genes’ fault that they are at the bottom of the ladder. So why waste any money trying to help them climb out of poverty?
It was classic Murray: rhetorically slick, elegantly straightforward, and almost impossible to refute directly, since there was little in the way of testable hypotheses in his argument. Doubts aside, if either Murray and Herrnstein or Mayer were correct, it meant that the arguments of the zero-to-three crowd—that a dollar spent on prenatal care, child care, or reducing child poverty yielded several dollars of savings in the long run—were off the mark.
The poverty-research community once again found itself on the ropes. Just showing an association between growing up poor and any adverse outcome was no longer good enough. Now it had to be established whether poverty was just a side effect of social conditions, or whether it actually limited children’s opportunities. Finally, some sociologists and economists began to respond with a new set of experiments. The last time that researchers ventured into the world of socially engineered experiments, the result was disastrous for the cause. What would make this time any different?
In 1966 a Chicago public-housing resident named Dorothy Gautreaux became the lead plaintiff in a class-action suit alleging that public housing was serving as de facto government segregation. As a result of her lawsuit, a new social experiment of sorts was undertaken. A settlement between the residents’ attorneys and the U.S. Department of Housing and Urban Development produced a housing-voucher plan to help 7,100 families in public housing secure homes in the private rental market, mostly in the more affluent Chicago suburbs. The goal was to disperse the families into neighborhoods with less than 30 percent minority residents. In addition to the vouchers, participants received counseling and rental-referral services to help them locate available units. A study by the Northwestern University sociologist James Rosenbaum, undertaken years later, found that those who had moved out of the ghetto and into low-poverty areas had better employment outcomes, and their kids improved on a number of measures.
The problem was that Gautreaux wasn’t really an experiment. The “moving” group was self-selected, so there was no fair comparison with a real control group. Furthermore, researchers were able to track down only 60 percent of the original movers for the second interview, in 1989, so the movers’ better outcomes might simply reflect the fact that the successes were easier to find.
So in 1994 along came a new study, “Moving to Opportunity,” which attempted to pick up where Gautreaux left off. The sample of families was larger: 4,600. Families were randomly assigned to one of three groups: one third received no housing voucher at all; another third received vouchers with no restrictions on where they moved; and the final—“treatment”—group got vouchers restricted to areas with poverty rates below 10 percent, along with assistance in locating units and tutorials in basic life skills such as balancing a checkbook, yard work, and lease negotiation. It turned out that those in the treatment group reported feeling less stress from violence and other factors; they were happier and healthier. The effects were most significant among children: school truancy and delinquency dropped, victimization dropped, and health improved.
That’s the good news. The bad news was that there was little short-run difference across the groups in welfare usage, employment, or earnings, and no effect on children’s test scores. The biggest return on investment seemed to come in the form of social or cultural improvements, while economic opportunity—such as parents’ success in the labor force—seemed less responsive. Read as a whole, the results so far suggest that when poor people all live together, there is a negative multiplier effect of risk. The results of the “Moving to Opportunity” study serve as an indictment both of the social problems rampant in poor neighborhoods and of the larger society that has invested in gated communities, suburban sprawl, and other forms of economic (and racial) segregation. The study seems to imply that if income is not the main problem, social division is—that is, place and segregation.
We see a similar dynamic when it comes to where kids go to school (controlling for where they live). A study in Chicago—limited to public-school choice—found that students who won lotteries did indeed attend better schools, but they did no better academically. That said, they did show improvements in disciplinary problems as measured by self-reports and arrest records. This result—from Freakonomics co-author Steven D. Levitt and two other researchers—bears an eerie similarity to the “Moving to Opportunity” results: it is behavior that improves when at-risk kids are “scattered” among the rest of us. A similar New York City experiment—this one focusing on private-school vouchers—was based on a philanthropist’s promise to fund 1,300 kids to attend private (mostly parochial) schools; 11,105 eligible students applied. The study initially appeared to show some achievement gains, for black students in particular. But more careful analysis by the economists Alan Krueger and Pei Zhu showed that this was largely a mirage.
So the bottom line on the geographical organization of poverty is that the effects are bigger for children than for adults (no surprise there) and that they hold for health and behavioral measures more than for academic ones. This suggests a possible rapprochement between the culture-of-poverty people and the Wilsonians: perhaps the culture of poverty is magnified when the poor are socially segregated—at home or in school—owing to peer effects. In other words, behavior matters, and context matters for behavior (even if the effects are rather modest). As for academic outcomes, is it any surprise that kids don’t catch up in a couple of years when put into a better social context? At least they are behaving better; over a longer time frame this may indeed lead to better “cognitive” outcomes too.
As for family income itself, there has been no social engineering since the days of the negative income tax. Ironically for the left, the best evidence in its favor about the impact of income comes not from purposive social experimentation but from accidental science—natural experiments that researchers have exploited to examine the effect of money on people’s lives. Adding to the irony, the two best recent studies both involve gambling. Guido Imbens, Donald Rubin, and Bruce Sacerdote surveyed people who played the lottery in the mid-1980s and found, among other results, that those who were not in the work force before winning increased their commitment to work after receiving their prize. I take this to mean that those at the very bottom face significant financial obstacles to fully participating in the economy. Perhaps they use their winnings to buy a car to get to work, or to put a deposit down on an apartment, or simply to stop wasting their time jumping through welfare hoops. Meanwhile, those who were already working (at low-wage jobs) did reduce their hourly commitment to the labor force. It seems that folks on the low end want to work; they just don’t want to work all the time merely to make ends meet.
A second study looked at Cherokee children who experienced a windfall in income thanks to the bizarre arrangements of legalized gambling on American Indian reservations. The casino distributions in North Carolina lifted 14 percent of the Cherokee families out of poverty. Among those families, children’s behavioral problems diminished, largely as a result of the additional time parents had to supervise their children. (Again, fewer behavioral problems may translate into better academic performance in the long run and thus better economic prospects in adulthood.) The issue of parents having time to spend with their kids should appeal across partisan lines; in fact, it should be the “family values” issue of the Democratic Party. It makes one wonder whether the negative-income-tax liberals were asking the wrong question 30 years ago: they should have been focusing on the kids rather than on the parents who received the checks. Maybe there’s still time for a reunion of the negative-income-tax kids and a suitable control group. Evidently, we should have been thinking in units of “generations” rather than years.
We cannot think about how place affects poverty without reference to time. The two are interwoven: historically high levels of income and wealth inequality, mixed with economic segregation, mean that those at the bottom—and perhaps even those in the middle—have to drive farther and farther through congested traffic to reach their low-wage jobs serving the needs of the wealthy. This intersection of inequality and real estate has reached absurd dimensions in Aspen, Colorado, where tourist-industry workers often spend hours in their daily commutes (hence Robert Frank’s term “the Aspen effect”). Of course, if additional time spent supervising children is what helps break the cycle of poverty, then these daily marathons may do more harm than good (even setting aside the effects of auto emissions on air quality). We cannot think of place and time as isolated; they are the fabric of the social universe, and any viable policy has to address both.
The good news is that once we realize that breaking the intergenerational cycle of poverty means addressing economic segregation and the family time crunch, we can begin to rethink social policy in a more creative fashion. One way to address the time crunch, of course, is to give parents money (or higher wages) so that they can work fewer hours. But if that is unpalatable, then we can skip the money and give families time. Such an approach might involve, for example, mandated paid family leave of the sort most other rich democracies offer their workers. But it also might involve urban planning that ensures low-income housing is situated in accessible neighborhoods, so that service workers don’t suffer such long commutes and poor children who attend decent schools don’t have to sit on school buses for hours a day. Or it could mean better rapid transit.
On the other hand, it is a tricky business to relocate families and thereby constrain housing choices—especially when racial politics is involved. And more recent ethnographic work suggests that mixed-income communities may not be the Shangri-Las they are advertised to be. The sociologist Mary Pattillo, for example, has found that poor families in Chicago’s mixed-income developments feel misunderstood and “surveilled” by their higher-income neighbors, producing social tensions and divisions that may defeat the whole purpose of economic integration. So maybe we should be worrying about how easily low-income workers can reach their jobs rather than about the exact mix of incomes where they sleep. If place were thought of in terms of time (commutes to work and to good schools), then a new urban policy might come into focus.
Fixing the geography of inequality and making more time in the day are tall tasks, of course. Even middle- and upper-class parents are working more hours per week than ever before (and more than parents in any other nation). What makes the situation particularly toxic for poor kids is that while their parents are working, the kids are spending that time in unstructured activity. While middle-class parents have signed their kids up for ever more soccer leagues, music lessons, and myriad other after-school activities, the poor do not have that luxury. Their kids disproportionately spend their time “hanging out,” as Jason DeParle observed in American Dream, his chronicle of three welfare families struggling through the era of reform in Milwaukee. A recent study by Annette Lareau, Elliot Weininger, and me shows this statistically: outside of school, the time use of well-off and poor kids could not be more different. Poor children spend 40 percent more time in unstructured activities than middle-class kids do. So if we can’t get the parents home to spend more time with their kids, we can at least stitch together a 24-hour social safety net in the form of mandatory full-day kindergarten; universal pre-kindergarten; funding for after-school programs; and, yes, midnight basketball leagues (a much-ridiculed part of the crime bill President Clinton signed in 1994).
“The days are endless and the years fly by,” goes the parenting adage. Social policy can learn something from this. We need to think in terms of hours in the day when it comes to bettering the lives of poor children, and we must be patient about outcomes, waiting years, perhaps, to realize the payoff. Why not? We’ve waited long enough already since LBJ first declared war on this scourge of American society, and we’ve learned something in the meantime. Now let’s put that knowledge to use.