Public science

Does science have anything worthwhile to say to the public? Should it even aspire to relevance outside of narrow disciplinary communities? And if so, how should we conceptualize the contributions that it is capable of making to public deliberation, or civic life more generally? Today I want to look at a few interesting recent articles that have posed some variant on these questions, the bread and butter of historians of science and other science studies scholars.

First is a piece in the Intercept by Kate Aronoff about climate change “half-measures.” As her title suggests, she argues that the politics of half-measures — faith in a technological deus ex machina like geoengineering, assertions that corporate benevolence or market-based innovation will lead to spontaneous decarbonization, and so on — amounts to “denial by a different name.” And a particularly insidious kind of denialism at that, because its proponents are free to boast about accepting the scientific consensus on climate change, despite the fact that their positive policy vision is almost indistinguishable from that of the more bite-the-bullet denialists.

I think this argument is completely correct. It’s a point that’s been made before: Naomi Oreskes ruffled some feathers a couple years ago by using the denialist label to characterize some of the more extreme rhetoric about nuclear energy, and Aronoff’s case is clearly indebted to influential work on the relationship between climate change and neoliberal politics by Philip Mirowski (whom she cites) and Naomi Klein (whom she does not). Still, Aronoff assembles the pieces of the puzzle with admirable clarity. With even ExxonMobil patting itself on the back for acknowledging the existence of the greenhouse effect, and Scott Pruitt, now the denialist-in-chief, equivocating on his stance on climate science at his confirmation hearings, it is more crucial than ever for climate activists to move beyond simply insisting that “climate change is real.”

But Aronoff doesn’t just denounce half-measures. She also targets activists whose rhetoric makes “it seem like climate change is a primarily scientific issue, rather than an economic, political, or moral one.” In an age of renascent populism, she claims, activists should galvanize action by talking up the economic benefits of robust climate policy for people who currently feel dependent on the fossil fuel industry for their livelihoods – green jobs, wealth redistribution, and so on. If conservatives actually use the rhetoric of denialism only instrumentally — gleefully changing their stance on “the science” depending on audience or context or what argument they’re trying to make — then perhaps the idea that climate change is “really” a scientific issue is, after all, a “red herring.”

Here is where I think some difficulties start to crop up. Taking a step back, it isn’t entirely clear what it means to say that climate change is or isn’t a “scientific issue,” or more specifically why thinking about climate change “scientifically” ought to imply political moderation at all. Aronoff herself, after all, is implicitly relying on a lot of science in her criticism of climate moderates who cast their position as “scientific.” She clearly believes that as a matter of fact, the market-based techno-fix approach will not be capable of achieving the emissions reductions necessary to avert catastrophic climate change. That is a (social and natural) scientific claim!

The problem here lies with the phrase “rather than”: scientific rather than political, economic, etc. Aronoff’s critique seems to simply invert this formulation rather than challenging its premise: that a social issue is apolitical to precisely the extent that scientific knowledge can be brought to bear on it, or put the other way, that scientific knowledge is irrelevant or even threatening to the processes of collective contestation and deliberation that surround genuinely political issues.

But Aronoff’s own analysis of climate change shows the limitations of this view. She supplies us, implicitly or explicitly, with answers to an instrumental question (what kind of social action would be required to prevent a specific climate nightmare scenario?); a normative-political question (what kind of social action ought we to take collectively in general?); a different instrumental question (how should climate activists communicate in order to bring about their desired program for social change?); and a descriptive-historical question (how profound has the commitment to “denialism” of key right-wing socio-political actors actually been?). To argue about whether an issue that raises a set of questions this rich and diverse is scientific “or” political seems quite useless to me.

All of these questions are distinct but complementary. That is to say, the answer to any one doesn’t determine the answer to any other, but none suffices on its own to give a complete analysis of “the climate issue.” It is impossible to think rationally about what we ought to do about climate change while remaining agnostic about what the real-world consequences of various courses of social action will be. But the inquiry that can help clarify those consequences does not emerge spontaneously. It requires deliberate effort: initiative formed within a specific horizon of conviction about what matters.

One comparison that might be useful for some readers is to the feminist political philosopher Nancy Fraser’s multidimensional theory of justice. Fraser claims that justice requires a commitment to what she calls “recognition,” the ability of all people to participate as peers in social interaction, and “redistribution,” the egalitarian provision of economic resources necessary to satisfy material needs (more recently she has added “representation,” the ability of all people to participate in political decision-making that concerns them, as well). Any political vision that addresses patterns of exclusion and marginalization without challenging structures of economic exploitation, or vice versa, is incomplete: in my phrasing, the two (or three) tasks are distinct but complementary.

This complementarity is due in no small part to the dialectical relationship in which recognition and redistribution stand. Social marginalization is often caused by patterns of economic maldistribution, but such patterns are often themselves parasitic on misrecognition: witness the historical dependence of American capitalism on unpaid labor extracted from black people on cotton plantations and from women in middle-class households. We can accurately understand this dialectical process, however, only by distinguishing between misrecognition and maldistribution in the first place. Otherwise, for instance, we will think that the empowerment of women will necessarily dismantle capitalism, or that universalist programs for wealth redistribution will be sufficient to end American racism.

Now we can go back to science and politics: I want to claim that a similar dialectical relationship exists here, a relationship that we can also understand only if we are willing to draw a conceptual distinction between the two. In the scientific realm, by describing natural and social structures, we characterize the consequences of imagined programs of political action or inaction, orienting our practices with reference to consciously chosen political commitments. In the political realm, we reason together on the ends that we think justice compels us to pursue throughout society (including in scientific institutions), and we organize collectively to fight for programs of action that our best scientific knowledge tells us can bring about those ends. When this process functions productively, the social process is effectively guided toward the fulfillment of higher-order collective goals: what one might be so bold as to call “democracy.” When it is malfunctioning, we get the disarray, for instance, of contemporary climate politics, as illustrated (intentionally and unintentionally) by Aronoff’s analysis.

As Fraser insists, the practice of distinction-drawing doesn’t need to wind up creating hierarchies. It can also be an aid to critical reflection. Indeed, in the climate case, conceiving of science and politics as distinct but complementary domains, each with their own dignity and rationality but embedded in a dialectical relationship, helps short-circuit disputes about which category ought to be “on top,” disputes produced by thinking about them as competing claimants to the same “territory.”  We can insist that such disputes are wrong: that to panic about “politicized” climate science (as if science ought to have nothing of relevance to say on “political” issues) on the one hand, or to suggest that criticism of nuclear power or geoengineering is “anti-scientific” (as if showing the existence of a particular technological capacity was sufficient to show the goodness of its unlimited usage in any social circumstance) on the other hand, is in both cases to fundamentally mistake the nature of science and politics. (And we can start to see technocracy and antiscience as two sides of the same coin.) We can move, as Marx put it, from describing the world to changing it: from arguing about whether climate change is scientific or political to acknowledging the magnitude of the collective choice that science has placed before us, a choice that it cannot make for us, but which we cannot escape from making.

Andreas Malm makes a similar argument in his new book The Progress of This Storm, about the distinction between nature and society. Making the appropriate substitutions, everything said above about the distinct but complementary, dialectically related nature of science and politics (and the usefulness of thinking about these categories this way) can be said of nature and society too. Natural structures brought about, and continue to nurture, the existence of human beings, who are capable of forming societies with their own internal relations not reducible to “deeper” natural processes. Human societies in turn reshape nature through processes of resource extraction and utilization necessary to satisfy metabolic needs. When this relationship is functioning productively, human societies can pursue higher-order, autonomously chosen projects without jeopardizing the underlying processes that sustain life: what one might be so bold as to call “sustainability.” When this relationship is malfunctioning, we get the disarray, for instance, of anthropogenic climate change.

Conceiving of nature and society this way helps us critique intellectual and political visions that efface or subordinate one or the other. We can say that it is fundamentally misguided to argue, as such strange ideological bedfellows as “sociobiologists” and the “new” or “vital materialists” are wont to do, that the role of human agency in producing social outcomes (like the nightmare of fossil capitalism) is small or nonexistent compared to the influence or “agency” of nature. And we can say that it is just as misguided to argue, as the free-market environmentalist Stewart Brand once famously put it, that “we are as gods,” and should not doubt the power of human ingenuity to attain whatever ends we set our minds to without having to consider the natural structures that may block our way. (And we can start to see naturalistic reductionism and techno-optimism as two sides of the same politically complacent coin.) We can move again from describing the world to changing it: to identifying precisely what about our world we have made, and therefore what precisely we can remake.

The distinction between nature and society as domains of reality doesn’t map one-to-one onto the distinction between science and politics as domains of human thought and activity. There are natural and social sciences alike, and the number of issues where political reflection can get away with thinking about society but not nature is not large. Still, it’s not surprising that the climate crisis underscores the salience of both distinctions. Both are different paths of approach into the distinction between the real and the rational, perhaps the most important distinction to draw in the midst of what the novelist Amitav Ghosh calls the Great Derangement. Now more than ever we need ways of putting our taken-for-granted practices into question, of insisting that a better world is not just possible but necessary.

I used to like the language of “coproduction,” developed by science studies scholars like Sheila Jasanoff and Bruno Latour, as a way to understand the relationship between science and politics. Now, for a variety of reasons that I’ve spelled out at much greater length elsewhere, I find that work less compelling. And I think this theme — the need to draw certain kinds of distinctions in order to critique social reality — is perhaps the most important cause for concern. Very briefly, “coproductionist” scholars envision “science” and “politics” (as well as “nature” and “society”) as competing discursive labels for the same underlying “stuff.” They take their task to be to describe the way that those categories get “stabilized” as the outcome of a game-like process of interaction between agents. There is no such thing as science or politics or nature or society as such; there are only things that come to be taken, at particular times and places, as scientific or political or natural or social.

This body of work blurs distinctions with gleeful abandon in theory: between science and politics, nature and society, discourse and material reality. But it is not always easy to know what to do with these theoretical moves in practice. Latour, in fact, has been explicit in his rejection of “critique.” In his book Reassembling the Social he inverts Marx: “Social scientists have transformed the world in various ways; the point, however, is to interpret it.” But such an ascetic refusal of normative judgments is easier said than done. When coproductionist scholars do make critical interventions despite themselves, they usually fit somewhere in the technocracy/antiscience or agency-minimizing pessimism/techno-optimism binaries sketched above. And they aren’t shy about picking up both ends of the stick. Latour, while warning about the threat of science working to prematurely shut down political deliberation, has simultaneously written for the Breakthrough Institute, a techno-optimist think tank inspired by the work of Stewart Brand.

This incoherence is the predictable consequence of a worldview that regards any confident invocation of the “scientific” or “political” (always in scare quotes) as a power grab in different garb, as a strategic move in a game (regular readers may notice some resonance with James Buchanan’s public choice theory, described in my last post). The only real sin is to attempt to short-circuit the endless social process that makes and unmakes claims to scientificity or sociality or whatever. And the only virtue is “openness” or “inclusion,” the expansion of restricted debates (particularly in science) to encompass as many perspectives and contributions as possible, no matter their source. To defuse the threat of illegitimately arrogated authority conjured up by the labels of science and politics, each must be tamed — and made indistinguishable from the other in the process — by ensuring that every side of every argument is taken into account at every point in time: and if that means that no consensus emerges, that no decisions of any real import are made, so be it (or all the better).

The slippery consequences of “open science” in practice are illustrated by another recent article, a piece in The Atlantic by James Somers titled “The Scientific Paper Is Obsolete.” That headline is a bit misleading, because the substance of the piece provides both less and more than it promises. Less, because Somers doesn’t provide any real argument against the journal article as such. More, because his real purpose turns out to be a defense of a specific vision for the entire scientific enterprise, above and beyond publishing.

Somers observes, accurately, a “computational” turn in a wide range of scientific disciplines. Computational science means, as the name suggests, the use of computer technology to pursue a particular approach to scientific problem solving that emphasizes simulation, the development of algorithms for handling complex computations, and the use of large data sets that would have been infeasible to process not long ago. Somers makes two additional observations that I think are also correct: that the format of the traditional journal article is ill-suited for reporting on the practice of computational science, and that the computational turn has helped to bring science into increasingly close contact with private industry.
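For readers who haven’t encountered this style of work, here is a toy sketch of what the computational approach looks like in practice (my own illustration, not drawn from Somers’ article or any particular discipline): rather than deriving a closed-form result, one simulates many trials and summarizes the output, a workflow that fits far more naturally into an executable notebook than into a static journal article.

```python
# Toy illustration of the "computational" style: simulate a simple
# one-dimensional random walk many times and summarize the results.
# (A hypothetical example, not taken from Somers' article.)

import random
import statistics

def simulate_random_walk(steps: int) -> int:
    """Return the final position of a one-dimensional random walk."""
    position = 0
    for _ in range(steps):
        position += random.choice((-1, 1))
    return position

if __name__ == "__main__":
    random.seed(42)  # fixed seed so the "experiment" can be rerun exactly
    finals = [simulate_random_walk(1000) for _ in range(10_000)]
    # Summary statistics stand in for the figures and tables of a paper.
    print("mean final position:", statistics.mean(finals))
    print("spread of final positions:", statistics.pstdev(finals))
```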

The problem is that, deprived of any normative framework for assessing scientific practice, Somers has no choice but to conclude that this is what science is now, that the colonization of every scientific discipline by the computational paradigm (including the social sciences – of which more anon), the death of the scientific journal, and the exodus of scientists from universities into corporations are done-deal developments: the task is adaptation, not critique. (Notice any similarities to climate change politics?)

Somers draws heavily on the work of one of computational science’s most influential proselytizers, Stephen Wolfram — and his faith does seem to waver when he acknowledges that all of Wolfram’s evangelism does double duty as an advertisement for his own proprietary software system, the state of the art in scientific computation. But Somers’ conscience is salved by his discovery that Wolfram’s monopoly is not total, and an “open-source” alternative called Jupyter has recently gained traction. It is now used by ordinary-joe “musicians [and] teachers” as well as the big boys at “Google [and] Bloomberg.” Because it’s open-source, all those users can actually make modifications to improve the program, without waiting for the annoying kind of community review process enforced by obsolete journals. Thank God! Now it’s not only Ph.D. scientists who can help improve the tools tech companies use to accumulate profit: musicians and teachers can join the fun for free.

Philip Mirowski (one of the historians that Kate Aronoff cites) has shown that the function of “open-source” software as a back door to providing tech companies with free labor is, as the programmers would say, a feature, not a bug. He observes that Jimmy Wales, the founder of Wikipedia, the paradigm case of “open” production, has strong right-wing libertarian views. Wales, in fact, credits the idea for Wikipedia to his reading of an essay by the neoliberal economic philosopher Friedrich Hayek called “The Use of Knowledge in Society.” In that essay Hayek argues (a) that “planning” (his catch-all slur for democratic control over the economy) is a practical impossibility, because (b) “the market” is an unparalleled information processor, aggregating knowledge dispersed locally in a way that no individual or group attempting to take a “bird’s eye” view of society could ever rival. Wikipedia’s founding premise, then, is the indispensable economic significance of decentralized, minimally regulated institutions for aggregating all the knowledge that isolated individuals would otherwise keep to themselves.

The result, as Mirowski observes, is that all the uncompensated labor-hours of Wikipedia editors make search engine companies like Google billions: Wikipedia editors, by citing their sources through links to external webpages, provide Google’s ranking algorithm with a crucial aid for assessing the reliability of different sites, and Google’s ability to provide a link to Wikipedia near the top of practically every search result enormously enhances Google’s own usefulness and reliability as an information source. It’s a more complex version of what we have all become hyper-attuned to with sites like Facebook and Twitter: our personal data, willingly surrendered for free, has become one of the most lucrative commodities on the planet. With Facebook there is no productive dialectic between science and politics, only a Blob-like monster growing and consuming everything in its path.

It is worth noting that the “computational” paradigm, at least when extended to the domain of the social sciences, helps to naturalize precisely this mode of economic organization. The favorite object of computational scientists is the “complex system,” one of Hayek’s own favorite concepts for characterizing his understanding of markets. When all you have is a hammer, everything looks like a nail, and so when computational scientists tackle societies or economies they tend to treat them like folding proteins or resonating crystal structures: reducible to the dynamic (even chaotic) interaction between individual “particles,” unaffected by history or power structures. Once again we are stuck at the level of flat description – unable to critique, predict, probe deep structures, or indeed say much of anything of political import. Somers quotes Wolfram: “Pick any field X, from archeology to zoology. There either is now a ‘computational X’ or there soon will be. And it’s widely viewed as the future of the field.” If that is true, it will be a major loss for many disciplines, and for society as a whole.
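To make the point concrete, here is a minimal sketch of the kind of “complex system” model I have in mind (a toy example of my own, not taken from Somers or Wolfram): identical agents swap units of wealth at random, like colliding particles. Stark inequality emerges from the dynamics alone, and the model can say nothing about how that inequality arose historically or who holds power, because those questions were abstracted away at the outset.

```python
# Toy "complex system" model of an economy: agents as interacting particles.
# Everyone starts identical; each round one randomly chosen agent hands a
# unit of wealth to another. History, institutions, and power relations
# appear nowhere in the model -- which is precisely the point.

import random

def exchange_model(n_agents: int = 1000, n_rounds: int = 100_000) -> list:
    """Run the random-exchange dynamics and return each agent's final wealth."""
    wealth = [10] * n_agents
    for _ in range(n_rounds):
        giver, receiver = random.sample(range(n_agents), 2)
        if wealth[giver] > 0:
            wealth[giver] -= 1
            wealth[receiver] += 1
    return wealth

if __name__ == "__main__":
    random.seed(0)
    final = sorted(exchange_model(), reverse=True)
    top_decile_share = sum(final[: len(final) // 10]) / sum(final)
    print(f"share of wealth held by the top 10% of agents: {top_decile_share:.2f}")
```

The output is a single inequality statistic; everything the model leaves out is exactly what a historian or political economist would want to know.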

Unfortunately, the preference of many science studies scholars for flat description has restricted their ability to critique the developments that Somers identifies. And in some cases they’ve given it their explicit support. Helga Nowotny, for instance, a major figure in the European science studies community, issued precisely such an appreciation in her 2008 book Insatiable Curiosity, endorsed by other science studies luminaries such as Sheila Jasanoff. There Nowotny observed that “science increasingly counts on private and privatized means,” but argued that this development and calls for the “democratization” of science “are only seemingly opposites” (p. 22). Commodified science is also accountable science, ostensibly disciplined by consumers through the imperatives of the market. Furthermore, “research conducted by the sciences of complexity and chaos, self-organization, and networks” — computational science, in other words — can help to puncture the “illusory dream” of “susceptibility to planning” (p. 108) by redirecting attention to the uncertainties produced by the dependence of economic and social processes on “a multiplicity of subjective viewpoints and sites” (p. 118). Hayek himself could hardly have put it better.

Analysts of science — academics, but also journalists like Somers — don’t have to settle for this kind of complacency. There are alternatives. Feminist philosophy of science, for instance, has produced a host of important critical insights over the last several decades precisely by insisting on the possibility of subjecting science to normative evaluation. My account of a dialectical relationship between science and politics owes much to the work of Helen Longino, for instance. Longino has argued that when scientists don’t recognize the political horizon within which their work operates — when, in other words, they don’t subject their implicit political choices to critical scrutiny — they tend to just regurgitate the values of their surrounding societies (patriarchal as they often are) in naturalized garb. But when scientists do attain some critical distance, and make different kinds of political choices, their work can help provide a foundation for egalitarian political movements elsewhere in society.

Because, as feminists like Longino, Donna Haraway, and Sandra Harding have reminded us, scientists are not disembodied thinkers but always living and working within specific material contexts, it is also important to think about how to institutionalize such critical practice. It should go without saying that Mathematica (or Jupyter) computational notebooks are not up to the job. Old-fashioned sites of discursive communication (including the dreaded journal) may yet have a role to play. I would also, building on what I wrote last time, argue for the importance of academic unionization and other movements that confront in a particular way the subordination of knowledge production to the accumulation of private property.

This, then, is what it really looks like to “democratize” science: not the transformation of science into a commodity, but the creation of institutions that force scientists to critically acknowledge their dialectical relationship with politics, and the rights and responsibilities that ought to come with it. It is precisely because science is so important in democratic societies, because so many pressing issues of public concern are simultaneously scientific and political, that we have to challenge the claims of those who would reduce science to a particular form of political centrism or an esoteric way of making money — to insist that science can be more, and better, than that.


Tyranny of the minority

Harvard’s closing argument against graduate student unionization came to students via email this afternoon, and it was… pretty embarrassing:

[Screenshot of the email from Paul Curran]

The logical and factual errors here alone are astounding. Of course the union wants the bargaining unit to be as big as possible! That’s the point of collective bargaining. If only a small segment of the workforce is unionized, it makes it harder to approach the bargaining table with any real power – to hold out the threat of a shop-wide work stoppage, for instance. That’s why so-called “right-to-work” laws have been so damaging to unions around the country. Curran is also playing on a misconception about unionization that has always struck me as bizarre: that a union contract wouldn’t be able to differentiate between different kinds of workers (TAs vs. RAs, humanities vs. science students, et cetera). This idea is so patently ridiculous that I have a hard time understanding how anyone could express it in good faith. (And on a more depressing note, it shows how little firsthand experience with unionized workplaces many people have these days.)

There’s also the fact that any given dystopian contract scenario that a majority of workers could allegedly impose on their colleagues is also, tautologically, an arrangement that the employer (here the university) could unilaterally impose on them without a union, i.e., in the status quo. The alternative to a union isn’t codified protections for unspecified put-upon minority classes of workers. It is a workplace where the employer can do virtually whatever they want with no representation for anyone.

And that brings us to what I really want to talk about, which is the question of democracy. Curran’s argument here is a shockingly blunt appeal to anti-democratic values. It’s right there in the subject line: unions are bad because they operate on the principle of majority rule. Curran invokes a venerable tradition of American fears about the “tyranny of the majority,” demanding constraints on democracy, and the power of collective action, out of professed concern for imperiled minorities. But this case illustrates an important historical truth: the “minority” in question that restrictions on democracy are supposed to protect is a minority of elites. The freedom that democracy allegedly imperils is the freedom of bosses, of the proverbial one percent (or the .1 percent, or the eight men who own as much wealth as the bottom half of the world’s population), to exercise power without constraint.

This tradition originated with the Anti-Federalists, the coalition, led by a group of slaveholding Virginians like Patrick Henry, that fought against ratification of the Constitution. In a terrific piece of irony, as the historian Garry Wills has shown, many of today’s “constitutional conservatives” actually channel the ideology of the Constitution’s early opponents. It was the Anti-Federalists who insisted on the importance of a system of checks and balances between “co-equal” branches to slow down the pace of governmental action, and on the necessity of delegating as much governmental responsibility as possible to “local” communities. The agenda here, as with their calls for the right to local militias, was the defense of their private interests — which meant slaveholding — against the possible depredations of a national majority that didn’t understand their peculiar way of life.

After ratification, the prophet of “tyranny of the majority” fears was the South Carolina Senator (and former U.S. Vice President) John C. Calhoun — another ardent partisan of slavery. Calhoun was similarly terrified that “majority rule” meant that a distant national government would come to take away Southerners’ slaves and destroy a system they didn’t understand. In a famous declaration that ought to give pause to any contemporary defenders of bosses’ paternalistic virtues, Calhoun insisted that slavery was not a “necessary evil” but a “positive good.” Free from the constraints of meddlesome majorities, local plantations were the site of an interdependent, mutually beneficial relationship between slaves and the masters who took care of them and supposedly helped to educate and “civilize” them.

The most notorious implementation of Calhoun’s philosophy was in Senator Stephen A. Douglas’s “popular sovereignty” “compromise” on the question of slavery in Kansas in the 1850s. The label was quite deceptive. The idea was that rather than letting a national majority decide, through their centralized representative institutions, whether slavery ought to be permitted in the new territories, the “slavery question” ought to be decided on a local, territory-by-territory level — by the people who supposedly “knew best.” Of course, in “Bleeding Kansas,” an important premonition of the Civil War, what this meant in practice was that outside forces descended on the territory to attempt to sway the “local” result, producing years of deadly conflict between supporters and opponents of slavery.

As W.E.B. Du Bois would emphasize in his magisterial 1935 history of the period, Reconstruction showed the germ of truth in Calhoun’s odious views. The quest to stamp out slavery in the South after formal abolition was practically equivalent to the effort to create genuine democracy in former slave states, expanding political participation to previously disenfranchised black Americans. The fear of democracy reared its head again, in the form of the slanderous mythology of corrupt black politicians that many students still imbibe in their Reconstruction units in high school history. “The center of the corruption charge,” as Du Bois put it, “was in fact that poor men were ruling and taxing rich men.”

In a development that would not have surprised Calhoun (who died in 1850), the backing of Northern power was crucial to the limited success of the Reconstruction effort. As Du Bois first observed, the Reconstruction effort collapsed only when Northern industrialists realized that their interest in expanding the size of the workforce competing for a wage in their factories was outweighed by the potential financial benefits of recreating the plantation system that had supplied them with such cheap cotton.

Beginning in the New Deal era, and then especially in the early years of the Cold War, when the great national fear was communist totalitarianism, the tyranny-of-the-majority concept made a comeback among conservative and moderate opponents of the ambitious reforms of Roosevelt and his successors. As many historians have noted, this was when James Madison’s Federalist no. 10 essay first began to be treated as a canonical document of the American political tradition. Developed by political scientists in the school of thought typically called “interest group pluralism,” the idea here was that the genius of the American system was the way that it allowed the many competing interest groups that composed the social tapestry to express themselves without rising to a position of dominance or imposing their agenda on the rest of the country. Popular writers like Richard Cornuelle, a fellow at the conservative Hoover Institution, argued that “voluntary associations” and other forms of private activity were sufficient to solve all social problems as long as centralized, majoritarian government got out of the way. (Cornuelle’s biggest idea was to replace government support for higher education with expanded access to private student loans — the policy that has saddled so many young people today with insurmountable levels of debt.)

The problem was that government never did seem to get out of the way. And for a new and increasingly influential generation of conservative thinkers, democratic majorities were the reason why. Austrian émigrés like Friedrich Hayek and Joseph Schumpeter argued in the 1940s that democracy would inevitably be hostile towards free-market capitalism — because politicians could only win over majorities by promising voters to do things for them, not to stand aside and let markets solve their problems (and because the intellectuals who gave politicians their ideas tended to be anti-capitalist, but that’s another story). Schumpeter approached this situation with pessimistic resignation. Hayek argued for constitutional changes to restrict the power of democratic majorities.

Hayek’s ideas found admirers inside and outside of the university. As the historian Nancy MacLean has shown, perhaps the most important academic/businessman right-wing power couple in the wake of Hayek’s critique was James Buchanan and Charles Koch. Buchanan’s first patrons were the avatars of Calhoun in Virginia politics who proposed a program of “massive resistance” to federally-mandated school desegregation. They found Buchanan’s defense of a more-or-less unlimited right to free association to be a plausible intellectual foundation for their effort to recreate Jim Crow in private schools. But it was in the 1980s that Buchanan ascended to his position of greatest influence, after Koch helped to fund the creation of a virtual fiefdom for him at George Mason University, and to spread his ideas throughout the elite far-right network he was in the process of assembling.

Buchanan’s “public choice” theory helped to formalize the claims of Hayek and Schumpeter about democracy and capitalism. His starting point was the assertion that economists ought to model political activity exactly the same way they model activity in markets: as the undirected outcome of interactions between agents pursuing their own interests. Buchanan thought that in a well-functioning market system, the “interests” of agents included commitments to traditional moral values and the spirit of reciprocal cooperation, making free trade almost always beneficial to all parties involved. But he thought that political actors were usually motivated by the pursuit of personal power — the kind of naked selfishness often ascribed to economic agents. Instead of worrying about “market failure,” he wrote, we should be afraid of “government failure”: when, in the name of trying to help solve some public problem, political actors actually push through inefficient policies that benefit them personally. In a democracy, majorities would always be screwing over minorities for their own gain.

In one of the cruelest ironies in recent American history, it is Charles Koch’s war on American democracy, waged in part by citing and popularizing Buchanan’s ideas, that has done more than anything to make the public-choice nightmare a reality. Buchanan was, of course, correct that politicians are capable of prioritizing their personal self-interest above all else. But in recent history the problem has not been popular anti-capitalist reformers. It has been the political actors, in state legislatures, in Washington, and in the courts at all levels, who have taken Koch money, attended conferences where public-choice ideas are presented as gospel, or even gone to college or law school in programs shaped by right-wing funding, and who have enacted overtly anti-democratic policies — from voter ID laws to the post-Citizens United dismantling of campaign finance reforms — that have devastated American society but have tremendously enriched themselves and their backers.

In a 2009 essay for the Koch-backed Cato Institute, one of those billionaire conservative donors, the major Trump backer Peter Thiel, wrote that he “no longer believe[s] that freedom and democracy are compatible.” The problem, in his view, was simple: “since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women — two constituencies that are notoriously tough for libertarians — have rendered the notion of ‘capitalist democracy’ into an oxymoron.” Refreshing, if alarming, frankness from this founding Facebook board member and close associate of the President of the United States.

I have no way of knowing for sure, but my hunch is that Paul Curran, and many others who oppose graduate student unionization at Harvard and elsewhere, would be repulsed by Thiel’s remarks. I doubt that they would recognize much of themselves in the history that I’ve told here. But like it or not, this is the rhetorical and ideological tradition that they’re exploiting. The fear of the “tyranny of the majority” — the government coming to take your slaves away and make you send your kids to school with the wrong kind of people — is what the literary critic Fredric Jameson might call the “political unconscious” of their argumentation. Capitalism doesn’t care about your good intentions. The university has a material stake in preventing unionization. The individual preferences of specific administrators are a feeble force compared to the overwhelming structural conflict between democracy and the dictatorship of property. As Du Bois put it in Black Reconstruction: “There can be no compromise.”

Kevin Williamson’s Useful Idiocy

For those not keeping up with the cutting edge of elite media drama, National Review writer Kevin D. Williamson was recently hired by The Atlantic to provide conservative “balance” to their opinion coverage — and then promptly un-hired when it became clear that a series of old tweets calling for women who get abortions to be executed by hanging actually reflected a deeply held conviction of his.

Good riddance. No one who wants to execute 25 to 30 percent of American women deserves any platform for their views, period. Far too much ink has already been spilled about the politics of opinion page hiring and firing decisions, and I don’t have anything insightful to add there. But Williamson’s fatal hot take has been black-boxed in these discussions as just a generic “insane” or “offensive” opinion, and that’s what I want to re-examine.

Because treating Williamson simply as a lunatic neglects the far more troubling truth that his opinion – so unpalatable to mainstream ears – is the straightforward logical consequence of two extremely widespread American opinions: that the death penalty is the appropriate punishment for heinous crimes, and that abortion is literally murder. L’affaire Williamson is a useful illustration of an important fact about American politics: that most people, “pro-life” and “pro-choice” alike, don’t actually take the anti-abortion position very seriously, because taking it seriously leads inexorably to conclusions that the vast majority of people find abhorrent.

A couple years ago I wrote after a terrorist attack on an abortion provider in Colorado that “pro-lifers” distancing themselves from the terrorist were kidding themselves – that if you actually profess that abortion is the taking of a human life, it is extremely difficult to condemn violence against providers:

“In general, the use of violence to avert enormous loss of life is praised in our culture. We produce hagiographic movies about would-be-Hitler-assassins. We celebrated in the streets when Osama bin Laden was killed, and then produced a movie about his assassins, too. But if the statements of Ted Cruz and his fellow anti-abortion advocates are taken at face value, abortion has in the last forty years claimed far more lives than Hitler and bin Laden ever did, combined. Every day, there are approximately 3700 abortions in America. That’s the equivalent of seven 9/11s each week, if you are truly committed to the view that abortion is murder. We killed something like 40,000 militants in Afghanistan after just one 9/11. Why, then, would we condemn the use of violence against abortion providers?”

The conservative defenses of Williamson in the last 24 hours have often come close to giving the game away.

David French’s defense of Williamson at National Review:

“I’m a moderate, you see. If abortion is ever criminalized in this nation, I think only the abortionist (and not the mother) should face murder charges for poisoning, crushing, or dismembering a living child.”

Ben Domenech at The Federalist:

“In the case of Williamson, even someone who literally wrote a book titled The Case Against Donald Trump was unacceptable for The Atlantic because wrongthink about what ought to be the legal ramifications for tearing an unborn child apart – ramifications that ANY pro-lifer of any seriousness has wrestled with in conversation. Serious ethical and legal ramifications for destroying the unborn or the infirm are debated in philosophy classes every day – Williamson’s mistake, as an adopted son born to an unwed teenage mother, was being too honest about his belief that what he sees as the daily murder of infants should, in a more just society, have severe legal consequences.”

A separate Federalist article’s headline announces: “Kevin Williamson Fired From The Atlantic For Opposing Abortion.” What I want to suggest is that this at-first-glance tendentious description is actually (perhaps unintentionally) completely correct. If you are anti-abortion and take that commitment at all seriously, Williamson’s position must seem reasonable.

I think that this fact has important implications. First, it means that there is no sound justification for the all-too-frequent practice of treating abortion opponents with kid gloves in the media. Opposition to abortion is not just an abstract philosophical question that is fundamentally a private judgment call. It has material implications. Its logical consequence is, again, that a quarter to a third of American women are murderers and should be treated however you think murderers ought to be treated — which, again, in America, most commonly means with state-sanctioned or private violence. It is important for principled journalists to treat the “pro-life” argument for what it really is: an intellectual rationalization for violence against women.

But second, as I wrote in 2015, it means that many people who think of themselves as “pro-life” probably don’t hold that conviction very deeply. One consequence of the typical way we talk about abortion is that it comes to seem like a “debate” that is completely irresolvable, on which a public consensus could never be reached because both sides are entrenched in incommensurable value judgments. But this is not true. Polling suggests that it is a vanishingly small minority of Americans who think that abortion should be illegal in every conceivable circumstance and that women who get abortions should be treated as murderers. But that is actually the only conclusion consistent with a serious commitment to the abortion-is-literally-the-taking-of-a-human-life position. I think that the facts suggest that most “pro-lifers” are really more akin to “cultural Jews” or what George Santayana called “aesthetic Catholics.” That still presents a significant obstacle to any kind of consensus, but it is a profoundly different understanding of political culture than the common depiction of two serious, reasonable positions locked in eternal stalemate.

Pro-choice advocates shouldn’t apologize for their beliefs or worry about offending people who disagree with them. It is okay not only to think “personally” that abortion is morally permissible, but also to think that people who claim that abortion is murder are objectively committed to a repugnant position. And it is okay for us advocates of reproductive justice to hold out hope that this is a winnable fight, and that some day we will live in a more just, humane society and public culture.

The promised land

Somehow it seemed both shocking and inevitable.

After the assassination 50 years ago today of Dr. Martin Luther King, Jr., amidst the outpouring of grief and horror, amidst the nationwide rioting (King: “the language of the unheard”), amidst the disorientation, fear, and uncertainty, in moments of quiet there were some who voiced what was so uncanny about the whole thing: he knew it was going to happen.

How else to explain that haunting, magnificent speech he had delivered the previous day in a Memphis church? “Like anybody, I would like to live a long life,” King mused ominously as he concluded. “Longevity has its place. But I’m not concerned about that now.” In a turn at once prophetic and unsettling, he told his audience that, like Moses, God had allowed him to go up to the mountaintop. “I’ve seen the Promised Land. I may not get there with you. But I want you to know tonight, that we, as a people, will get to the promised land.”

The last years of his life saw an increasingly disillusioned and radical King. The Civil Rights Movement’s legislative victories of 1964 and 1965, along with his 1964 Nobel Peace Prize, cemented King’s celebrity, his heroic aura. But as he watched Jim Crow finally buckle towards collapse in the South, King was left with a bad taste in his mouth. This was what, incessantly for a decade, he had fought for, bled for, and compromised for, painfully and sometimes (as in the public effacement of his mentor Bayard Rustin, the gay ex-Communist who organized the 1963 March on Washington) shamefully. And yet the approach of black Americans towards formal legal equality seemed to have left de facto racism — housing segregation in Northern cities, enduring inequality in the provision of public goods in the South, impoverishment everywhere — untouched. At a time when vigorous government action was necessary to shore up social services, combat unemployment, and ensure access to healthcare, state resources were being funneled into a misguided, even imperialist war in Vietnam. As his criticism of the war and experimentation with anti-capitalist rhetoric began to alienate him from former allies in the liberal establishment, King felt embattled, embittered, and exhausted.

King’s frank acknowledgement of finitude and defeat in “I’ve Been to the Mountaintop,” then, was not simple clairvoyance. It was the outcome of his sustained efforts to grapple with the suddenly inescapable awareness that, one way or another, sooner or later, he would die with his life’s work incomplete, with his dream still not fully realized. But the speech also reflected his conclusion: that the fact that we only ever experience justice as an ever-receding horizon, never fully manifest in the mortal world around us, does not make it any less worth fighting for; that our experience of the brokenness of our history is precisely what brings a better world into view; that the promised land is best glimpsed from outside: from the mountaintop.

There were other reasons, however, that King’s death felt, in retrospect, so inevitable, so cosmically, horrifyingly predetermined. For one thing, it happened in April.

King had reflected on and wrestled with the brokenness of American history as profoundly and as publicly as anyone in our nation’s past. He lived his career, figuratively, and on August 28, 1963 quite literally, in Abraham Lincoln’s shadow. Lincoln’s rhetoric was constantly, almost obsessively woven into King’s oratory: King too struggled to make scripture speak secularly, to find reason to hope without denying the reality of contemporary catastrophe, to depict America’s founding vision as both failed in practice and ultimately redeemable. And King’s movement, of course, was made necessary by the fact that the “Great Emancipator” himself, for reasons out of his control and for reasons for which he can be faulted, had left American slaves and their descendants still so profoundly unfree.

And so of course King would, like Lincoln, die by assassination, and die in April, that fateful month in which the Civil War both began and ended. If King really did know that his death was imminent, perhaps it was because of how strongly he felt the past, as Marx put it, weighing like a nightmare on the brains of the living, because of how keenly he understood the way that Americans’ distinctive desperation to leap clean of the past into the glorious future has its roots in the way the horrors of their history continue to haunt the present, like the ghost in Toni Morrison’s Beloved.

The economist Brad DeLong, for instance, wrote a blog post the other day summarily dismissing a wave of recent historical scholarship emphasizing the centrality of American slavery to the emergence of modern capitalism. DeLong’s impulse, like that of so many Americans since the collapse of Reconstruction in the late nineteenth century, is to entomb slavery behind an impenetrable stone separating it from the enlightened present: “people alive today are not principal profiteers from the peculiar institution of plantation slavery.” I considered writing an extended response to DeLong, pointing out, for instance, his flawed assumption that plantation owners alone, rather than slave traders, land speculators, and financiers, were the chief economic beneficiaries of slavery; his truly bizarre claim that since there were four other major economic sectors besides cotton textiles in the early 19th century, slavery can therefore mathematically amount to no more than 1/5th of the explanation of the Industrial Revolution; and his utter neglect of the role of slavery in shaping the American institutions, and even the territorial map of the country, that we take for granted today. But I decided that a full-length refutation would be, on some level, beside the point. The enduring weight of slavery is something that you have to feel viscerally first and foremost, as King did.

You have to feel in your bones that the things we try to seal up in the tombs of the past have a tendency not to stay there. That is the great lesson of this month of April, the month that is most frequently, as it was in 1968 and 1865, home to Easter and Passover, two important religious festivals of oppression, liberation, and memory. King was not shot, as Lincoln was, on Good Friday, but the proximity has not been missed: the wave of urban revolt that ensued has occasionally been called the Holy Week Riots. Perhaps that is one reason why King’s death became, almost immediately, a martyrdom, cementing the transformation of a man — fraught with complications, imperfections, mortal limitations — into something more, as he now exists in memory.

Easter and Passover alike are remembering festivals, narrative festivals, where the telling and retelling of the past is understood as a transformative practice. The word in the Christian tradition for this concept is anamnesis, the word Jesus uses when he instructs his disciples to eat the bread (unleavened for Passover) “in memory of me.” The twentieth-century Dutch theologian Edward Schillebeeckx, inspired by the work of the Jewish neo-Marxist philosophers Theodor Adorno and Walter Benjamin, wrote that the anamnesis of the Crucifixion and Resurrection exemplified what he called more generally “negative experiences of contrast.” In negative experiences of contrast, we are in the position of what Benjamin imagined as “the angel of history.” Regarding the “wreckage upon wreckage” piled up in front of us, we “would like to stay, awaken the dead, and make whole what has been smashed,” though that aspiration remains ever frustrated. But it is precisely this awareness of contrast between the pain, loss, and defeat around us and our sense of justice and righteousness that demonstrates the validity of those ideals, and our conviction that change is necessary — like King’s glimpse of the promised land from the lonely mountaintop in the desert.

King, as a Christian, had faith that the violent death that Jesus suffered at the hands of an oppressive regime was not — could never be — the last word on his mission. What I think King understood on the eve of his death was that the end of his life in disappointment and tragedy would similarly fail to finish his struggle. It would expose anew the violence and cruelty that once drove him to act. It would destabilize any impulse to complacency, to satisfaction with partial victory. This is our task now, as we engage once more in acts of critical remembrance of King and his death: to remind those who sanitize his enduring challenge, who distort him into a symbol of moderation and “colorblindness,” who try to keep him sealed up in his own tomb, that he did not leave us on a note of triumph or self-satisfaction, but with a reminder: There is still work to do.

Reality check on guns

I have been surprised and disturbed over the last few days to see a lot of intelligent, left-leaning people on my social networks expressing unqualified hostility towards the post-Parkland movement for gun control. The views I’ve seen, taken together, range from mere strategic errors at best to wishful thinking bordering on the delusional at worst. So much of this discourse has been targeted at “mainstream liberals,” so I want to explain why, from an anti-police, anti-capitalist perspective, I still think it is profoundly mistaken.

Most arguments I’ve seen fall into three broad categories, ordered from most to least sensible:

1. It reflects poorly on our society that only a movement led by and in response to the deaths of upper-middle class, mostly white kids has captured public attention, while poor black people and other people of color die daily from gun violence, often perpetrated by police officers, to widespread apathy.

2. Guns are necessary for victims of police violence to defend themselves. Gun control laws will just give the criminal justice system one more excuse to surveil, harass, kill, arrest, and incarcerate black people.

3. Arming the working class is a prerequisite to the revolutionary insurrection that is necessary to bring about real social change. Gun control is just an excuse for the ruling class to impose docility on workers and squash any nascent class consciousness. There’s a group called the Socialist Rifle Association that seems to be a focal point for this vein of argumentation on social media.

In order, then.

(1) is clearly correct, as far as it goes. It is a testament to American racism that school shootings are the only real locus of sustained mainstream outrage about gun violence. The conclusion that gun control legislation, or the current movement in its favor, is actively bad just does not follow from this premise at all. It is crucial here not to slip into a reduction of politics to discourse and representation. The goal shouldn’t be just to get the right kinds of people saying the right things in the media, it should be to create real change, and less violent death. As I see it, the obvious implication of this observation is that the movement should be expanded to encompass police demilitarization and other forms of political action against state-sponsored gun violence. And from the footage that I saw of the marches across the country last weekend, the movement is already tending in this direction.

So (2) is the only way to translate the real insight of (1) into an actual argument against gun control. Here’s where a historical perspective is important: I think there’s good reason to believe that, rather than the mutual exclusivity posited here, there’s actually a deep inner resonance between the fight for gun control and the fight against police violence.

Both the Second Amendment and modern American policing (at least in the South) have their origins in attempts to suppress resistance to slavery. Slaveholding Anti-Federalists were terrified that the arrogation of the legal ability to bear arms to the centralized federal armed forces would leave them defenseless against slave uprisings — hence their desire for those “militias” in the amendment text. The “slave patrols” that enforced the brutal rule of the Southern plantocracy evolved over time, as de jure slavery gave way to de facto apartheid, into municipal police forces. Ironically, the leftist argument here tends to grant the NRA’s ahistorical reframing of the Second Amendment to focus on the individual right to bear arms. But once the amendment is conceived of properly, it becomes clear that a radical challenge to the Second Amendment is also a radical challenge to the logic of American policing. Once more, the real conclusion here should be an expansion of our understanding of gun control — not that to be anti-gun is somehow to be intrinsically pro-cop.

But then there’s the practical question. Will gun control just result in more incarceration, but not less violence? This is a valid concern, extending one of the most powerful arguments against drug prohibition (for instance) to guns. What I have been arguing up to this point is that strengthened police forces are not a necessary consequence (and that gun control is a more plausible candidate to have an elective affinity with police reform than drug control). Which policies may actually emerge out of the present moment is difficult to predict, but especially given signs of grassroots vigilance from the left, it seems far too early to despair. Other nations — albeit without the U.S.’s distinctive history of racism — have succeeded in nearly eradicating gun violence without swelling prison populations, so it is manifestly possible. And the general policy strategy of this current movement seems to me to be supply-side rather than demand-side: more about cracking down on gun manufacturers and distributors than about expanding “on the street” surveillance.

What we have had up until this point is a discussion about the proper contours of the gun control movement, a discussion that is not just important but necessary. It is absolutely vital to seek to expand the push against guns to encompass political action on police violence. And policy proposals should, of course, be analyzed through a lens that takes into account American structural racism and the consequences of any law for marginalized populations.

Still, we do not yet have any real argument for why gun control is per se bad, only for why there is a scenario in which it might have limited or unintended consequences worth reckoning with. To substantiate the claim, which I have seen, that gun control is a stealth program to suppress resistance, that it is by its very nature a power grab by the ruling class, we will need something more — like the argument about self-defense in (2), which shades over into the argument of (3) about revolution. Granting for the sake of argument that gun control legislation is enacted in the absence of any real police reform, will gun control leave workers or black people powerless against the depredations of the racist, capitalist state?

Perhaps. But the question I want to ask the people making this argument is: what exactly do you think the status quo is like? If widespread gun ownership were going to frighten police away from abuse, or empower the working class to move towards revolution, it would have happened already. Americans are more heavily armed than any populace has ever been, to absolutely no effect. Wake up and look around you: American guns have left innumerable corpses pointlessly strewn across the country, and Jeff Sessions is the attorney general of the United States. The carceral state does not look likely to collapse spontaneously any time soon.

One of the great lessons of history is how frequently people think they are the first to come up with an idea that many, many people have in fact had before. Do you really think no one has tried to use guns to break the power of the U.S. government before? From Shays’ Rebellion (1786) to the Whiskey Rebellion (1791) to the Harpers Ferry raid (1859) to the Weathermen’s Days of Rage (1969) to the far-right militia movement of the 1990s (including the Oklahoma City bombing), plenty of earlier American insurrectionaries have tried to foment widespread revolution, each time with exactly the same result: their movement was crushed and innocents died. And the political motivations of armed revolt are just as likely to be wildly reactionary as emancipatory. It is astonishing that the point needs to be belabored, but there has, of course, been exactly one armed revolt against the federal government in American history that blossomed into a full-fledged war, and it was fought, once more, to preserve slavery.

The leftist argument in favor of guns rests less on any principled vision of political strategy or sophisticated historical or social analysis than on an aestheticized vision of violence, a fetishism of the weapons themselves, and a romantic obsession with the local, the organic, the spontaneous, the unregulated. Such daydreams, with all their masculinist and fascistic overtones, may satisfy the longings for authentic heroism of disaffected pseudo-radicals who have never had their lives touched by the realities of gun violence, but they will not staunch the bleeding, and they do a profound disservice to the memories of victims of deadly oppression.

There is a word for the ideology that expects the unfettered circulation of fetishized commodities — detached from the actual relations of their production — to spontaneously disrupt the status quo and unleash pent-up transformative potential. It’s called libertarianism. And the right-wing libertarians in the board rooms of American weapons manufacturers will be laughing at your revolution all the way to the bank.

Working to death

In the last few weeks, NBA All-Stars DeMar DeRozan and Kevin Love have grabbed headlines by disclosing their ongoing struggles with mental illness. DeRozan, a guard for the Toronto Raptors, spoke to the Toronto Sun on February 25th to clarify a cryptic tweet that, as he confirmed, was a reference to the depression that he’s dealt with since his youth. And on March 6th, Love, a forward for the Cleveland Cavaliers, published a first-person piece in the Players’ Tribune discussing his experience with anxiety and in-game panic attacks.

DeRozan’s and Love’s accounts had their differences: Love’s was more narrative and detailed; DeRozan’s more abstract and distilled. But in one respect they were identical. Both identified a compulsion to “throw their life,” in the Sun’s phrase, into their work, choking off opportunities for personal reflection and healing. In order to get help, they had to find a way to think of themselves as people, not just as basketball players. “What you do for a living doesn’t have to define who you are,” Love concluded.  

It is easy — so easy, in fact, that it is a likely reason for the traditional reticence of professional athletes on these matters — to read stories about the struggles of the rich and famous with curiosity and even empathy but to doubt their relevance to the challenges confronted by (to use my favorite American euphemism) the less advantaged. What is most interesting about DeRozan’s and Love’s accounts, and what makes them potentially valuable beyond their already worthwhile destigmatizing function, is that the particular syndrome they highlight — the colonization of personal life by work — is not a rarefied concern. More and more people are, in fact, “defined” in one way or another by their work, and not because of the predictable cultural pathologies of prestigious occupations but because of sheer material necessity.

This is the rare phenomenon most visible in its warped funhouse-mirror reflection. Consider a now-infamous recent ad campaign from the online freelance marketplace Fiverr. We learn that the frazzled-looking woman staring blankly back at us is a “doer” on account of her willingness to skip lunch and never sleep. Like love, she endures all things, in order to keep pace on the freelance treadmill — to “follow through on her follow through.” Someone genuinely thought this would be uplifting.

The “doer” is a useful if unintentional mascot for a constellation of changes in the American economy that began in the 1970s and have accelerated since the 1990s. Corporations have downsized in the name of “flexibility.” They have “outsourced” a medley of tasks once done in-house through a labyrinth of sub-contracting, “offshoring” plenty of jobs with the help of free trade agreements and crushing the power of labor unions in the process. Productivity has increased while wages have stagnated; a “casualized” workforce must increasingly take on backbreaking hours at multiple jobs in order to compensate for declines in benefits and the perpetual uncertainty of at-will employment. Almost a quarter of people who work part time do not do so by choice. The elimination of middle-management has created an anti-hierarchical illusion of cooperative “teamwork” while serving mainly to concentrate power and earnings in the C-suite.

A variety of terms have proliferated to refer to this transformation, all imperfect. “The gig economy” captures one important facet, but ultimately barely scrapes the surface of something much more complex. Other similar formulations — the knowledge economy, the service economy, and so on — are simplistic to the point of being misleading. In the late 1990s, the French sociologists Luc Boltanski and Eve Chiapello wrote, with reference to Max Weber’s Protestant Ethic, of a “new spirit of capitalism,” in a wide-ranging and often profound contribution that has nonetheless been justly criticized for blurring the boundary between management discourse and shop-floor reality. As a historian, I personally think that the most useful label, because of its emphasis on change over time, is “post-Fordism,” popularized in the late 1980s and early 1990s by the geographers David Harvey and Ash Amin.

“Fordism,” brought to prominence in the 1930s by the Italian neo-Marxist Antonio Gramsci, is a term that characterizes the dominant productive mode in American and European capitalism around the middle of the twentieth century, particularly at the height (which Gramsci didn’t live to witness) of the postwar Keynesian welfare state. The standardized factory assembly line was the paradigm for what work meant. People (mostly men) would go to work from 9 to 5, spend the day doing routine tasks, and then return to their families in the evenings and on weekends, with whom they would spend their living wage on newly plentiful consumer goods.

The great advantage of Fordism was security. Fordist workers didn’t wonder where their next paycheck was coming from, or the one after that: they expected to do more or less the same thing their whole working lives, until retirement. Union-contract workers couldn’t be fired without just cause. By the mid-1960s (though always to a much lesser extent in the United States than in Europe), workers who for whatever reason couldn’t access the benefits of the Fordist workplace could expect decent assistance from the government.

And with security came precisely that good that seems so elusive today: work-life balance. It was possible to leave work at work, and develop a non-economic personal life. My grandfather, for example, perpetually teetering on the financial brink as he tried to support seven children on a salesman’s salary, nonetheless won prizes in local competitions for his painting and photography, which brought him enduring pride. Personal life didn’t necessarily have to be private, either. It could be, and in fact in the late 1950s and throughout the 1960s frequently was, political, as social movements demanding change blossomed on an unprecedented scale. Space to think, to reflect, to talk, and to organize was space to rebel.

Fordism, to say the least, had many downsides. Its foundation was the patriarchal and heteronormative nuclear family, with untold hours of unpaid household labor expected from wives. The unions that helped secure living wages were often racist, building a system, especially in Northern cities, that protected white workers at the expense of black migrants. And perhaps most obviously, assembly-line work, especially coupled to Taylorist labor management practices, could be dehumanizing, as depicted famously in Charlie Chaplin’s 1936 film Modern Times. These substantial disadvantages were partly why a vision of a post-Fordist future seemed, at one point, quite appealing.

But it is possible to resist the temptation of nostalgia and still question whether the system at which we’ve arrived represents a substantial improvement. The ideal of the familial patriarch has been replaced with the differently but equally masculinist ideal of the heroic entrepreneur, visiting Silicon Valley sex parties when he needs a break from disrupting. Perhaps the only thing worse for workers of color than racist unions is no unions at all; and that’s not to mention the racist mass incarceration that has tracked the emergence of post-Fordism, for a variety of complex reasons explored by scholars like Loïc Wacquant and Bernard Harcourt.

And, as I want to suggest, post-Fordist labor can be just as dehumanizing, in its own way, as the assembly line. The great promise of the new economy — work that you can really put yourself into — has been fulfilled with perverse literalness. “Doers” do indeed put themselves into their work, into their two or three or five jobs, until there is no self left to give, although the work, vampire-like (to employ Marx’s most famous metaphor), keeps on sucking. “The neoliberal urge to privatize everything,” in the phrase of the political scientist Bonnie Honig, has proved remarkably compatible with the emaciation of private existence for much of the workforce. Is it any wonder that a wave of “eliminativist” philosophers in recent decades (Paul and Patricia Churchland, Richard Rorty, Daniel Dennett) has denied the reality of many of the mental phenomena that form the common-sense understanding of consciousness?

The disintegration of the self in an endlessly recombinant world of fleeting connections is powerfully dramatized in Alex Garland’s new film Annihilation. In “the Shimmer,” the terrifying, beautiful mystery region at its heart, Natalie Portman and her scientist-soldier partners experience odd memory gaps, witness bizarre and uncontrollable physical transformations, and confront, as one crew member remarks in a quiet moment, the pull of self-destruction — of annihilation. It spoils very little to say that at the heart of the Shimmer is a yawning black pit, into which Portman’s character, inevitably, must descend.

The character in Annihilation who remarks on the universality of the self-destructive impulse juxtaposes it to the act of suicide, which she says “almost no one” commits. Which is true, in the grand scheme of things, but perhaps misleading. Most people don’t know that the suicide rate in the United States has increased by almost 25 percent since 1999. That is astonishing. It is among the most invisible public health crises of our time. Its causality is surely multifactorial, as will be its solution. But altering our headlong rush into annihilation will require a willingness to go beyond the individualistic language of “self-care,” of “taking time,” of “reaching out,” and to confront the political and economic forces that have helped place the project of selfhood in its present-day jeopardy.

DeRozan and Love have both said that they wanted to share their experience in order to demonstrate the universality of struggles with mental health. Their message is loud and clear: You are not alone. That message is, indeed, the first step. The next step is to start to wonder why.

Good men and good bosses

The Me Too movement was always going to get to this point. From its beginning it was both an attempt to reckon with the exposure of a singularly evil individual in a position of power in a major industry and an attempt to publicly reconceptualize a lot of common experiences in women’s lives as unacceptable and deserving of protest. The problem was that the vast majority of men responsible for inflicting quotidian experiences of exploitation, discomfort, and violation on women are not Weinsteins. Almost no one is a Weinstein. There are horror movie villains who aren’t Weinsteins. “Canceling” monsters was never going to be enough to actually lance the abscess festering beneath the skin.

And so now, with the case of Aziz Ansari, the tension between these two impulses — catching bad guys and reforming everyday life — has reached a breaking point. Many critics have seized on the ordinariness of what Ansari stands accused of having done in order to discredit the movement. Every man has done something like this, they say. It’s wrong to conflate this kind of thing with the actions of a Weinstein or a Cosby.

The uncomfortable truth is that they’re right. Every man has done something like this. Aggressive, coercive, disrespectful sexual behavior on a date? One viral tweet estimated that 75% of adult men have acted similarly at some point, and that is probably an underestimate. Ansari’s character Tom Haverford does things like this constantly on Parks and Rec, to affable chuckling. With Ansari, in other words, we have finally reached a point where we have to move from insisting that “this isn’t normal” to insisting that there is a problem with what is normal, that we need, collectively, to do better than normal.

We have to come to terms with the fact that individual values or personality traits can only do so much in the face of structural incentives to be a certain type of man. We can’t just purge the bad ones. Parenting, media creation, friendship, workplace structure, and so on will all have to change, so that men consistently face consequences when their actions venture onto the spectrum that runs from Ansari at one end to Weinstein at the other. This includes but goes well beyond “education.” Ansari is plenty educated about these matters. What is needed is a non-pathologizing explanation of why he would act the way he did anyway, an explanation consistent with the fact that millions of similarly enlightened men do similar things on a daily basis.

An analogy might make this seem like a less daunting task. Labor relations is often reduced, even on the left, to a matter of the qualities of individual employers, in the same way that there is a persistent tendency in this current moment to reduce gender relations to a matter of the qualities of individual men. This has come up recently with respect to various living-wage campaigns in the U.S. and Canada. “If you can’t pay staff a $15/hr minimum wage AND benefits, you shouldn’t be in business,” one tweeter argued about Tim Hortons. People have similarly expressed bewilderment about the fact that Vox Media, a liberal company, has been reluctant to recognize its employees’ unionization efforts. “It is not the responsibility of your employees to subsidize your shitty business,” another person recently summed up.

But that quite literally is the function of labor under capitalism. Businesses make money by earning more from selling the stuff that people make for them than they pay out to those people. If they paid their employees what they were actually worth to them, they would have no profits, would not be able to expand and diversify or spend money on advertising, and would likely swiftly go out of business. What separates a kind boss from a cruel boss is not the fact of labor exploitation but the enthusiasm with which they pursue it. That is why labor organizations like unions and regulatory laws like the minimum wage are important: they provide an external constraint on the free rein that the logic of capitalism assigns to employers, because trusting in “good” bosses to spontaneously act with integrity is a recipe for getting burned.

The incentives for men to exploit their sexual partners, especially women, are less material or economic and more a matter of culture and ideology. But the logic of masculinity is similarly such that external constraint, ultimately leading to wholesale alteration, is necessary above and beyond the good will of individuals in power. It’s worth noting here that the two structures are, in practical fact, profoundly intertwined. I was struck by the class markers peppered throughout the Ansari article: his “exclusive” address, his pushiness about fine wine, his demand that he “call her a ride,” and so on. More broadly, the locus of Me Too activism has often been the workplace: it is not just men who have been its targets but male bosses. True “accountability” for men, the robust and consequential kind, will require an equalization of economic power. (Though such a change is obviously not sufficient: I’m drawing an analogy, not an equivalency.)

Marx and Engels once famously railed against the “misconception that induces you to transform into eternal laws of nature and of reason, the social forms springing from your present mode of production and form of property.” As ex-Google employee James Damore reminded us recently, gendered exploitation too is so normalized and so ubiquitous that it can come to seem like a law of nature. These seem to be the two poles that we are caught between: male abuse as an evolutionary necessity and male abuse as an aberration of pathological monsters. We need now more than ever to seek out the excluded middle, where we might find the possibility of collective social transformation — a better normal.

Favorite movies of 2017

This was an incredible year for film. There were so many great releases, and after seeing many of the top movies in the last several weeks I want to commit my thoughts to “paper,” more for my own sake than anything. This year’s films were extraordinarily rich: not just entertaining but provocative, engaged, and complex.


1. Get Out (dir. Jordan Peele)

Film is a uniquely public medium. Consumption of other forms of art — music, novels, etc. — has become an increasingly private activity, not just literally solitary in practice, but disconnected from any kind of common conversation. There are no must-read books or must-listen albums any more, if there ever were: people just have their pick from whatever some personalized algorithm selects from an overwhelming stream of new content. But movies, made collectively by large groups of people in the first place, are still by and large (Netflix aside) meant to be seen outside the house, a shared experience with a group of strangers. And there are few enough major films released every year that to some extent people are still seeing the same set of new movies, enabling public conversations of unique participatory scope and depth, and the introduction of new characters, stories, and images to something like a collective imagination.

And Get Out was a uniquely public film. To me it was obviously the movie of 2017. No other non-franchise film was a public event in the same way that Get Out was. I can’t remember another movie that was so important to see in theaters, and that was the subject of so much discussion, analysis, and yes, memeing. It wasn’t just that it was the right film at the right time, although it was. It was that Jordan Peele crafted the film to reward nearly endless rewatchings, promising new discoveries every time, and to be enriched by the experience of watching as a member of an audience. Get Out is as close to perfect as a film can be: flawlessly paced, moving, entertaining, terrifying, challenging, and even, at moments, strikingly beautiful, more or less all in equal measure. I’ve seen it three times, and each time I’ve come away more impressed with its artistry. To paraphrase Roger Ebert’s assessment of Fargo: films like Get Out are the reason I love the movies.

2. Three Billboards Outside Ebbing, Missouri (dir. Martin McDonagh)

Speaking of Fargo. I’ve been following the backlash to this movie with some bewilderment. I certainly don’t begrudge anyone their hatred of it. Bland agreement is boring; vigorous argument about movies is great. And I would probably hate the movie I’ve read about too. I just truly feel like I saw a different film than many of its critics did. Not just interpretively: I am at a loss to explain how people have moved from the film that I thought I saw to some of the basic summaries of its plot or dramatic arc that I have read. The two explanations that I have come up with are: (1) there have been so many films made, often by Quentin Tarantino, that are similar in lots of superficial respects to this one that some viewers were primed to expect a certain, far inferior movie, and could not adjust when the movie departed in key ways from that mold; (2) I obliviously looked past a set of glaring flaws because I wanted to like this movie going into it, because of my preexisting admiration for everyone involved in its making.

I’d like to think the two are about equally likely. Maybe the latter is more plausible. I really don’t know. It’s difficult to say much more without getting into spoiler-y plot details. So I’ll just say that I’m glad that even this movie’s most ardent detractors have generally acknowledged the brilliance of Frances McDormand’s performance. I think it will go down as one of the best in movie history.

3/4. Lady Bird and The Florida Project (dir. Greta Gerwig/Sean Baker)

Both of these movies are gorgeous, intimate, tender studies of family, growing up, motherhood, daughterhood, class, disappointment, and hope. They are blessed with spectacular ensembles, a rich visual sense of place, and a deft ability to produce that rarest of emotional reactions, the laugh-sob. I don’t have a single bad thing to say about either of them. They didn’t stay with me in quite the same way as nos. 1 and 2, but if you said either of them was the best movie of the year I couldn’t give a counterargument.

5/6/7. The Shape of Water; Call Me By Your Name; Phantom Thread (dir. Guillermo Del Toro/Luca Guadagnino/Paul Thomas Anderson)

Here are three singular, beautiful romance movies that, at the end of the day, I just didn’t love quite as much as the previous four films. I think all of them suffer from telling rather than showing that their central characters are in love, and I think that the development that each seeks to give to the traditional romance formula is delivered somewhat clumsily. The creature in The Shape of Water is a little too animalistic and devoid of personality. I am apparently in a tiny minority in finding Armie Hammer absolutely terrible in Call Me By Your Name. The final moments of Phantom Thread feel inexcusably abrupt for such a deliberately paced film. But these are small quibbles. All were a joy to watch, featured at least one stunning performance, and were chock-full of indelible images. If Jonny Greenwood weren’t in a famous band he’d win Best Score at the Oscars for Phantom Thread, and it’s a travesty that he won’t.

8/9. Star Wars: The Last Jedi and Blade Runner 2049 (dir. Rian Johnson/Denis Villeneuve)

Sci-fi nerds like me got to enjoy two of the best franchise films in recent memory this year. I was very scared that both of these movies would be bad, but they both exceeded my expectations by leaps and bounds. That’s a pretty uncontroversial assessment of Blade Runner 2049, which cemented Denis Villeneuve (who did Arrival, my second-favorite movie of last year) as one of the most exciting “genre” directors working in Hollywood today. The Last Jedi was probably the second-most polarizing movie of the year, after Three Billboards. I thought it was brilliant, balancing fan service and thematic and narrative innovation better than its already-great predecessor. It was timely without being heavy-handed; it had, for my money, the most visual flair of any film in the franchise; and, most importantly, it had Mark Hamill.

10. Dunkirk (dir. Christopher Nolan)

Dunkirk was one of my favorite movie-going experiences of the year; I saw it in 70mm on its opening weekend. It has been justly praised for its transportive, immersive quality, and Harry Styles gives the best pop star movie performance since Justin Timberlake in The Social Network. It is a masterpiece of film and sound editing. It’s just that after I saw it I didn’t think about it once for about six months, until awards season commenced in force a few weeks ago. It is formally remarkable: if I were drawing up a film school curriculum it would be at the top of the list (or probably still behind Get Out). But it doesn’t really say anything, and to that extent it suffers by comparison: this was not a year to skimp on thematic complexity.

Honorable Mention: Norman: The Moderate Rise and Tragic Fall of a New York Fixer (dir. Joseph Cedar) 

This movie made no money and got no attention because it was a small-scale drama about Jewishness released in late April with only one big name attached (Richard Gere, who’s shockingly good). But I saw it! And I liked it a lot! In a year that saw Three Billboards, a new season of Fargo, and Suburbicon (which the Coen brothers actually co-wrote), it channeled the spirit of the Coens as well as anything else in 2017. That’s high praise.

Notable haven’t-seens: The Post; The Disaster Artist; I, Tonya.

The latest neo-Nazi PR campaign

One evening probably two weeks ago I was walking in my neighborhood and found a sticker plastered on the back of a stop sign that said in plain text, “It’s okay to be white.” I was startled because of Cambridge, MA’s liberal reputation (“The People’s Republic of Cambridge”), but not particularly surprised. Steve Bannon and Jared Kushner went to Harvard. Of course there are white supremacist “trolls” here. I scraped the sticker off with my keys and went about my business.

I now regret not taking a photo, because since then similar stickers or posters have appeared around the country, typically on college campuses, including at my alma mater, Northwestern, this past weekend. It’s apparently some sort of organized campaign launched on 4chan, an important “alt-right” meeting place, designed to expose the intelligentsia’s hatred of white people to a national audience and thus garner the support of “normal,” unwitting, comparatively offline whites.

It’s a masterful strategy and to judge by my Facebook feed — so remember, there’s a great deal of selection bias against the alt-right here — it’s been modestly effective. Isn’t it okay to be white? So at the admitted risk of giving any more attention to an insidious neo-Nazi propaganda effort, I want to spell out what’s wrong with the logic behind these stickers.

The most obvious point to make is the same argument that has mercifully taken the wind out of the sails of “all lives matter” over the last year or so: it is just not the case that calling attention to the devaluation of some lives is the same as bemoaning the fact that other lives are valued. Use whatever analogy you want to make this point. Some people like to imagine someone feeling neglected when the fire department comes to extinguish a blaze at their neighbor’s house. All Houses Matter.

But while that argument is all well and good as far as it goes, there’s a more important point that I think people have been too willing to retreat from in the face of campaigns like this, which is that there actually is a problem with whiteness itself. The proper analogy is not two separate houses, one of which just happens to be on fire. It is more as if one group of people forced another to build them a house, then put them in a second, much worse house, then lit that house on fire, and then, when the fire department came, violently insisted that there was no real fire at all. The philosopher Charles Mills argues that we should say “racial inequality” less and “racial exploitation” more, to drive home the point that the creation of “race” as a system inherently advantages the white race at the expense of the others.

That’s not just to say that “race” or “whiteness” are “social constructs.” That’s not enough. Baseball is a social construct. It is, needless to say, okay to play baseball. It is to say that “whiteness” is made through white supremacy, by the system that Mills calls the Racial Contract. In certain left-liberal circles “whiteness” has come to mean something like “annoyingness,” as if its constitutive features were overconfidence and Vampire Weekend. But the heart of whiteness is more like this:

[chart]

This is the uncomfortable reality that I and every other white person have to grapple with: it is okay for us to be, as human beings who draw breath, but there is, in fact, a problem of sorts with our being white, because being white means reaping profound unearned material advantage from the system of white supremacy — regardless of whether our ancestors owned slaves or whether, like me, our grandparents or great-grandparents would not necessarily have even been classified as white when they came to the U.S.

This is true whether we like it or not. If we like it, we can join the 4chan Nazi brigade, protecting our advantage by fabricating victimization. But if we don’t like it, the solution is not to feebly attempt to opt out, like Rachel Dolezal. It is to dismantle the system at its roots.

Return of the Repressed

In June of this year, the conservative pundit and think-tanker Avik Roy took to Twitter to defend the Affordable Care Act repeal-and-“replace” plan then on its way to the Senate floor. “I’m very open to thoughtful critiques of the Senate bill from the left,” he wrote. “‘MILLIONS WILL DIE’ is not it.”

In late August, after the first of several hurricanes that would leave countless people displaced, injured, and, yes, dead across the Gulf, a spokesperson for the new fossil-fuel-industry-controlled EPA blasted climate scientists for connecting the storm’s devastation to climate change, repudiating those who were “engaging in attempts to politicize an ongoing tragedy.”

And now after the deadliest mass shooting in modern American history the chorus has started up again. “After Las Vegas, can we take a moment to agree not to politicize tragedy?” Glenn Beck asked. Everyone from Patton Oswalt to Hillary Clinton has already faced the wrath of the reflexively anti-political. Discomfort with “divisiveness” has once more trumped the value of real human lives.

Speaking of Trump, this is all important context for the hallucinatory reality show into which we have been collectively plunged since his election. His nihilistic world of self-contained reflectivity, cycling eternally between television, Twitter, and private country clubs, is just the Platonic form of the more ordinary and longer-running impulse among many Americans to drive a wedge between politics and reality. On this view, politics is just a kind of word game in which people arbitrarily select values and then compete with each other to score pointless victories that affect the lives of the participants and no one else. To insist that lawmaking, governing institutions, democratic action, and so forth might actually matter in the real world, beneath the aether of the 24-hour news cycle, is to break the rules of the game. In its demand that people confront disempowerment in all its horrifying reality, that we come face-to-face with the human lives around us that have been crushed by structures sustained by our collective action and inaction, such insistence is nothing if not impolite.

But all the polite apathy we can muster can’t make that reality go away for good. Our actions have shaped the natural and social world that we now have no choice but to live in. We are responsible not only for the past that Karl Marx famously described as “weighing like a nightmare upon the brains of the living,” but also for the present that will weigh similarly upon future generations. There isn’t a tweet in the world — even from the President — that can insulate our lives forever from the consequences of our politics. “History is what hurts,” as the literary critic Fredric Jameson once put it. And every time we ignore or naturalize or excuse the hurt, we only ensure that it will return again, with greater vengeance.

“Thoughts and prayers,” not politics or protest, has been the refrain today, especially from the Christian right wing, but the Gospel of Luke told it differently, in a passage at the beginning of the Passion sequence. “Some of the Pharisees in the crowd said to Him, ‘Teacher, rebuke Your disciples!’ ‘I tell you,’ He answered, ‘if they remain silent, the very stones will cry out.'”