Never again

An audio clip is circulating, in which one can hear the cries of children separated from their parents at a border detention facility. It is excruciating.

I was reminded of a famous passage from the philosopher Theodor Adorno, a German Jew who escaped the Nazi regime in the 1930s. Some years after his return to Germany he wrote:

Hitler has forced human beings, in the circumstances of their unfreedom, to recognize a new categorical imperative: to order their thinking and acting so that Auschwitz should never be repeated, so that nothing similar should happen. This imperative is as resistant to being grounded as once was the givenness of the Kantian categorical imperative itself. To treat this imperative discursively would be an insult: in this imperative the moment of a supplement to the ethical can be felt in the body. ‘In the body’, because this moment is a practical aversion to unbearable physical pain, an aversion to which individuals are subject even now that individuality as a form of spiritual reflection is beginning to disappear. Morality survives only in the unvarnished materialistic motive.

As I wrote last time, we are inseparable from our bodies. Our bodies can speak when ideology masks our intellectual perception of the truth, of “that which finds expression in the stink of the corpse,” as Adorno put it. And now, in that sensory experience of the cries of children, they speak once more: Never again.

Anthony Bourdain, materialist philosopher

[Image: Anthony Bourdain in Vietnam]

Like many people I’ve been processing Anthony Bourdain’s suicide over the last 24 hours with great difficulty. For me personally it has been one of the most affecting “celebrity deaths” in recent memory. Looking back, I suppose it is lurking in the background of what I wrote yesterday, about the importance of looking the absence of holiness in our human reality square in the face. Not just because I was bummed. As I came to really appreciate Bourdain over the past year or so, it was this dimension of his public presence that I found most significant. He was one of our great truth-tellers. He had a remarkable ability to notice, and to force viewers and readers to notice: to see what was in front of their faces the whole time, in all its profanity and grandeur and gristle. He showed us blood and guts.

In other words he was a great philosophical materialist. In U.S. English this word has come to imply something like commercialism, which is ironic. Like many materialists, Bourdain understood how commercialism traffics in getting people to imagine that the material stuff in front of them is really something more in its hidden depths. Buy this, eat that, definitely avoid that, and you can have your own personal transcendence in the here and now. Bourdain would have none of it. It’s all there already in the 1999 New Yorker article that changed his life: his derision for customers who’ve convinced themselves that pigs are “filthy” but chickens aren’t, or who overcook their meat until they can pretend they aren’t eating something that was once a part of a living animal; his joy at eating “strange things,” at arriving at long last at a restaurant where “every part of the animal—hooves, snout, cheeks, skin, and organs—is avidly and appreciatively prepared and consumed.”

Like materialists from Thomas Aquinas to Donna Haraway, Bourdain insisted on the all-too-often forgotten fact that humans, like any other animal, are inseparable from their bodies. He loved eating — perhaps the quintessential embodied activity — too much to pretend otherwise. (Aquinas, legend has it, was colossally overweight.) He reveled in the “sheer weirdness” of lives lived in bodies in kitchens: the impossibly variegated textural and olfactory and sonorous experiences, the disturbing and transformative encounters with people with bizarre personal histories, the relationships to places and landscapes and other species, all made possible and necessary by the brute fact that we are creatures with metabolic needs. The need to put food in our bodies, Bourdain understood, was one of those precious few human universals (alongside maybe death and sleep) that is capable of bursting through everything else that we wrongly take for granted.

But unlike some other materialists (Nietzsche, perhaps, or certain other Romantic vitalists past and present), Bourdain’s vision was not an uncritical celebration of all that is. There is no amor fati to be found here. The flip side to his ability to find beauty, or just tastiness, in places others might have considered too base was his insistence on the reality of pain in places others preferred to conceptualize as sacred, or at least sterile. He was never complacent, with himself or with the world, even to a fault. For starters, the core insight of his entire writing and T.V. career was the fact that food, whatever else it might be, is work. Like the greatest of all materialists, Karl Marx, Bourdain had an uncanny ability to remind people of the labor, performed by real human beings at work, that makes possible our favorite consumable products.

Bourdain understood that with our food, as with our laptops or our T-shirts or our science, we’d prefer to just see the finished product, the thing itself, instead of the social relationships that produced it, what Marx called commodity fetishism. (Indeed, as many feminist scholars have noted, Marx himself mostly ignored the labor that was performed, mostly by women, in settings outside of the nineteenth-century “workplace” — like the kitchen.) But Bourdain insisted on showing us the labor and laborers concealed behind the kitchen door. The historian Andrea Komlosy has described the Janus-faced view of work that predominated in pre-capitalist Europe: the sense of pride and accomplishment that someone could take in their product (the original sense of the English “work”) alongside the toil and exhaustion of work as a seemingly inescapable burden (the original sense of “labor”). It is this feel for the duality of work that Bourdain resurrected, with humor and empathy. The work of food could be full of joy, yes. But “it’s a life that grinds you down,” as he put it in the New Yorker piece.

It was the grind that increasingly preoccupied Bourdain in his last several years. Long attentive to issues of exploitation and kitchen power — along with, through his public examination of his own demons, the psychic toll of culinary culture — he emerged in 2017 as one of the most prominent male allies of the Me Too movement. He reflected candidly on the tendency of his earlier work to blur the line between description and celebration of the toxic masculinity of the elite food world, and he unhesitatingly took the side of the women who accused other celebrity chefs like Mario Batali of sexual violence. And he shied away from the hero-worship that sometimes ensued, insisting that his partner, the actress Asia Argento, deserved the credit for his turn toward outspoken feminism. For fans, it was an unsurprising (if, for a male celebrity, by no means guaranteed) extension of his previous commitment to speaking honestly about oppression and violence. Memorials since his death have pointed out, for instance, his remarkable commitment, for an American journalist, to depicting the plight of Palestinians with humanity and without equivocation — and his often hilarious hatred for Henry Kissinger, born of his experience traveling in Southeast Asia.

It is this commitment to witnessing injustice without excuse, minimization, or rationalization that is the most challenging corollary of philosophical materialism. It is easy to marvel about all the wonderful things human bodies can do. It is more difficult to acknowledge all of the awful things they can do, all the cruelty they can inflict, all of the ideas they can concoct to mask their cruelty to themselves or others, and the way that the social structures they form (and that can perpetuate themselves despite the wishes of individuals) can magnify their destructiveness to unfathomable dimensions. The task of a historical materialist, Walter Benjamin wrote, is “to brush history against the grain.” Materialists are those who view “cultural treasures” (like great cuisine) “with cautious detachment,” because they know such treasures have an origin which they “cannot contemplate without horror” — they emerge not only from the efforts of “great minds and talents,” but from “the anonymous toil of their contemporaries.”

I wonder if there were moments that Bourdain, like Benjamin’s “angel of history,” looked at our world as “one single catastrophe which keeps piling wreckage upon wreckage.” His tragedy will always make it difficult to contemplate his great mind and talent without some degree of horror: horror above all at the prospect of facing the fight that remains without someone who saw so clearly and spoke so truly.

Theological liberalism

I don’t write much about religion, or more specifically about my own religious views. My scholarly work intersects with the history of U.S. Christianity, but that’s easy enough to compartmentalize. The pro-market evangelizers I occasionally write about are so far removed from anything recognizable as my own faith that it causes me little consternation to write critically about them — their historical function in U.S. science, economics, politics, and so on — while remaining quiet about what an alternative religious position might look like. The biggest reason is that I just don’t think that my primary scholarly audience (secular historians) would get much out of it. Don’t get me wrong: this isn’t a story about religious persecution on Ivory Tower campuses or whatever the running conservative theory is. My advisor wrote the introduction to the North American publication of Pope Francis’ encyclical on climate change and inequality. That’s not it. My professional identity just isn’t theological.

But there is also a matter of principle involved, which has to do with one of the major themes of my work on religion: the extreme perils of sacralizing the mundane (work, politics, markets, nature, intellectual activity). Basically I think that to call myself or think of myself as a Christian or even religious scholar would be a kind of idolatry. I’m thinking here of what Karl Barth, following Kierkegaard, called the infinite qualitative distinction between God and humanity (or “time and eternity,” or however else you want to express it). The point is that however one conceives of God’s presence or action in this world, it is wrong to identify God with any existing thing, or possible thing, or even the totality of existing things (even, or especially, those things called ideas).

It is wrong first in a theological sense. This is the medieval idea of “divine simplicity” or oneness: God is not a thing, or even a being; there are no parts to God, or even one part. There is not one God in the same way that I had one bagel for breakfast today. I never say that I believe in “a” God. That locution conjures up images of the infamous Bearded Man In The Sky, or perhaps of someone who did their due diligence and decided that, as a matter of fact, Zeus was real but the others weren’t. When I say God I am thinking orthogonally to the world of all such things and ideas and people and human arrangements, however “spiritual” they appear or feel — an orthogonality revealed in Jesus, in the prophetic challenge to existing standards and presuppositions (including contemporary religion!) posed by his life and practice, in the shocking affirmation of this failed outcast itinerant preacher from an imperial backwater as Lord, in the unsettling claim that he could not have made God visible except by suffering violent, fatal rejection at the hands of this world’s powers.

It is also wrong in a political sense. The elision of the infinite qualitative distinction is (almost?) always yoked to a conservative affirmation of the actually existing order. If God is fully manifest in, or identifiable with, the world around us it is hard to understand why anyone ought to try to change it. Such efforts may even be sinful! If one conceptualizes current or “traditional” society as essentially holy then one will be on guard for disruptive, Satanic infiltrators seeking to muck things up, and eager to persecute them. Such is the story of many of the Jewish prophets, the tradition that recent scholarship suggests Jesus was likely self-consciously channeling in his own movement. The Biblical narrative is one of God mocking existing society’s pretensions to godliness, over and over again. As the late British philosopher Andrew Collier wrote in his great book Christianity and Marxism, a presupposition of the Lord’s Prayer itself is that God’s will is not already done on earth — period.

In the nineteenth century, in the midst of tremendous enthusiasm about inevitable historical progress among the upper classes of the U.S. and western Europe prompted by their experience of early capitalism’s wealth creation, this insight fell out of fashion. It was a time of terrific religious literalism. Innovations in U.S. Christianity (the seeds of 20th century evangelicalism) like Mormonism or the idea of the Rapture promoted a conception of God and salvation arguably as spatiotemporally bounded as any in the religion’s history. Interestingly, European liberal theologians (the seeds of 20th century “demythologization”) did something very similar, from the opposite starting point. They explained patiently that Jesus did not literally feed 5000 people with a small number of loaves and fish; that a burning bush never literally talked to Moses; that there are not literally beings like angels; that, in the most radical cases, Jesus’ corpse may never have literally found itself resuscitated in the tomb. All well and good, as far as I’m concerned. But then they went a step further and insisted that all that “mythology” was more or less a pointless mistake and could be harmlessly lopped off to get to the core of the faith. The result was still a religion that worshipped Jesus, but a Jesus now imagined essentially as an ordinary guy who just happened to be a supereffective healer, or an especially profound humanitarian thinker — a Jesus whose greatest miracle, in other words, was managing to be a kind of nineteenth-century liberal avant la lettre.

No wonder the young Karl Marx was so attracted to the philosopher Ludwig Feuerbach’s conclusion that when a society worships God it is actually worshipping itself. Marx observed that even Feuerbach’s atheism couldn’t spare him from this virus: he too thought that the “alienated” human reality in question was itself in some sense “divine.” All that was necessary was to recognize the good parts of God as belonging already to humans. As Marx put it in 1845, Feuerbach may have been right that when nineteenth-century Christians thought of the Holy Family they were really thinking of the nineteenth-century bourgeois family in disguise — but the point of that realization is to become empowered to change that family structure (for example) in this world. The various liberal demythologizing redefinitions of God were always plagued by the blithely optimistic complacency of Feuerbach and his more religious mentor G.W.F. Hegel. In the 1930s many German liberal Christians even found Hitler a good enough secular embodiment of a metaphorical God. But safer, more abstract metaphors have their problems too. As Dostoevsky’s Ivan Karamazov might have said, in the “present moment” or the “eternal now” there are still young children dying.

This brings me, at long last, to the real thing I wanted to write about today: an interesting declaration issued last week by a group of very prominent American religious leaders (mostly from mainline Protestantism) called “Reclaiming Jesus.” The document is clearly modeled on the Barmen Declaration written in 1934 by Karl Barth and a group of other German Protestant leaders denouncing the German church’s support for Hitler. The Reclaiming Jesus group sees U.S. Christianity in a state of similar crisis right now. In light, presumably, of Donald Trump (who, like Hitler for the Barmen group, goes unnamed), they argue that “it is time to lament, confess, repent, and turn” back to the principle that “Jesus is Lord” and so “no other authority is absolute.”

In many ways it is a wonderful document. It is a damning condemnation of mainstream American Christianity’s enduring collaboration, in silence and in vocal support, with oppressive systems of power. It is a practically-minded call to action, one that doesn’t merely fret about the inner state of souls but seeks to challenge believers to make change in the material world. And, like Barmen and like Jesus, it does not shy away from conflict, recognizing that “what we confess as our faith leads to what we confront,” even if the result is division (“three against two and two against three,” Luke 12:52). But in other ways it is profoundly inadequate, along precisely the same lines that have plagued the liberal Christian tradition since the nineteenth century. It proclaims the First Commandment, but does not take it at all seriously enough.

There is a telling example in the second section, where they stretch a paraphrase of the Letter to the Galatians a little too far. They write: “In Christ, there is to be no oppression based on race, gender, identity, or class (Galatians 3:28).” But what Paul says is different: “There is neither Jew nor Gentile, neither slave nor free, nor is there male and female, for you are all one in Christ Jesus.” This is infinitely more radical. Reclaiming Jesus proclaims liberal tolerance, maybe even “acceptance,” for different kinds of people. Paul proclaims the complete dissolution of all human-created distinctions, including gender. Reclaiming Jesus urges the “respect, protection, and affirmation of women” — which is, it should go without saying, great! That might be implied by the Gospel for people living in our current situation. It is just by no means equivalent to it. Just as sexism stands under God’s condemnation, so does resting easy with simply “respecting women.” The Gospel is the incomparable light that reveals gender itself — the existence of a system that divides people into men and women and empowers the former at the expense of the latter — as just another fallen, historical, sinful human creation (though of course one that is no less real in our present reality for that!).

I find a similar inadequacy in the declaration’s discussion of poverty in the third section. Here it is worth quoting at length:

We won’t accept the neglect of the well-being of low-income families and children, and we will resist repeated attempts to deny health care to those who most need it. We confess our growing national sin of putting the rich over the poor. We reject the immoral logic of cutting services and programs for the poor while cutting taxes for the rich. Budgets are moral documents. We commit ourselves to opposing and reversing those policies and finding solutions that reflect the wisdom of people from different political parties and philosophies to seek the common good.

Despite the pretension to nonpartisanship, this is essentially just a morally inflected version of Hillary Clinton’s 2016 campaign platform. It is not that I necessarily disagree with it, per se. Maybe it’s some sort of political optimum right now. It’s just not the Gospel. It is not challenging for much of the population. Jesus doesn’t say, “the first will remain first, more or less, but the last will be given health care.”

In fact, there is a character in the New Testament who expresses a humanitarian sense of noblesse oblige for the poor, only to be shown his own self-aggrandizement and limited vision by Jesus: I’m talking about Judas. The story is that a woman honors Jesus with a gift of expensive oil, and Judas (as John’s Gospel names the objector; Mark attributes the complaint to unnamed bystanders) protests that the oil ought to be sold instead for a donation to the poor. Mark (14:7) has Jesus say: “The poor you will always have with you, and you can help them any time you want. But you will not always have me.” John and Matthew both omit “you can help them any time you want.” But I think it’s important to drive home the message here, in what is an elliptical and often misinterpreted episode. The point is that Judas’s humanitarian concern for the “well-being” of the poor is actually self-centered. Jesus’ purpose is not to make satisfying cosmetic improvements to the existing order, the order of our history. His meaning is about the Kingdom of Heaven breaking in from eternity, about eschatological transformation. Hence the temporal juxtaposition: Jesus’ time is divine urgency, putting to question and to shame all of the taken-for-granted features of our slow, tragic history — such as (just like with gender) the very existence of poverty and inequality in the first place (Jesus does not say that there will always be poor people, full stop — only with you).

“True compassion is more than flinging a coin to a beggar; it is not haphazard and superficial,” as the often-effaced radical Martin Luther King Jr. wrote. “It comes to see that an edifice which produces beggars needs restructuring.” That restructuring will itself, inevitably, be slow and tragic in practice. If God isn’t the Democratic Party platform, and even less the Republican Party platform, God isn’t socialism, either. But the Gospel punctures any illusions we may hold beforehand about whatever intrinsic limitations bar the way forward, any excuses we may make about why the effort isn’t worth it, any contentment that our existing society is “good enough” (or maybe just a couple of small reforms away from being good enough). It is not a matter of “reversing” Trumpism or even Reaganism, as if the Kingdom of Heaven were the postwar welfare state. Nor is it a matter of simple bipartisanship, or of coming to some consensual solution acceptable to “the wisdom of people from different political parties.” Jesus did not come to bring peace, but a sword (Matthew 10:34), to bring fire on the earth (Luke 12:49).

But it is precisely for this reason that I remain so reticent about my own religious vision. I don’t have the whole truth any more than the Reclaiming Jesus group does. To say the very least, I stand under the infinite qualitative distinction no less than anything else in this world. That goes double for my work: there are things I get wrong, things I ignore or simplify deliberately or accidentally, ways I tailor my writing and my advocacy to specific audiences and specific circumstances that are far from universal. What I say is just as criticizable as the Reclaiming Jesus document. But it is not criticizable for claiming to be the Gospel when it’s not.

They are right that “our faith is personal but never private.” I am happy — here’s proof! — to talk in public about the convictions that drive me. I act in the world differently because of my belief: but it is the action, not the belief, that is ultimately significant. “Show me your faith without your works, and I will show you my faith by my works” (James 2:18). As Reclaiming Jesus itself proves (again, I want to emphasize how much of it I appreciate), it is possible to work for change without having it all figured out in advance. In fact, it is often the illusory label of a truly “Christian” or “religious” politics which curtails radicalism, breeds complacency, or convinces people that a movement isn’t for them. But it is precisely the Gospel’s universal condemnation of all of our pretenses to holiness that makes it an equally universal invitation to join the fight for transformation (both personal and collective). “For he says: ‘In an acceptable time I heard you, and on the day of salvation I helped you.’ Behold, now is a very acceptable time; behold, now is the day of salvation” (2 Corinthians 6:2).

Public science

Does science have anything worthwhile to say to the public? Should it even aspire to relevance outside of narrow disciplinary communities? And if so, how should we conceptualize the contributions that it is capable of making to public deliberation, or civic life more generally? Today I want to look at a few interesting recent articles that have posed some variant on these questions, the bread and butter of historians of science and other science studies scholars.

First is a piece in the Intercept by Kate Aronoff about climate change “half-measures.” As her title suggests, she argues that the politics of half-measures — faith in a technological deus ex machina like geoengineering, assertions that corporate benevolence or market-based innovation will lead to spontaneous decarbonization, and so on — amounts to “denial by a different name.” And a particularly insidious kind of denialism at that, because its proponents are free to boast about accepting the scientific consensus on climate change, despite the fact that their positive policy vision is almost indistinguishable from that of the more bite-the-bullet denialists.

I think this argument is completely correct. It’s a point that’s been made before: Naomi Oreskes ruffled some feathers a couple years ago by using the denialist label to characterize some of the more extreme rhetoric about nuclear energy, and Aronoff’s case is clearly indebted to influential work on the relationship between climate change and neoliberal politics by Philip Mirowski (whom she cites) and Naomi Klein (whom she does not). Still, Aronoff assembles the pieces of the puzzle with admirable clarity. With even ExxonMobil patting itself on the back for acknowledging the existence of the greenhouse effect, and Scott Pruitt, now the denialist-in-chief, equivocating on his stance on climate science at his confirmation hearings, it is more crucial than ever for climate activists to move beyond simply insisting that “climate change is real.”

But Aronoff doesn’t just denounce half-measures. She also targets activists whose rhetoric makes “it seem like climate change is a primarily scientific issue, rather than an economic, political, or moral one.” In an age of renascent populism, she claims, activists should galvanize action by talking up the economic benefits of robust climate policy for people who currently feel dependent on the fossil fuel industry for their livelihoods – green jobs, wealth redistribution, and so on. If conservatives actually use the rhetoric of denialism only instrumentally — gleefully changing their stance on “the science” depending on audience or context or what argument they’re trying to make — then perhaps the idea that climate change is “really” a scientific issue is, after all, a “red herring.”

Here is where I think some difficulties start to crop up. Taking a step back, it isn’t entirely clear what it means to say that climate change is or isn’t a “scientific issue,” or more specifically why thinking about climate change “scientifically” ought to imply political moderation at all. Aronoff herself, after all, is implicitly relying on a lot of science in her criticism of climate moderates who cast their position as “scientific.” She clearly believes that as a matter of fact, the market-based techno-fix approach will not be capable of achieving the emissions reductions necessary to avert catastrophic climate change. That is a (social and natural) scientific claim!

The problem here lies with the phrase “rather than”: scientific rather than political, economic, etc. Aronoff’s critique seems to simply invert this formulation rather than challenging its premise: that a social issue is apolitical to precisely the extent that scientific knowledge can be brought to bear on it, or put the other way, that scientific knowledge is irrelevant or even threatening to the processes of collective contestation and deliberation that surround genuinely political issues.

But Aronoff’s own analysis of climate change shows the limitations of this view. She supplies us, implicitly or explicitly, with answers to an instrumental question (what kind of social action would be required to prevent a specific climate nightmare scenario?); a normative-political question (what kind of social action ought we to take collectively in general?); a different instrumental question (how should climate activists communicate in order to bring about their desired program for social change?); and a descriptive-historical question (how profound has the commitment to “denialism” of key right-wing socio-political actors actually been?). To argue about whether an issue that raises a set of questions this rich and diverse is scientific “or” political seems quite useless to me.

All of these questions are distinct but complementary. That is to say, the answer to any one doesn’t determine the answer to any other, but none suffices on its own to give a complete analysis of “the climate issue.” It is impossible to think rationally about what we ought to do about climate change while remaining agnostic about what the real-world consequences of various courses of social action will be. But the inquiry that can help clarify those consequences does not emerge spontaneously. It requires deliberate effort: initiative formed within a specific horizon of conviction about what matters.

One comparison that might be useful for some readers is to the feminist political philosopher Nancy Fraser’s multidimensional theory of justice. Fraser claims that justice requires a commitment to what she calls “recognition,” the ability of all people to participate as peers in social interaction, and “redistribution,” the egalitarian provision of economic resources necessary to satisfy material needs (more recently she has added “representation,” the ability of all people to participate in political decision-making that concerns them, as well). Any political vision that addresses patterns of exclusion and marginalization without challenging structures of economic exploitation, or vice versa, is incomplete: in my phrasing, the two (or three) tasks are distinct but complementary.

This complementarity is due in no small part to the dialectical relationship in which recognition and redistribution stand. Social marginalization is often caused by patterns of economic maldistribution, but such patterns are often themselves parasitic on misrecognition: witness the historical dependence of American capitalism on the unpaid labor extracted from black people on cotton plantations and from women in middle-class households. We can accurately understand this dialectical process, however, only by distinguishing between misrecognition and maldistribution in the first place. Otherwise, for instance, we will think that the empowerment of women will necessarily dismantle capitalism, or that universalist programs for wealth redistribution will be sufficient to end American racism.

Now we can go back to science and politics: I want to claim that a similar dialectical relationship exists here, a relationship that we can also understand only if we are willing to draw a conceptual distinction between the two. In the scientific realm, by describing natural and social structures, we characterize the consequences of imagined programs of political action or inaction, orienting our practices with reference to consciously chosen political commitments. In the political realm, we reason together on the ends that we think justice compels us to pursue throughout society (including in scientific institutions), and we organize collectively to fight for programs of action that our best scientific knowledge tells us can bring about those ends. When this process functions productively, the social process is effectively guided toward the fulfillment of higher-order collective goals: what one might be so bold as to call “democracy.” When it is malfunctioning, we get the disarray, for instance, of contemporary climate politics, as illustrated (intentionally and unintentionally) by Aronoff’s analysis.

As Fraser insists, the practice of distinction-drawing doesn’t need to wind up creating hierarchies. It can also be an aid to critical reflection. Indeed, in the climate case, conceiving of science and politics as distinct but complementary domains, each with their own dignity and rationality but embedded in a dialectical relationship, helps short-circuit disputes about which category ought to be “on top,” disputes produced by thinking about them as competing claimants to the same “territory.”  We can insist that such disputes are wrong: that to panic about “politicized” climate science (as if science ought to have nothing of relevance to say on “political” issues) on the one hand, or to suggest that criticism of nuclear power or geoengineering is “anti-scientific” (as if showing the existence of a particular technological capacity was sufficient to show the goodness of its unlimited usage in any social circumstance) on the other hand, is in both cases to fundamentally mistake the nature of science and politics. (And we can start to see technocracy and antiscience as two sides of the same coin.) We can move, as Marx put it, from describing the world to changing it: from arguing about whether climate change is scientific or political to acknowledging the magnitude of the collective choice that science has placed before us, a choice that it cannot make for us, but which we cannot escape from making.

Andreas Malm makes a similar argument in his new book, The Progress of This Storm, about the distinction between nature and society. Making the appropriate substitutions, everything said above about the distinct but complementary, dialectically related nature of science and politics (and the usefulness of thinking about these categories this way) can be said of nature and society too. Natural structures brought about, and continue to nurture, the existence of human beings, who are capable of forming societies with their own internal relations not reducible to “deeper” natural processes. Human societies in turn reshape nature through processes of resource extraction and utilization necessary to satisfy metabolic needs. When this relationship is functioning productively, human societies can pursue higher-order, autonomously chosen projects without jeopardizing the underlying processes that sustain life: what one might be so bold as to call “sustainability.” When this relationship is malfunctioning, we get the disarray, for instance, of anthropogenic climate change.

Conceiving of nature and society this way helps us critique intellectual and political visions that efface or subordinate one or the other. We can say that it is fundamentally misguided to argue, as such strange ideological bedfellows as “sociobiologists” and the “new” or “vital materialists” are wont to do, that the role of human agency in producing social outcomes (like the nightmare of fossil capitalism) is small or nonexistent compared to the influence or “agency” of nature. And we can say that it is just as misguided to argue, as the free-market environmentalist Stewart Brand once famously put it, that “we are as gods,” and should not doubt human ingenuity’s capacity to attain whatever ends we set our minds to without having to consider the natural structures that may block our way. (And we can start to see naturalistic reductionism and techno-optimism as two sides of the same politically complacent coin.) We can move again from describing the world to changing it: to identifying precisely what about our world we have made, and therefore what precisely we can remake.

The distinction between nature and society as domains of reality doesn’t map one-to-one onto the distinction between science and politics as domains of human thought and activity. There are natural and social sciences alike, and the number of issues where political reflection can get away with thinking about society but not nature is not large. Still, it’s not surprising that the climate crisis underscores the salience of both distinctions. Both are different paths of approach into the distinction between the real and the rational, perhaps the most important distinction to draw in the midst of what the novelist Amitav Ghosh calls the Great Derangement. Now more than ever we need ways of putting our taken-for-granted practices into question, of insisting that a better world is not just possible but necessary.

I used to like the language of “coproduction,” developed by science studies scholars like Sheila Jasanoff and Bruno Latour, as a way to understand the relationship between science and politics. Now, for a variety of reasons that I’ve spelled out at much greater length elsewhere, I find that work less compelling. And I think this theme — the need to draw certain kinds of distinctions in order to critique social reality — is perhaps the most important cause for concern. Very briefly, “coproductionist” scholars envision “science” and “politics” (as well as “nature” and “society”) as competing discursive labels for the same underlying “stuff.” They take their task to be to describe the way that those categories get “stabilized” as the outcome of a game-like process of interaction between agents. There is no such thing as science or politics or nature or society as such; there are only things that come to be taken, at particular times and places, as scientific or political or natural or social.

This body of work blurs distinctions with gleeful abandon in theory: between science and politics, nature and society, discourse and material reality. But it is not always easy to know what to do with these theoretical moves in practice. Latour, in fact, has been explicit in his rejection of “critique.” In his book Reassembling the Social he inverts Marx: “Social scientists have transformed the world in various ways; the point, however, is to interpret it.” But such an ascetic refusal of normative judgments is easier said than done. When coproductionist scholars do make critical interventions despite themselves, they usually fit somewhere in the technocracy/antiscience or agency-minimizing pessimism/techno-optimism binaries sketched above. And they aren’t shy about picking up both ends of the stick. Latour, while warning about the threat of science working to prematurely shut down political deliberation, has simultaneously written for the Breakthrough Institute, a techno-optimist think tank inspired by the work of Stewart Brand.

This incoherence is the predictable consequence of a worldview that regards any confident invocation of the “scientific” or “political” (always in scare quotes) as a power grab in different garb, as a strategic move in a game (regular readers may notice some resonance with James Buchanan’s public choice theory, described in my last post). The only real sin is to attempt to short-circuit the endless social process that makes and unmakes claims to scientificity or sociality or whatever. And the only virtue is “openness” or “inclusion,” the expansion of restricted debates (particularly in science) to encompass as many perspectives and contributions as possible, no matter their source. To defuse the threat of illegitimately arrogated authority conjured up by the labels of science and politics, each must be tamed — and made indistinguishable from the other in the process — by ensuring that every side of every argument is taken into account at every point in time: and if that means that no consensus emerges, that no decisions of any real import are made, so be it (or all the better).

The slippery consequences of “open science” in practice are illustrated by another recent article, a piece in The Atlantic by James Somers titled “The Scientific Paper Is Obsolete.” That headline is a bit misleading, because the substance of the piece provides both less and more than it promises. Less, because Somers doesn’t provide any real argument against the journal article as such. More, because his real purpose turns out to be a defense of a specific vision for the entire scientific enterprise, above and beyond publishing.

Somers observes, accurately, a “computational” turn in a wide range of scientific disciplines. Computational science means, as the name suggests, the use of computer technology to pursue a particular approach to scientific problem solving that emphasizes simulation, the development of algorithms for handling complex computations, and the use of data sets so large that, until recently, they were impractical to process. Somers makes two additional observations that I think are also correct: that the format of the traditional journal article is ill-suited for reporting on the practice of computational science, and that the computational turn has helped to bring science into increasingly close contact with private industry.
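
To make that contrast concrete, here is a minimal sketch — my own toy example, not anything drawn from Somers’s article — of the kind of simulation-based, notebook-style analysis that computational science favors and that a static journal article struggles to convey. It assumes Python with NumPy, one common environment for this sort of work; the model and the parameters are illustrative only.

# A toy Monte Carlo simulation, the bread and butter of notebook-style computational work:
# estimate the probability that a 1,000-step random walk of +/-1 steps ends more than
# 50 steps from the origin, by brute-force simulation rather than closed-form derivation.
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed so the "result" is reproducible

def tail_probability(n_steps: int, threshold: int, n_trials: int = 100_000) -> float:
    """Fraction of simulated walks whose endpoint lies beyond the threshold."""
    steps = rng.choice([-1, 1], size=(n_trials, n_steps))  # each row is one simulated walk
    endpoints = steps.sum(axis=1)
    return float(np.mean(np.abs(endpoints) > threshold))

print(tail_probability(n_steps=1000, threshold=50))  # roughly 0.11, close to the normal approximation

The point is not the arithmetic but the form: the number a reader sees is downstream of choices (the seed, the number of trials, the model itself) that a live notebook exposes and a printed article tends to hide.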

The problem is that, deprived of any normative framework for assessing scientific practice, Somers has no choice but to conclude that this is what science is now, that the colonization of every scientific discipline (including the social sciences – of which more anon), the death of the scientific journal, and the exodus of scientists from universities into corporations are done-deal developments: the task is adaptation, not critique. (Notice any similarities to climate change politics?)

Somers draws heavily on the work of one of computational science’s most influential proselytizers, Stephen Wolfram — and his faith does seem to waver when he acknowledges that all of Wolfram’s evangelism does double duty as an advertisement for his own proprietary software system, the state of the art in scientific computation. But Somers’ conscience is salved by his discovery that Wolfram’s monopoly is not total, and an “open-source” alternative called Jupyter has recently gained traction. It is now used by ordinary-joe “musicians [and] teachers” as well as the big boys at “Google [and] Bloomberg.” Because it’s open-source, all those users can actually make modifications to improve the program, without waiting for the annoying kind of community review process enforced by obsolete journals. Thank God! Now it’s not only Ph.D. scientists who can help improve the tools tech companies use to accumulate profit: musicians and teachers can join the fun for free.

Philip Mirowski (one of the historians that Kate Aronoff cites) has shown that the function of “open-source” software as a back door to providing tech companies with free labor is, as the programmers would say, a feature, not a bug. He observes that Jimmy Wales, founder of the paradigmatic “open” project, Wikipedia, has strong right-wing libertarian views. Wales, in fact, credits the idea for Wikipedia to his reading of an essay by the neoliberal economic philosopher Friedrich Hayek called “The Use of Knowledge in Society.” In that essay Hayek argues (a) that “planning” (his catch-all slur for democratic control over the economy) is a practical impossibility, because (b) “the market” is an unparalleled information processor, aggregating knowledge dispersed locally in a way that no individual or group attempting to take a “bird’s eye” view of society could ever rival. Wikipedia’s founding premise, then, is the indispensable economic significance of decentralized, minimally regulated institutions for aggregating all the knowledge that isolated individuals would otherwise keep to themselves.

The result, as Mirowski observes, is that all the uncompensated labor-hours of Wikipedia editors make search engine companies like Google billions: Wikipedia editors, by citing their sources through links to external webpages, provide Google’s ranking algorithm with a crucial aid for assessing the reliability of different sites, and Google’s ability to serve up a link to Wikipedia near the top of practically every search result enormously enhances Google’s own reliability as an information source. It’s a more complex version of what we have all become hyper-attuned to with sites like Facebook and Twitter: our personal data, willingly surrendered for free, has become one of the most lucrative commodities on the planet. With Facebook there is no productive dialectic between science and politics, only a Blob-like monster growing and consuming everything in its path.

It is worth noting that the “computational” paradigm, at least when extended to the domain of the social sciences, helps to naturalize precisely this mode of economic organization. The favorite object of computational scientists is the “complex system,” one of Hayek’s own favorite concepts for characterizing his understanding of markets. When all you have is a hammer, everything looks like a nail, and so when computational scientists tackle societies or economies they tend to treat them like folding proteins or resonating crystal structures: reducible to the dynamic (even chaotic) interaction between individual “particles,” unaffected by history or power structures. Once again we are stuck at the level of flat description – unable to critique, predict, probe deep structures, or indeed say much of anything of political import. Somers quotes Wolfram: “Pick any field X, from archeology to zoology. There either is now a ‘computational X’ or there soon will be. And it’s widely viewed as the future of the field.” If that is true, it will be a major loss for many disciplines, and for society as a whole.

Unfortunately, the preference of many science studies scholars for flat description has restricted their ability to critique the developments that Somers identifies. And in some cases they’ve given it their explicit support. Helga Nowotny, for instance, a major figure in the European science studies community, issued precisely such an appreciation in her 2008 book Insatiable Curiosity, endorsed by other science studies luminaries such as Sheila Jasanoff. There Nowotny observed that “science increasingly counts on private and privatized means,” but argued that this development and calls for the “democratization” of science “are only seemingly opposites” (p. 22). Commodified science is also accountable science, ostensibly disciplined by consumers through the imperatives of the market. Furthermore, “research conducted by the sciences of complexity and chaos, self-organization, and networks” — computational science, in other words — can help to puncture the “illusory dream” of “susceptibility to planning” (p. 108) by redirecting attention to the uncertainties produced by the dependence of economic and social processes on “a multiplicity of subjective viewpoints and sites” (p. 118). Hayek himself could hardly have put it better.

Analysts of science — academics, but also journalists like Somers — don’t have to settle for this kind of complacency. There are alternatives. Feminist philosophy of science, for instance, has produced a host of important critical insights over the last several decades precisely by insisting on the possibility of subjecting science to normative evaluation. My account of a dialectical relationship between science and politics owes much to the work of Helen Longino, for instance. Longino has argued that when scientists don’t recognize the political horizon within which their work operates — when, in other words, they don’t subject their implicit political choices to critical scrutiny — they tend to just regurgitate the values of their surrounding societies (patriarchal as they often are) in naturalized garb. But when scientists do attain some critical distance, and make different kinds of political choices, their work can help provide a foundation for egalitarian political movements elsewhere in society.

Because, as feminists like Longino, Donna Haraway, and Sandra Harding have reminded us, scientists are not disembodied thinkers but always living and working within specific material contexts, it is also important to think about how to institutionalize such critical practice. It should go without saying that Mathematica (or Jupyter) computational notebooks are not up to the job. Old-fashioned sites of discursive communication (including the dreaded journal) may yet have a role to play. I would also, building on what I wrote last time, argue for the importance of academic unionization and other movements that confront in a particular way the subordination of knowledge production to the accumulation of private property.

This, then, is what it really looks like to “democratize” science: not the transformation of science into a commodity, but the creation of institutions that force scientists to critically acknowledge their dialectical relationship with politics, and the rights and responsibilities that ought to come with it. It is precisely because science is so important in democratic societies, because so many pressing issues of public concern are simultaneously scientific and political, that we have to challenge the claims of those who would reduce science to a particular form of political centrism or an esoteric way of making money — to insist that science can be more, and better, than that.

Tyranny of the minority

Harvard’s closing argument against graduate student unionization came to students via email this afternoon, and it was… pretty embarrassing:

[Screenshot of the email, April 17, 2018]

The logical and factual errors here alone are astounding. Of course the union wants the bargaining unit to be as big as possible! That’s the point of collective bargaining. If only a small segment of the workforce is unionized, it makes it harder to approach the bargaining table with any real power – to hold out the threat of a shop-wide work stoppage, for instance. That’s why so-called “right-to-work” laws have been so damaging to unions around the country. Curran is also playing on a misconception about unionization that has always struck me as bizarre: that a union contract wouldn’t be able to differentiate between different kinds of workers (TAs vs. RAs, humanities vs. science students, et cetera). This idea is so patently ridiculous that I have a hard time understanding how anyone could express it in good faith. (And on a more depressing note, it shows how little firsthand experience with unionized workplaces many people have these days.)

There’s also the fact that any given dystopian contract scenario that a majority of workers could allegedly impose on their colleagues is also, tautologically, an arrangement that the employer (here the university) could unilaterally impose on them without a union, i.e., in the status quo. The alternative to a union isn’t codified protections for unspecified put-upon minority classes of workers. It is a workplace where the employer can do virtually whatever they want with no representation for anyone.

And that brings us to what I really want to talk about, which is the question of democracy. Curran’s argument here is a shockingly blunt appeal to anti-democratic values. It’s right there in the subject line: unions are bad because they operate on the principle of majority rule. Curran invokes a venerable tradition of American fears about the “tyranny of the majority,” demanding constraints on democracy, and on the power of collective action, out of professed concern for imperiled minorities. But this case illustrates an important historical truth: the “minority” in question that restrictions on democracy are supposed to protect is a minority of elites. The freedom that democracy allegedly imperils is the freedom of bosses, of the proverbial one percent (or the .1 percent, or the eight men who own as much wealth as the bottom half of the world’s population), to exercise power without constraint.

This tradition originated with the Antifederalists, the coalition, led by a group of slaveholding Virginians like Patrick Henry, that fought against ratification of the Constitution. In a terrific piece of irony, as the historian Garry Wills has shown, many of today’s “constitutional conservatives” actually channel the ideology of the Constitution’s early opponents. It was the Antifederalists who insisted on the importance of a system of checks and balances between “co-equal” branches to slow down the pace of governmental action, and on the necessity of delegating as much governmental responsibility as possible to “local” communities. The agenda here, like with their calls for the right to local militias, was the defense of their private interests — which meant slaveholding — against the possible depredations of a national majority that didn’t understand their peculiar way of life.

After ratification, the prophet of “tyranny of the majority” fears was the South Carolina Senator (and former U.S. Vice President) John C. Calhoun — another ardent partisan of slavery. Calhoun was similarly terrified that “majority rule” meant that a distant national government would come to take away Southerners’ slaves and destroy a system they didn’t understand. In a famous declaration that ought to give pause to any contemporary defenders of bosses’ paternalistic virtues, Calhoun insisted that slavery was not a “necessary evil” but a “positive good.” Free from the constraints of meddlesome majorities, local plantations were the site of an interdependent, mutually beneficial relationship between slaves and the masters who took care of them and supposedly helped to educate and “civilize” them.

The most notorious implementation of Calhoun’s philosophy was Senator Stephen A. Douglas’s “popular sovereignty” “compromise” on the question of slavery in Kansas in the 1850s. The label was quite deceptive. The idea was that rather than letting a national majority decide, through their centralized representative institutions, whether the U.S. ought to preserve slavery, the “slavery question” ought to be decided locally, territory by territory — by the people who supposedly “knew best.” Of course, in “Bleeding Kansas,” an important premonition of the Civil War, what this meant in practice was that outside forces descended on the territory to attempt to sway the “local” result, producing years of deadly conflict between supporters and opponents of slavery.

As W.E.B. Du Bois would emphasize in his magisterial 1935 history of the period, Reconstruction showed the germ of truth in Calhoun’s odious views. The quest to stamp out slavery in the South after formal abolition was practically equivalent to the effort to create genuine democracy in former slave states, expanding political participation to previously disenfranchised black Americans. The fear of democracy reared its head again, in the form of the slanderous mythology of corrupt black politicians that many students still imbibe in their Reconstruction units in high school history. “The center of the corruption charge,” as Du Bois put it, “was in fact that poor men were ruling and taxing rich men.”

In a development that would not have surprised Calhoun (who died in 1850), the backing of Northern power was crucial to the limited success of the Reconstruction effort. As Du Bois first observed, that effort collapsed only when Northern industrialists realized that their interest in expanding the size of the workforce competing for a wage in their factories was outweighed by the potential financial benefits of recreating the plantation system that had supplied them with such cheap cotton.

Beginning in the New Deal era, and especially in the early years of the Cold War, when the great national fear was communist totalitarianism, the “tyranny of the majority” concept made a comeback among conservative and moderate opponents of the ambitious reforms of Roosevelt and his successors. As many historians have noted, this was when James Madison’s Federalist no. 10 essay first began to be treated as a canonical document of the American political tradition. Developed by political scientists in the school of thought typically called “interest group pluralism,” the idea here was that the genius of the American system was the way that it allowed the many competing interest groups that composed the social tapestry to express themselves without any one of them rising to a position of dominance or imposing its agenda on the rest of the country. Popular writers like Richard Cornuelle, a fellow at the conservative Hoover Institution, argued that “voluntary associations” and other forms of private activity were sufficient to solve all social problems as long as centralized, majoritarian government got out of the way. (Cornuelle’s biggest idea was to replace government support for higher education with expanded access to private student loans — the policy that has saddled so many young people today with insurmountable levels of debt.)

The problem was that government never did seem to get out of the way. And for a new and increasingly influential generation of conservative thinkers, democratic majorities were the reason why. Austrian émigrés like Friedrich Hayek and Joseph Schumpeter argued in the 1940s that democracy would inevitably be hostile towards free-market capitalism — because politicians could only win over majorities by promising voters to do things for them, not to stand aside and let markets solve their problems (and because the intellectuals who gave politicians their ideas tended to be anti-capitalist, but that’s another story). Schumpeter approached this situation with pessimistic resignation. Hayek argued for constitutional changes to restrict the power of democratic majorities.

Hayek’s ideas found admirers inside and outside of the university. As the historian Nancy MacLean has shown, perhaps the most important academic/businessman right-wing power couple in the wake of Hayek’s critique was James Buchanan and Charles Koch. Buchanan’s first patrons were the avatars of Calhoun in Virginia politics who proposed a program of “massive resistance” to federally mandated school desegregation. They found Buchanan’s defense of a more-or-less unlimited right to free association to be a plausible intellectual foundation for their effort to recreate Jim Crow in private schools. But it was in the 1980s that Buchanan ascended to his position of greatest influence, after Koch helped to fund the creation of a virtual fiefdom for him at George Mason University and to spread his ideas throughout the elite far-right network that Koch was in the process of assembling.

Buchanan’s “public choice” theory helped to formalize the claims of Hayek and Schumpeter about democracy and capitalism. His starting point was the assertion that economists ought to model political activity exactly the same way they model activity in markets: as the undirected outcome of interactions between agents pursuing their own interests. Buchanan thought that in a well-functioning market system, the “interests” of agents included commitments to traditional moral values and the spirit of reciprocal cooperation, making free exchange almost always beneficial to all parties involved. But he thought that political actors were usually motivated by the pursuit of personal power — the kind of naked selfishness often ascribed to economic agents. Instead of worrying about “market failure,” he wrote, we should be afraid of “government failure”: when, in the name of trying to help solve some public problem, political actors actually push through inefficient policies that benefit them personally. In a democracy, majorities would always be screwing over minorities for their own gain.

In one of the cruelest ironies in recent American history, it is Charles Koch’s war on American democracy, waged by citing and popularizing Buchanan’s ideas, that has done more than anything else to make the public-choice nightmare a reality. Buchanan was, of course, correct that politicians are capable of prioritizing their personal self-interest above all else. But in recent history the problem has not been popular anti-capitalist reformers. It has been the political actors, in state legislatures, in Washington, and in the courts at all levels, who have taken Koch money, attended conferences where public-choice ideas are presented as gospel, or even gone to college or law school in programs shaped by right-wing funding, and who have enacted overtly anti-democratic policies — from voter ID laws to the post-Citizens United dismantling of campaign finance reforms — that have devastated American society while tremendously enriching themselves and their backers.

In a 2009 essay for the Koch-backed Cato Institute, one of those billionaire conservative donors, the major Trump backer Peter Thiel, wrote that he “no longer believe[s] that freedom and democracy are compatible.” The problem, in his view, was simple: “since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women — two constituencies that are notoriously tough for libertarians — have rendered the notion of ‘capitalist democracy’ into an oxymoron.” Refreshing, if alarming, frankness from this founding Facebook board member and close associate of the President of the United States.

I have no way of knowing for sure, but my hunch is that Paul Curran, and many others who oppose graduate student unionization at Harvard and elsewhere, would be repulsed by Thiel’s remarks. I doubt that they would recognize much of themselves in the history that I’ve told here. But like it or not, this is the rhetorical and ideological tradition that they’re exploiting. The fear of the “tyranny of the majority” — the government coming to take your slaves away and make you send your kids to school with the wrong kind of people — is what the literary critic Fredric Jameson might call the “political unconscious” of their argumentation. Capitalism doesn’t care about your good intentions. The university has a material stake in preventing unionization. The individual preferences of specific administrators are a feeble force compared to the overwhelming structural conflict between democracy and the dictatorship of property. As Du Bois put it in Black Reconstruction: “There can be no compromise.”

Kevin Williamson’s Useful Idiocy

For those not keeping up with the cutting edge of elite media drama, National Review writer Kevin D. Williamson was recently hired by the Atlantic to provide conservative “balance” to their opinion coverage — and then promptly un-hired when it became clear that a series of old tweets calling for women who get abortions to be executed by hanging actually reflected a deeply held conviction of his.

Good riddance. No one who wants to execute 25 to 30 percent of American women deserves any platform for their views, period. Far too much ink has already been spilled about the politics of opinion page hiring and firing decisions, and I don’t have anything insightful to add there. But Williamson’s fatal hot take has been black-boxed in these discussions as just a generic “insane” or “offensive” opinion, and that’s what I want to re-examine.

Because treating Williamson simply as a lunatic neglects the far more troubling truth that his opinion – so unpalatable to mainstream ears – is the straightforward logical consequence of two extremely widespread American opinions: that the death penalty is the appropriate punishment for heinous crimes, and that abortion is literally murder. L’affaire Williamson is a useful illustration of an important fact about American politics: that most people, “pro-life” and “pro-choice” alike, don’t actually take the anti-abortion position very seriously, because taking it seriously leads inexorably to conclusions that the vast majority of people find abhorrent.

A couple of years ago, after a terrorist attack on an abortion provider in Colorado, I wrote that “pro-lifers” distancing themselves from the terrorist were kidding themselves – that if you actually profess that abortion is the taking of a human life, it is extremely difficult to condemn violence against providers:

“In general, the use of violence to avert enormous loss of life is praised in our culture. We produce hagiographic movies about would-be-Hitler-assassins. We celebrated in the streets when Osama bin Laden was killed, and then produced a movie about his assassins, too. But if the statements of Ted Cruz and his fellow anti-abortion advocates are taken at face value, abortion has in the last forty years claimed far more lives than Hitler and bin Laden ever did, combined. Every day, there are approximately 3700 abortions in America. That’s the equivalent of seven 9/11s each week, if you are truly committed to the view that abortion is murder. We killed something like 40,000 militants in Afghanistan after just one 9/11. Why, then, would we condemn the use of violence against abortion providers?”

The conservative defenses of Williamson in the last 24 hours have often come close to giving the game away.

David French’s defense of Williamson at National Review:

“I’m a moderate, you see. If abortion is ever criminalized in this nation, I think only the abortionist (and not the mother) should face murder charges for poisoning, crushing, or dismembering a living child.”

Ben Domenech at The Federalist:

“In the case of Williamson, even someone who literally wrote a book titled The Case Against Donald Trump was unacceptable for The Atlantic because wrongthink about what ought to be the legal ramifications for tearing an unborn child apart – ramifications that ANY pro-lifer of any seriousness has wrestled with in conversation. Serious ethical and legal ramifications for destroying the unborn or the infirm are debated in philosophy classes every day – Williamson’s mistake, as an adopted son born to an unwed teenage mother, was being too honest about his belief that what he sees as the daily murder of infants should, in a more just society, have severe legal consequences.”

A separate Federalist article’s headline announces: “Kevin Williamson Fired From The Atlantic For Opposing Abortion.” What I want to suggest is that this at-first-glance tendentious description is actually (perhaps unintentionally) completely correct. If you are anti-abortion and take that commitment at all seriously, Williamson’s position must seem reasonable.

I think that this fact has important implications. First, it means that there is no sound justification for the all-too-frequent practice of treating abortion opponents with kid gloves in the media. Opposition to abortion is not just an abstract philosophical question that is fundamentally a private judgment call. It has material implications. Its logical consequence is, again, that a quarter to a third of American women are murderers and should be treated however you think murderers ought to be treated — which, again, in America, most commonly means with state-sanctioned or private violence. It is important for principled journalists to treat the “pro-life” argument for what it really is: an intellectual rationalization for violence against women.

But second, as I wrote in 2015, it means that many people who think of themselves as “pro-life” probably don’t hold that conviction very deeply. One consequence of the typical way we talk about abortion is that it comes to seem like a “debate” that is completely irresolvable, on which a public consensus could never be reached because both sides are entrenched in incommensurable value judgments. But this is not true. Polling suggests that only a vanishingly small minority of Americans think that abortion should be illegal in every conceivable circumstance and that women who get abortions should be treated as murderers. But that is actually the only conclusion consistent with a serious commitment to the abortion-is-literally-the-taking-of-a-human-life position. I think that the facts suggest that most “pro-lifers” are really more akin to “cultural Jews” or what George Santayana called “aesthetic Catholics.” That still presents a significant obstacle to any kind of consensus, but it is a profoundly different understanding of political culture than the common depiction of two serious, reasonable positions locked in eternal stalemate.

Pro-choice advocates shouldn’t apologize for their beliefs or worry about offending people who disagree with them. It is okay not only to think “personally” that abortion is morally permissible, but also to think that people who claim that abortion is murder are objectively committed to a repugnant position. And it is okay for us advocates of reproductive justice to hold out hope that this is a winnable fight, and that some day we will live in a more just, humane society and public culture.

The promised land

Somehow it seemed both shocking and inevitable.

After the assassination 50 years ago today of Dr. Martin Luther King, Jr., amidst the outpouring of grief and horror, amidst the nationwide rioting (King: “the language of the unheard”), amidst the disorientation, fear, and uncertainty, in moments of quiet there were some who voiced what was so uncanny about the whole thing: he knew it was going to happen.

How else to explain that haunting, magnificent speech he had delivered the previous day in a Memphis church? “Like anybody, I would like to live a long life,” King mused ominously as he concluded. “Longevity has its place. But I’m not concerned about that now.” In a turn at once prophetic and unsettling, he told his audience that God had allowed him, like Moses, to go up to the mountaintop. “I’ve seen the Promised Land. I may not get there with you. But I want you to know tonight, that we, as a people, will get to the promised land.”

The last years of his life saw an increasingly disillusioned and radical King. The Civil Rights Movement’s legislative victories of 1964 and 1965, along with his 1964 Nobel Peace Prize, cemented King’s celebrity, his heroic aura. But as he watched Jim Crow finally buckle towards collapse in the South, King was left with a bad taste in his mouth. This was what, incessantly for a decade, he had fought for, bled for, and compromised for, painfully and sometimes (as in the public effacement of his mentor Bayard Rustin, the gay ex-Communist who organized the 1963 March on Washington) shamefully. And yet the approach of black Americans towards formal legal equality seemed to have left de facto racism — housing segregation in Northern cities, enduring inequality in the provision of public goods in the South, impoverishment everywhere — untouched. At a time when vigorous government action was necessary to shore up social services, combat unemployment, and ensure access to healthcare, state resources were being funneled into a misguided, even imperialist war in Vietnam. As his criticism of the war and experimentation with anti-capitalist rhetoric began to alienate him from former allies in the liberal establishment, King felt embattled, embittered, and exhausted.

King’s frank acknowledgement of finitude and defeat in “I Have Been to the Mountaintop,” then, was not simple clairvoyance. It was the outcome of his sustained efforts to grapple with the suddenly inescapable awareness that, one way or another, sooner or later, he would die with his life’s work incomplete, with his dream still not fully realized. But the speech also reflected his conclusion: that the fact that we only ever experience justice as an ever-receding horizon, never fully manifest in the mortal world around us, does not make it any less worth fighting for; that our experience of the brokenness of our history is precisely what brings a better world into view; that the promised land is best glimpsed from outside: from the mountaintop.

There were other reasons, however, that King’s death felt, in retrospect, so inevitable, so cosmically, horrifyingly predetermined. For one thing, it happened in April.

King had reflected on and wrestled with the brokenness of American history as profoundly and as publicly as anyone in our nation’s past. He lived his career, figuratively, and on August 28, 1963 quite literally, in Abraham Lincoln’s shadow. Lincoln’s rhetoric was constantly, almost obsessively woven into King’s oratory: King too struggled to make scripture speak secularly, to find reason to hope without denying the reality of contemporary catastrophe, to depict America’s founding vision as both failed in practice and ultimately redeemable. And King’s movement, of course, was made necessary by the fact that the “Great Emancipator” himself, for reasons out of his control and for reasons for which he can be faulted, had left American slaves and their descendants still so profoundly unfree.

And so of course King would, like Lincoln, die by assassination, and die in April, that fateful month in which the Civil War both began and ended. If King really did know that his death was imminent, perhaps it was because of how strongly he felt the past, as Marx put it, weighing like a nightmare on the brains of the living, because of how keenly he understood the way that Americans’ distinctive desperation to leap clean of the past into the glorious future has its roots in the way the horrors of their history continue to haunt the present, like the ghost in Toni Morrison’s Beloved.

The economist Brad DeLong, for instance, wrote a blog post the other day summarily dismissing a wave of recent historical scholarship emphasizing the centrality of American slavery to the emergence of modern capitalism. As Americans have been doing since the collapse of Reconstruction in the late nineteenth century, DeLong’s impulse is to entomb slavery behind an impenetrable stone separating it from the enlightened present: “people alive today are not principal profiteers from the peculiar institution of plantation slavery.” I considered writing an extended response to DeLong, pointing out, for instance, his flawed assumption that plantation owners alone, rather than slave traders, land speculators, and financiers, were the chief economic beneficiaries of slavery; his truly bizarre claim that since there were four other major economic sectors besides cotton textiles in the early 19th century, slavery can therefore mathematically amount to no more than 1/5th of the explanation of the Industrial Revolution; and his utter neglect of the role of slavery in shaping the American institutions, and even the territorial map of the country, that we take for granted today. But I decided that a full-length refutation would be, on some level, beside the point. The enduring weight of slavery is something that you have to feel viscerally first and foremost, as King did.

You have to feel in your bones that the things we try to seal up in the tombs of the past have a tendency not to stay there. That is the great lesson of this month of April, the month that is most frequently, as it was in 1968 and 1865, home to Easter and Passover, two important religious festivals of oppression, liberation, and memory. King was not shot, as Lincoln was, on Good Friday, but the proximity has not been missed: the wave of urban revolt that ensued has occasionally been called the Holy Week Riots. Perhaps that is one reason why King’s death became, almost immediately, a martyrdom, cementing the transformation of a man — fraught with complications, imperfections, mortal limitations — into something more, as he now exists in memory.

Easter and Passover alike are remembering festivals, narrative festivals, where the telling and retelling of the past is understood as a transformative practice. The word in the Christian tradition for this concept is anamnesis, the word Jesus uses when he instructs his disciples to eat the bread (unleavened for Passover) “in memory of me.” The twentieth-century Dutch theologian Edward Schillebeeckx, inspired by the work of the Jewish neo-Marxist philosophers Theodor Adorno and Walter Benjamin, wrote that the anamnesis of the Crucifixion and Resurrection exemplified what he called more generally “negative experiences of contrast.” In negative experiences of contrast, we find ourselves in the position of what Benjamin imagined as “the angel of history.” Regarding the “wreckage upon wreckage” piled up in front of us, we “would like to stay, awaken the dead, and make whole what has been smashed,” though that aspiration remains ever frustrated. But it is precisely this awareness of contrast between the pain, loss, and defeat around us and our sense of justice and righteousness that demonstrates the validity of those ideals, and our conviction that change is necessary — like King’s glimpse of the promised land from the lonely mountaintop in the desert.

King, as a Christian, had faith that the violent death that Jesus suffered at the hands of an oppressive regime was not — could never be — the last word on his mission. What I think King understood on the eve of his death was that the end of his life in disappointment and tragedy would similarly fail to finish his struggle. It would expose anew the violence and cruelty that once drove him to act. It would destabilize any impulse to complacency, to satisfaction with partial victory. This is our task now, as we engage once more in acts of critical remembrance of King and his death: to remind those who sanitize his enduring challenge, who distort him into a symbol of moderation and “colorblindness,” who try to keep him sealed up in his own tomb, that he did not leave us on a note of triumph or self-satisfaction, but with a reminder: There is still work to do.

Reality check on guns

I have been surprised and disturbed over the last few days to see a lot of intelligent, left-leaning people on my social networks expressing unqualified hostility towards the post-Parkland movement for gun control. Taken together, the arguments I’ve seen range from strategic errors at best to wishful thinking bordering on the delusional at worst. Much of this discourse has been targeted at “mainstream liberals,” so I want to explain why, even from an anti-police, anti-capitalist perspective, I think it is profoundly mistaken.

Most arguments I’ve seen fall into three broad strands, ordered here from most to least sensible:

1. It reflects poorly on our society that only a movement led by and in response to the deaths of upper-middle class, mostly white kids has captured public attention, while poor black people and other people of color die daily from gun violence, often perpetrated by police officers, to widespread apathy.

2. Guns are necessary for victims of police violence to defend themselves. Gun control laws will just give the criminal justice system one more excuse to surveil, harass, kill, arrest, and incarcerate black people.

3. Arming the working class is a prerequisite to the revolutionary insurrection that is necessary to bring about real social change. Gun control is just an excuse for the ruling class to impose docility on workers and squash any nascent class consciousness. There’s a group called the Socialist Rifle Association that seems to be a focal point for this vein of argumentation on social media.

In order, then.

(1) is clearly correct, as far as it goes. It is a testament to American racism that school shootings are the only real locus of sustained mainstream outrage about gun violence. The conclusion that gun control legislation, or the current movement in its favor, is actively bad just does not follow from this premise at all. It is crucial here not to slip into a reduction of politics to discourse and representation. The goal shouldn’t be just to get the right kinds of people saying the right things in the media, it should be to create real change, and less violent death. As I see it, the obvious implication of this observation is that the movement should be expanded to encompass police demilitarization and other forms of political action against state-sponsored gun violence. And from the footage that I saw of the marches across the country last weekend, the movement is already tending in this direction.

So (2) is the only way to translate the real insight of (1) into an actual argument against gun control. Here’s where a historical perspective is important: I think there’s good reason to believe that, rather than the mutual exclusivity posited here, there’s actually a deep inner resonance between the fight for gun control and the fight against police violence.

Both the Second Amendment and modern American policing (at least in the South) have their origins in attempts to suppress resistance to slavery. Slaveholding anti-Federalists were terrified that reserving the legal ability to bear arms to the centralized federal armed forces would leave them defenseless against slave uprisings — hence their desire for those “militias” in the amendment text. The “slave patrols” that enforced the brutal rule of the Southern plantocracy evolved over time, as de jure slavery gave way to de facto apartheid, into municipal police forces. Ironically, the leftist argument here tends to grant the NRA’s ahistorical reframing of the Second Amendment to focus on the individual right to bear arms. But when this history is properly understood, it becomes clear that a radical challenge to the Second Amendment is also a radical challenge to the logic of American policing. Once more, the real conclusion here should be an expansion of our understanding of gun control — not that to be anti-gun is somehow to be intrinsically pro-cop.

But then there’s the practical question. Will gun control just result in more incarceration, but not less violence? This is a valid concern, extending one of the most powerful arguments against drug prohibition (for instance) to guns. What I have been arguing up to this point is that strengthened police forces are not a necessary consequence (and that gun control is a more plausible candidate to have an elective affinity with police reform than drug control). Which policies may actually emerge out of the present moment is difficult to predict, but especially given signs of grassroots vigilance from the left, it seems far too early to despair. Other nations — albeit without the U.S.’s distinctive history of racism — have succeeded in nearly eradicating gun violence without swelling prison populations, so it is manifestly possible. And the general policy strategy of this current movement seems to me to be supply-side rather than demand-side: more about cracking down on gun manufacturers and distributors than about expanding “on the street” surveillance.

What we have had up until this point is a discussion about the proper contours of the gun control movement, a discussion that is not just important but necessary. It is absolutely vital to seek to expand the push against guns to encompass political action on police violence. And policy proposals should, of course, be analyzed through a lens that takes into account American structural racism and the consequences of any law for marginalized populations.

Still, we do not yet have any real argument for why gun control is per se bad, only for why there is a scenario in which it might have limited or unintended consequences worth reckoning with. To substantiate the claim, which I have seen, that gun control is a stealth program to suppress resistance, that it is by its very nature a power grab by the ruling class, we will need something more — like the argument about self-defense in (2), which shades over into the argument of (3) about revolution. Granting for the sake of argument that gun control legislation is enacted in the absence of any real police reform, will gun control leave workers or black people powerless against the depredations of the racist, capitalist state?

Perhaps. But the question I want to ask the people making this argument is, What exactly do you think the status quo is like? If widespread gun ownership would frighten police away from abuse, or empower the working class to move towards revolution, it would have happened already. Americans are more heavily armed than a populace has ever been, to absolutely no effect. Wake up and look around you: American guns have left innumerable corpses pointlessly strewn across the country, and Jeff Sessions is the attorney general of the United States. The carceral state does not look likely to collapse spontaneously any time soon.

One of the great lessons of history is how frequently people think that they are the first ones to come up with an idea that many, many people have in fact had before. Do you really think no one has thought to use guns to break the power of the U.S. government before? From Shays’ Rebellion (1786) to the Whiskey Rebellion (1791) to the Harpers Ferry raid (1859) to the Weathermen’s Days of Rage (1969) to the far-right militia movement of the 1990s (including the Oklahoma City bombing), plenty of earlier American insurrectionaries have tried to foment widespread revolution, each time with exactly the same result: their movement was crushed and innocents died. And the political motivations of armed revolt are just as likely to be wildly reactionary as emancipatory. It is astonishing that the point needs to be belabored, but there has, of course, been exactly one armed revolt against the federal government in American history that has blossomed into full-fledged war, and it was fought, once more, to preserve slavery.

The leftist argument in favor of guns rests less on any principled vision of political strategy or sophisticated historical or social analysis than on an aestheticized vision of violence, a fetishism of the weapons themselves, and a romantic obsession with the local, the organic, the spontaneous, the unregulated. Such daydreams, with all their masculinist and fascistic overtones, may satisfy the longings for authentic heroism of disaffected pseudo-radicals who have never had their lives touched by the realities of gun violence, but they will not staunch the bleeding, and they do a profound disservice to the memories of victims of deadly oppression.

There is a word for the ideology that expects the unfettered circulation of fetishized commodities — detached from the actual relations of their production — to spontaneously disrupt the status quo and unleash pent-up transformative potential. It’s called libertarianism. And the right-wing libertarians in the boardrooms of American weapons manufacturers will be laughing at your revolution all the way to the bank.

Working to death

In the last few weeks, NBA All-Stars DeMar DeRozan and Kevin Love have grabbed headlines by disclosing their ongoing struggles with mental illness. DeRozan, a guard for the Toronto Raptors, spoke to the Toronto Sun on February 25th to clarify a cryptic tweet that, as he confirmed, was a reference to the depression that he’s dealt with since his youth. And on March 6th, Love, a forward for the Cleveland Cavaliers, published a first-person piece in the Players’ Tribune discussing his experience with anxiety and in-game panic attacks.

DeRozan’s and Love’s accounts had their differences: Love’s was more narrative and detailed; DeRozan’s more abstract and distilled. But in one respect they were identical. Both identified a compulsion to “throw their life,” in the Sun’s phrase, into their work, choking off opportunities for personal reflection and healing. In order to get help, they had to find a way to think of themselves as people, not just as basketball players. “What you do for a living doesn’t have to define who you are,” Love concluded.  

It is easy — so easy, in fact, that it is a likely reason for the traditional reticence of professional athletes on these matters — to read stories about the struggles of the rich and famous with curiosity and even empathy but to doubt their relevance to the challenges confronted by (to use my favorite American euphemism) the less advantaged. What is most interesting about DeRozan’s and Love’s accounts, and what makes them potentially valuable beyond their already worthwhile destigmatizing function, is that the particular syndrome they highlight — the colonization of personal life by work — is not a rarefied concern. More and more people are, in fact, “defined” in one way or another by their work, and not because of the predictable cultural pathologies of prestigious occupations but because of sheer material necessity.

This is the rare phenomenon most visible in its warped funhouse-mirror reflection. Consider a now-infamous recent ad campaign from the online freelance marketplace Fiverr. We learn that the frazzled-looking woman staring blankly back at us is a “doer” on account of her willingness to skip lunch and never sleep. Like love, she endures all things, in order to keep pace on the freelance treadmill — to “follow through on her follow through.” Someone genuinely thought this would be uplifting.

The “doer” is a useful if unintentional mascot for a constellation of changes in the American economy that began in the 1970s and have accelerated since the 1990s. Corporations have downsized in the name of “flexibility.” They have “outsourced” a medley of tasks once done in-house through a labyrinth of sub-contracting, “offshoring” plenty of jobs with the help of free trade agreements and crushing the power of labor unions in the process. Productivity has increased while wages have stagnated; a “casualized” workforce must increasingly take on backbreaking hours at multiple jobs in order to compensate for declines in benefits and the perpetual uncertainty of at-will employment. Almost a quarter of people who work part time do not do so by choice. The elimination of middle management has created an anti-hierarchical illusion of cooperative “teamwork” while serving mainly to concentrate power and earnings in the C-suite.

A variety of terms have proliferated to refer to this transformation, all imperfect. “The gig economy” captures one important facet, but ultimately barely scrapes the surface of something much more complex. Other similar formulations — the knowledge economy, the service economy, and so on — are simplistic to the point of misleading. In the late 1990s, the French sociologists Luc Boltanski and Eve Chiapello wrote, with reference to Max Weber’s Protestant Ethic, of a “new spirit of capitalism,” in a wide-ranging and often profound contribution that has nonetheless been justly criticized for blurring the boundary between management discourse and shop-floor reality. As a historian, I personally think that the most useful label, because of its emphasis on change over time, is “post-Fordism,” popularized in the late 1980s and early 1990s by the geographers David Harvey and Ash Amin.

“Fordism,” introduced to prominence in the 1930s by the Italian neo-Marxist Antonio Gramsci, is a term that characterizes the dominant productive mode in American and European capitalism around the middle of the twentieth century, particularly at the height (which Gramsci didn’t live to witness) of the postwar Keynesian welfare state. The standardized factory assembly line was the paradigm for what work meant. People (mostly men) would go to work from 9 to 5, spend the day doing routine tasks, and then return in the evenings and on weekends to their families, with whom they would spend their living wage on newly plentiful consumer goods.

The great advantage of Fordism was security. Fordist workers didn’t wonder where their next paycheck was coming from, or the one after that: they expected to do more or less the same thing their whole working lives, until retirement. Union-contract workers couldn’t be fired without just cause. By the mid-1960s (though always to a much lesser extent in the United States than in Europe), workers who for whatever reason couldn’t access the benefits of the Fordist workplace could expect decent assistance from the government.

And with security came precisely that good that seems so elusive today: work-life balance. It was possible to leave work at work, and develop a non-economic personal life. My grandfather, for example, always teetering on the brink as he attempted to support seven children on a salesman’s salary, nonetheless won prizes in local competitions for his painting and photography, bringing him enduring pride. Personal life didn’t necessarily have to be private, either. It could be, and in fact in the late 1950s and throughout the 1960s frequently was, political, as social movements demanding change blossomed on an unprecedented scale. Space to think, to reflect, to talk, and to organize was space to rebel.

Fordism, to say the least, had many downsides. Its foundation was the patriarchal and heteronormative nuclear family, with untold hours of unpaid household labor expected from wives. The unions that helped secure living wages were often racist, building a system, especially in Northern cities, that protected white workers at the expense of black migrants. And perhaps most obviously, assembly-line work, especially coupled with Taylorist labor management practices, could be dehumanizing, as depicted famously in Charlie Chaplin’s 1936 film Modern Times. These substantial disadvantages were partly why a vision of a post-Fordist future seemed, at one point, quite appealing.

But it is possible to resist the temptation of nostalgia and still question whether the system at which we’ve arrived represents a substantial improvement. The ideal of the familial patriarch has been replaced with the differently but equally masculinist ideal of the heroic entrepreneur, visiting Silicon Valley sex parties when he needs a break from disrupting. Perhaps the only thing worse for workers of color than racist unions is no unions at all. And that’s not to mention the racist mass incarceration that has tracked the emergence of post-Fordism, for a variety of complex reasons explored by scholars like Loïc Wacquant and Bernard Harcourt.

And, as I want to suggest, post-Fordist labor can be just as dehumanizing, in its own way, as the assembly line. The great promise of the new economy — work that you can really put yourself into — has been fulfilled with perverse literalness. “Doers” do indeed put themselves into their work, into their two or three or five jobs, until there is no self left to give, although the work, vampire-like (to employ Marx’s most famous metaphor), keeps on sucking. “The neoliberal urge to privatize everything,” in the phrase of the political scientist Bonnie Honig, has proved remarkably compatible with the emaciation of private existence for much of the workforce. Is it any wonder that a wave of “eliminativist” philosophers in recent decades (Paul and Patricia Churchland, Richard Rorty, Daniel Dennett) has denied the reality of many of the mental phenomena that form the common-sense understanding of consciousness?

The disintegration of the self in an endlessly recombinant world of fleeting connections is powerfully dramatized in Alex Garland’s new film Annihilation. In “the Shimmer,” the terrifying, beautiful mystery region at its heart, Natalie Portman and her scientist-soldier partners experience odd memory gaps, witness bizarre and uncontrollable physical transformations, and confront, as one crew member remarks in a quiet moment, the pull of self-destruction — of annihilation. It spoils very little to say that at the heart of the Shimmer is a yawning black pit, into which Portman’s character, inevitably, must descend.

The character in Annihilation who remarks on the universality of the self-destructive impulse juxtaposes it to the act of suicide, which she says “almost no one” commits. Which is true, in the grand scheme of things, but perhaps misleading. Most people don’t know that the suicide rate in the United States has increased by almost 25 percent since 1999. That is astonishing. It is among the most invisible public health crises of our time. Its causality is surely multifactorial, as will be its solution. But altering our headlong rush into annihilation will require a willingness to go beyond the individualistic language of “self-care,” of “taking time,” of “reaching out,” and to confront the political and economic forces that have helped place the project of selfhood in its present-day jeopardy.

DeRozan and Love have both said that they wanted to share their experience in order to demonstrate the universality of struggles with mental health. Their message is loud and clear: You are not alone. That message is, indeed, the first step. The next step is to start to wonder why.

Good men and good bosses

The Me Too movement was always going to get to this point. From its beginning it was both an attempt to reckon with the exposure of a singularly evil individual in a position of power in a major industry and an attempt to publicly reconceptualize a lot of common experiences in women’s lives as unacceptable and deserving of protest. The problem was that the vast majority of men responsible for inflicting quotidian experiences of exploitation, discomfort, and violation on women are not Weinsteins. Almost no one is a Weinstein. There are horror movie villains who aren’t Weinsteins. “Canceling” monsters was never going to be enough to actually lance the abscess festering beneath the skin.

And so now with the case of Aziz Ansari the tension between these two impulses — catching bad guys and reforming everyday life — has reached a breaking point. Many critics have seized on the ordinariness of what Ansari stands accused of having done in order to discredit the movement. Every man has done something like this, they say. It’s wrong to conflate this kind of thing with the actions of a Weinstein or a Cosby.

The uncomfortable truth is that they’re right. Every man has done something like this. Aggressive, coercive, disrespectful sexual behavior on a date? One viral tweet claimed that 75% of adult men have acted similarly at some point; if anything, that figure is probably an underestimate. Ansari’s character Tom Haverford does things like this constantly on Parks and Rec, to affable chuckling. With Ansari, in other words, we have finally reached a point where we have to move from insisting that “this isn’t normal” to insisting that there is a problem with what is normal, that we need, collectively, to do better than normal.

We have to come to terms with the fact that individual values or personality traits can only do so much in the face of structural incentives to be a certain type of man. We can’t just purge the bad ones. Parenting, media creation, friendship, workplace structure, and so on will all have to change, so that men consistently face consequences when their actions venture onto the spectrum that runs from Ansari at one end to Weinstein at the other. This includes but goes well beyond “education.” Ansari is plenty educated about these matters. What is needed is a non-pathologizing explanation of why he would act the way he did anyway, an explanation consistent with the fact that millions of similarly enlightened men do similar things on a daily basis.

An analogy might make this seem like a less daunting task. Labor relations is often reduced, even on the left, to a matter of the qualities of individual employers, in the same way that there is a persistent tendency in this current moment to reduce gender relations to a matter of the qualities of individual men. This has come up recently with respect to various living-wage campaigns in the U.S. and Canada. “If you can’t pay staff a $15/hr minimum wage AND benefits, you shouldn’t be in business,” one tweeter argued about Tim Horton’s. People have similarly expressed bewilderment about the fact that Vox Media, a liberal company, has been reluctant to recognize its employees’ unionization efforts. “It is not the responsibility of your employees to subsidize your shitty business,” another person recently summed up.

But that quite literally is the function of labor under capitalism. Businesses make money by earning more from selling the stuff that people make for them than they pay out to those people. If they paid their employees what they were actually worth to them, they would have no profits, would not be able to expand and diversify or spend money on advertising, and would likely swiftly go out of business. What separates a kind boss from a cruel boss is not the fact of labor exploitation but the enthusiasm with which they pursue it. That is why labor organizations like unions and regulatory laws like the minimum wage are important: they provide an external constraint on the free rein that the logic of capitalism assigns to employers, because trusting in “good” bosses to spontaneously act with integrity is a recipe for getting burned.

The incentives for men to exploit their sexual partners, especially women, are less material or economic and more a matter of culture and ideology. But the logic of masculinity is similarly such that external constraint, ultimately leading to wholesale alteration, is necessary above and beyond the good will of individuals in power. It’s worth noting here that the two structures are, in practical fact, profoundly intertwined. I was struck by the class markers peppered throughout the Ansari article: his “exclusive” address, his pushiness about fine wine, his demand that he “call her a ride,” and so on. More broadly, the locus of Me Too activism has often been the workplace: it is not just men who have been its targets but male bosses. True “accountability” for men, the robust and consequential kind, will require an equalization of economic power. (Though such a change is obviously not sufficient: I’m drawing an analogy, not an equivalency.)

Marx and Engels once famously railed against the “misconception that induces you to transform into eternal laws of nature and of reason, the social forms springing from your present mode of production and form of property.” As ex-Google employee James Damore reminded us recently, gendered exploitation too is so normalized and so ubiquitous that it can come to seem like a law of nature. These seem to be the two poles that we are caught between: male abuse as an evolutionary necessity and male abuse as an aberration of pathological monsters. We need now more than ever to seek out the excluded middle, where we might find the possibility of collective social transformation — a better normal.