
The Convivial Society

Outsourcing Virtue

17 min • 18 August 2021

“What is fundamental to a convivial society is not the total absence of manipulative institutions and addictive goods and services, but the balance between those tools which create the specific demands they are specialized to satisfy and those complementary, enabling tools which foster self-realization. The first set of tools produces according to abstract plans for men in general; the other set enhances the ability of people to pursue their own goals in their unique way.”

— Ivan Illich, Tools for Conviviality

Welcome to the Convivial Society, especially the many of you for whom this is the first actual installment to hit your inbox. If you signed up in the last week or so, you may want to check out the brief orientation to the newsletter I sent out recently to new readers. Below you’ll find a full installment of the newsletter, which contains an essay followed by links to a variety of items, some of them with a bit of additional commentary from me, and a closing note. Read on. Share promiscuously.

In lines he composed for a play in the mid-1930s, T. S. Eliot wrote of those who

“constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.”

That last line has always struck me as a rather apt characterization of a certain technocratic impulse, which presumes that techno-bureaucratic structures and processes can eliminate the necessity of virtue, or maybe even human involvement altogether. We might just as easily speak of systems so perfect that no one will need to be wise or temperate or just. Just adhere to the code or the technique with unbending consistency and all will be well.

This dream, as Eliot put it, remains explicitly compelling in many quarters. It is also tacitly embedded in the practices fostered by many of our devices, tools, and institutions. So it’s worth thinking about how this dream manifests itself today and why it can so easily take on a nightmarish quality.

In Eliot’s age, increasingly elaborate and Byzantine bureaucracies automated human decision making in the pursuit of efficiency, speed, and scale, thus outsourcing human judgment and, consequently, responsibility. One did not require virtue or good judgment, only a sufficiently well-articulated system of rules. Of course, under these circumstances, bureaucratic functionaries might become “papier-mâché Mephistopheles,” in Conrad’s memorable phrase, and they may abet forms of what Arendt later called banal evil. But the scale and scope of modern societies also seem to require such structures in order to operate reasonably well, although this is certainly debatable. Whether strictly necessary or not, these systems introduce a paradox: in order to ostensibly serve human society, they must eliminate or displace elements of human experience. Of course, what becomes evident eventually is that the systems are not, in fact, serving human ends, at least not necessarily so.

To take a different class of example, we might also think of the modern fixation with technological fixes to what may often be irreducibly social and political problems. In a prescient 2020 essay about the pandemic, Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” No need for good judgment, responsible governance, self-sacrifice, or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.

Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Most recently, I encountered the case of NarxCare reported in Wired. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis, whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. You can read the article for further details, which are both fascinating and disturbing, but here’s the pertinent part for my purposes:

“Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.”

This is an obviously complex and sensitive issue, but it’s hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opacity, evasion of responsibility, and CYA dynamics that have characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim that “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it.

This technocratic impulse is alive and well, and we’ll come back to it in a moment, but it occurs to me that we might also profitably invert Eliot’s claim and apply it to our digital media environment, in which we experience systems so imperfect that it turns out everyone will need to be really, really good. Let me explain what I mean by this. The thought occurred to me when I read yet another tweet advocating for the cultivation of digital media literacy. You should know that, at one level, I think this is fine and possibly helpful under certain circumstances. However, I also think it underestimates or altogether ignores the non-intellectual elements of the problem. It seems unrealistic, for example, to expect that someone who is likely already swamped by the demands of living in a complex, fast-paced, and precarious social milieu will have the leisure and resources to thoroughly “do their own research” about every dubious or contested claim they encounter online, or to adjudicate the competing claims made by those who are supposed to know what they are talking about. There’s a lot more to be said about this dynamic, of course. It raises questions about truth, certainty, trust, authority, expertise, and more, but here I simply want to highlight the moral demands, because searching for the truth, or a sufficient approximation of it, is more than a merely intellectual activity. It involves, for example, humility, courage, and patience. It presumes a willingness to break with one’s tribe or social network, with all the risks that may entail. In short, you need to be not just clever but virtuous, and, depending on the degree to which you live online, you would need to be so persistently over time, and, recently, of course, during a health crisis that has generated an exhausting amount of uncertainty and a host of contentious debates about private and public actions.

This is but one case, the one which initially led me to invert Eliot’s line. It doesn’t take a great deal of imagination to conjure up other similar examples of the kind of virtue our digital devices and networks tacitly demand of us. Consider the discipline required to responsibly direct one’s attention from moment to moment rather than responding with Pavlovian alacrity when our devices beckon us. Or the degree of restraint necessary to avoid the casual voyeurism that powers so much of our social media feeds. Or how those same platforms can be justly described as machines for the inducement of petty vindictiveness and less-than-righteous indignation. Or, alternatively, as carefully calibrated engines of sloth, greed, envy, despair, and self-loathing. The point is not that our digital media environment necessarily generates vice; rather, it’s that it constitutes an ever-present field of temptation, which can require, in turn, monastic degrees of self-discipline to manage. I’m reminded, for example, of how, years ago, Evgeny Morozov described buying a timed safe in which to lock his smartphone, and how, when he discovered he could unscrew the timing mechanism, he locked the screwdriver in there, too. Under certain circumstances and for certain people, maintaining a level of basic human decency, or even psychic well-being, may feel like an exercise in moral sainthood. Perhaps this explains the recent interest in stoicism, although we do well to remember Pascal’s pointed criticism of the stoics: “They conclude that we can always do what we can sometimes do.”

We alternate, then, between environments that seek to render virtue superfluous and environments that tacitly demand a high degree of virtue in order to operate benignly. Both engender their own set of problems, and, not surprisingly, there’s a reciprocal relationship between these two dynamics. Failure to exhibit the requisite virtue creates a demand for the enhancement of rule-based systems to regulate human behavior. Speech on social media platforms is a case in point that comes readily to mind. The scale and speed of communication on these platforms generate infamously vexing issues related to speech and expression, which are especially evident during a volatile election season or a global pandemic. These issues do not, in my view, admit of obvious solutions beyond shutting down the platforms altogether. That not being a presently viable option, companies and lawmakers are increasingly pressured to apply ever more vigilant and stringent forms of moderation, often with counterproductive results. This is yet another complex problem, of course, but it also illustrates the challenge of governing by codes that seek to manage human behavior by generating rules of conduct with attendant consequences for their violation, which, again, may be the only viable way of governing human behavior at the numeric, spatial, and temporal scale of digital information environments. In any case, the impulse is to conceive of moral and political challenges as technical problems admitting of engineered solutions.

To be clear, it’s not that codes and systems are useless. They can have their place, but they require sound judgment in their application, precisely to the degree that they fail to account for the multiplicity of meaningful variables and goods at play in human relations. Trouble arises when we are tempted to make the code and its application coterminous, which would require a rule to cover every possible situation and extenuating circumstance, ad infinitum. This is the temptation that animates the impulse to apply a code with blind consistency, as if this were equivalent to justice itself. The philosopher Charles Taylor has called this tendency in modern liberal societies “code fetishism,” and it ought to be judiciously resisted. According to Taylor, code fetishism “tends to forget the background which makes sense of any code—the variety of goods which rules and norms are meant to realize—as well as the vertical dimension which arises above all of these.” Code fetishism in this sense is not unlike what Jacques Ellul called technique, a relentless drive toward efficiency that eventually became an end in itself, having lost sight of the goods for the sake of which efficiency was pursued in the first place.

As an aside, I’ll note that code fetishism may be something like a default setting for modern democratic societies, which have a tendency to tilt toward technocracy (while, of course, also harboring potent counter-tendencies). The tilting follows from a preference for proceduralism, or the conviction that an ostensibly neutral set of rules and procedures is an adequate foundation for a just society, particularly in the absence of substantive agreement about the nature of a good society. In this way, there is a longstanding symbiosis between modern politics and modern technology. They both traffic in the ideal of neutrality—neutral tools, neutral processes, and neutral institutions. It should not be surprising, then, that contemporary institutions turn toward technological tools to shore up the ideal of neutrality. The presumably neutral algorithm will solve the problem of bias in criminal sentencing or loan applications or hiring, for example. And neither should it be surprising to discover that what we think of as modern society, built upon this tacit pact between ostensibly neutral political and technological structures, begins to fray and lose its legitimacy as the supposed neutrality of both becomes increasingly implausible. (Okay, I realize this paragraph calls for a book of footnotes, but it will have to do for now.)

As it turns out, Charles Taylor also wrote the Foreword to Ivan Illich’s The Rivers North of the Future. (And—caveat lector, new readers—at the Convivial Society, we eventually come around to Illich at some point.) In his Foreword, Taylor explored Illich’s seemingly eccentric arguments about the origins of modernity in the corruption of the Christian church. It’s a compelling argument, but I’ll leave its merits to one side here in order to home in on Taylor’s comments about code fetishism, or, to recall where we began, the impulse to build systems so perfect no one will need to be good.

[There’s an excellent discussion of Taylor, code fetishism, and Illich in Jeffrey Bilbro’s wonderful guide to the work of Wendell Berry, Virtues of Renewal: Wendell Berry’s Sustainable Forms.]

“We think we have to find the right system of rules, of norms, and then follow them through unfailingly,” Taylor wrote. “We cannot see any more,” he continued, “the awkward way these rules fit enfleshed human beings, we fail to notice the dilemmas they have to sweep under the carpet […].”

These codes often spring from decent motives and good intentions, but they may be all the worse for it. “Ours is a civilization concerned to relieve suffering and enhance human well-being, on a universal scale unprecedented in history,” Taylor argued, “and which at the same time threatens to imprison us in forms that can turn alien and dehumanizing.” “Codes, even the best codes,” Taylor concludes, “can become idolatrous traps that tempt us to complicity in violence.” Or, as Illich argued, if you forget the particular, bodily, situated context of the other, then the freedom to do good by them, exemplified in the story of the Good Samaritan, can become the imperative to impose the good as you imagine it on them. “You have,” as Illich bluntly put it, “the basis on which one might feel responsible for bombing the neighbour for his own good.”

In Taylor’s reading, Illich “reminds us not to become totally invested in the code … We should find the centre of our spiritual lives beyond the code, deeper than the code, in networks of living concern, which are not to be sacrificed to the code, which must even from time to time subvert it.” “This message,” Taylor acknowledges, “comes out of a certain theology, but it should be heard by everybody.” And, for what it’s worth, I second Taylor on that note. My chief aim in this post has been to suggest that the code fetishism Taylor described manifests itself both intellectually and materially. Which is to say that it can be analyzed as a principle animating formal legal codes, and it can be implicit in our material culture, informing the technologies that shape our habits and assumptions. To put it another way, dealing with humanity’s imperfections through systems, tools, and techniques is a longstanding strategy. It has its benefits, but we need to be mindful of its limitations, especially when ignoring those limitations can lead to demoralizing and destructive consequences.

As I was wrapping up this post, I caught a tweet from Timothy Burke that rather nicely sums this up, and I’ll give him the last word. Commenting on an article arguing that “student engagement data” should replace student recommendations, Burke observed, “This is one of those pieces that identifies a problem that's rooted in the messy and flawed humanity of the systems we make and then imagines that there is some metric we could make that would flush that humanity out--in order to better judge some kind of humanity.”

It will be worth pondering this impulse to alleviate the human condition by eliminating elements of human experience.

News and Resources

* Clive Thompson (I almost typed Owen!) on “Why CAPTCHA Pictures Are So Unbearably Depressing”: “They weren’t taken by humans, and they weren’t taken for humans. They are by AI, for AI. They thus lack any sense of human composition or human audience. They are creations of utterly bloodless industrial logic. Google’s CAPTCHA images demand you to look at the world the way an AI does.”

* And here is Thompson again on productivity apps in an article titled “Hundreds of Ways to Get S#!+ Done—and We Still Don’t”: “To-do lists are, in the American imagination, a curiously moral type of software. Nobody opens Google Docs or PowerPoint thinking ‘This will make me a better person.’ But with to-do apps, that ambition is front and center. ‘Everyone thinks that, with this system, I’m going to be like the best parent, the best child, the best worker, the most organized, punctual friend,’ says Monique Mongeon, a product manager at the book-sales-tracking firm BookNet and a self-admitted serial organizational-app devotee. ‘When you start using something to organize your life, it’s because you’re hoping to improve it in some way. You’re trying to solve something.’”

There’s a lot I’m tempted to say in response to the subject of this piece. I’m reminded, for example, of a quip from Umberto Eco: “We make lists because we don’t want to die.” I think, too, of Hartmut Rosa describing how modernity turns the human experience of the world into “a series of points of aggression.” And then all sorts of Illichian responses come to mind. At one point Thompson mentions how “quite apart from one’s paid toil, there’s been an increase in social work—all the messaging and posts and social media garden-tending that the philosopher and technologist Ian Bogost calls ‘hyperemployment,’” and I’m immediately reminded of what Illich called shadow work, a “form of unpaid work which an industrial society demands as a necessary complement to the production of goods and services.” So here we are dealing with digitized shadow work, except that we’re now serving an economy based on the accumulation of data. And, finally, I’m tempted to ask, quite seriously, why anyone should think that they need to be productive at all. Of course, I know some of the answers that are likely to be given, that I would give. But, honestly, that’s just the sort of question that I think is worth taking seriously and contemplating. What counts as productivity anyway? Who defines it? Who imposes the standard? Why have I internalized it? What is the relationship among productivity, purpose, and happiness? The problem with productivity apps, as Thompson suggests at one point, is the underlying set of assumptions about human well-being and purpose that are themselves built into the institutions and tools of contemporary society.

* Speaking of shadow work, here is a terrific piece by Jackie Brown and Philippe Mesly for Real Life on some of the lesser-known but critical themes in Ivan Illich’s later work: “Meanwhile, the economy’s incessant claims on our time and energy diminishes our engagement in non-commodified activities. According to Illich, it is only the willing acceptance of limits — a sense of enoughness — that can stop monopolistic institutions from appropriating the totality of the Earth’s available resources, including our identities, in their constant quest for growth.”

* From an essay by Shannon Vallor on technology and the virtues (about which she quite literally wrote the book): “Humanity’s greatest challenge today is the continued rise of a technocratic regime that compulsively seeks to optimise every possible human operation without knowing how to ask what is optimal, or even why optimising is good.”

* Thoughtful piece by Deb Chachra on infrastructure as “Care at Scale”: “Our social relationships with each other—our culture, our learning, our art, our shared jokes and shared sorrow, raising our children, attending to our elderly, and together dreaming of our future—these are the essence of what it means to be human. We thrive as individuals and communities by caring for others, and being taken care of in turn. Collective infrastructural systems that are resilient, sustainable, and globally equitable provide the means for us to care for each other at scale. They are a commitment to our shared humanity.”

I confess, however, that I did quibble with this line: “Artificial light compensates for our species’ poor night vision and gives us control over how we spend our time, releasing us from the constraints of sunrise and sunset.” My quibble was chiefly with the implications of “control” and “constraints.” Nonetheless, this was in many ways a model for how to make a public case for moral considerations with regard to technical systems.

* Podcast interview with Zachary Loeb on “Tech criticism before the Techlash” (which is the best tech criticism), focusing on Lewis Mumford and Joseph Weizenbaum. Loeb knows the tradition well, and I commend his work.

* A 2015 piece from Adam Elkus exploring the relationship between algorithms and bureaucracies: “If computers implementing some larger social value, preference, or structure we take for granted offends us, perhaps we should do something about the value, preference, or structure that motivates the algorithm.”

* An excerpt in Logic from Predict and Surveil: Data, Discretion, and the Future of Policing by Sarah Brayne looking at the use of Palantir in policing: “Because one of Palantir’s biggest selling points is the ease with which new, external data sources can be incorporated into the platform, its coverage grows every day. LAPD data, data collected by other government agencies, and external data, including privately collected data accessed through licensing agreements with data brokers, are among at least nineteen databases feeding Palantir at JRIC. The data come from a broad range of sources, including field interview cards, automatic license plate readings, a sex offender registry, county jail records (including phone calls, visitor logs, and cellblock movements), and foreclosure data.”

* With data collection, facial-recognition technology, and questions of bias in mind, consider this artifact discussed in a handout produced by Jim Strickland of the Computer History Museum. It is a rail ticket with a primitive facial-recognition feature: “punched photographs,” generic faces to be punched by the conductor according to their similarity to the ticket holder. These inspired the Hollerith machine, which was used to tabulate census data from 1890 to the mid-20th century.

* “Interior view of the Central Social Insurance Institution showing men working in mobile work stations used to access the card catalog drawers, Prague, Czechoslovakia.” Part of a 2009 exhibition, “Speed Limits.”

* A review of Shannon Mattern’s new collection of essays, A City Is Not a Computer: Other Urban Intelligences. Mattern’s work is always worth reading. If you recognize the name but are not sure why, it might be because I’ve shared her work in the newsletter on a number of occasions.

* “In Ocado’s grocery warehouses, thousands of mechanical boxes move on the Hive.”

* For your amusement (I was amused, anyway): a historian of naval warfare rates nine Hollywood battle scenes for accuracy. The professor’s deadpan style makes the video.

Re-framings

— “Another Time,” by W. H. Auden (1940):

For us like any other fugitive,
Like the numberless flowers that cannot number
And all the beasts that need not remember,
It is today in which we live.

So many try to say Not Now,
So many have forgotten how
To say I Am, and would be
Lost, if they could, in history.

Bowing, for instance, with such old-world grace
To a proper flag in a proper place,
Muttering like ancients as they stump upstairs
Of Mine and His or Ours and Theirs.

Just as if time were what they used to will
When it was gifted with possession still,
Just as if they were wrong
In no more wishing to belong.

No wonder then so many die of grief,
So many are so lonely as they die;
No one has yet believed or liked a lie,
Another time has other lives to live.

— I stumbled upon an essay by Wendell Berry written circa 2002 titled “A Citizen’s Response to the National Security Strategy.” It struck me as a piece worth revisiting:

And so it is not without reason or precedent that a citizen should point out that, in addition to evils originating abroad and supposedly correctable by catastrophic technologies in “legitimate” hands, we have an agenda of domestic evils, not only those that properly self-aware humans can find in their own hearts, but also several that are indigenous to our history as a nation: issues of economic and social justice, and issues related to the continuing and worsening maladjustment between our economy and our land.

Thanks for reading this latest installment of the newsletter, which was, I confess, a bit tardy. As always, my hope is that you found something useful, encouraging, or otherwise helpful in the foregoing.

In case you’ve not seen it yet, my first essay with Comment Magazine is now online: “The Materiality of Digital Culture.” The key point, more or less, is this: “The problem with digital culture, however, is not that it is, in fact, immaterial and disembodied, but that we have come to think of it as such.”

By way of reminder, comments are open to paid subscribers, but all are always welcome to reach out via email. Depending on what happens to be going on when you do, I’ll try to get back to you in relatively short order.

Finally, I read a comment recently about the guilt someone felt unsubscribing from newsletters, especially if they thought the author would be notified. Life’s too short, folks. I’d be glad for you to give the newsletter time to prove itself, but I absolve you of any guilt should you conclude that this just isn’t for you. Besides, I’ve turned that notification off, as any sane person would. I’ll never know!

Trust you all are well.

Cheers,

Michael


