50 episodes • Length: 20 min • Monthly
Audio version of The Convivial Society, a newsletter exploring the intersections of technology, society, and the moral life.
theconvivialsociety.substack.com
The podcast The Convivial Society is created by L. M. Sacasas.
Hello all,
The audio versions keep coming. Here you have the audio for “Secularization Comes for the Religion of Technology.”
Below you’ll find a couple of paintings that I cite in the essay.
Thanks for listening. Hope you enjoy it.
Cheers,
Michael
I continue to catch up on supplying audio versions of past essays. Here you have the audio for “Vision Con,” an essay about Apple’s mixed reality headset originally published in early February.
The aim is to get caught up and then post the audio version either the same day as or very shortly after I publish new written essays.
Thanks for listening!
Just before my unexpected hiatus during the latter part of last year, I had gotten back to the practice of recording audio versions of my essays. Now that we’re up and running again, I wanted to get back to these recordings as well, beginning with this recording of the first essay of this year. Others will follow shortly, and as time allows I will record some of the essays from the previous year as well.
You can sign up for the audio feed at Apple Podcasts or Spotify.
At long last, the audio version of the Convivial Society returns.
It’s been a long time, which I do regret. Going back to 2020, it had been my practice to include an audio version of the essay with the newsletter. The production value left a lot to be desired, unless simplicity is your measure, but I know many of you appreciated the ability to listen to the essays. The practice became somewhat inconsistent in mid-2022, and then fell off altogether this year. More than a few of you have inquired about the matter over the past few months. Some of you graciously assumed there must have been some kind of technical problem. The truth, however, was simply that this was a ball I could drop without many more things falling apart, so I did. But I was sorry to do so and have always intended to bring the feature back.
So, finally, here it is, and I aim to keep it up.
I’m sending this one out via email to all of you on the mailing list in order to get us all on the same page, but moving forward I will simply post the audio to the site, which will also publish the episode to Apple Podcasts and Spotify.
So if you’d like to keep up with the audio essays, you can subscribe to the feed at either service to be notified when new audio is posted. Otherwise just keep an eye on the newsletter’s website for the audio versions that will accompany the text essays. The main newsletter will, of course, still come straight to your inbox.
One last thing. I intend, over the coming weeks, to post audio versions of the past dozen or so essays for which no audio version was ever recorded. If that’s of interest to you, stay tuned.
Thanks for reading and now, once again, for listening.
Cheers,
Michael
The newsletter is public and free to all, but sustained by readers who value the writing and have the means to support it.
Welcome back to the Convivial Society. In this installment, you’ll find the audio version of the latest essay, “What You Get Is the World.” I try to record an audio version of most installments, but I send them out separately from the text version for reasons I won’t bore you with here. Incidentally, you can also subscribe to the newsletter’s podcast feed on Apple Podcasts and Spotify. Just look up The Convivial Society.
Aside from the audio essay, you’ll find an assortment of year-end miscellany below.
I trust you are all well as we enter a new year. All the best to you and yours!
A Few Notable Posts
Here are six installments from this past year that seemed to garner a bit of interest. Especially if you’ve just signed up in recent weeks, you might appreciate some of these earlier posts.
Incidentally, if you have appreciated the writing and would like to become a paid supporter at a discounted rate, here’s the last call for this offer. To be clear, the model here is that all the writing is public but I welcome the patronage of those who are able and willing. Cheers!
Podcast Appearances
I’ve not done the best job of keeping you all in the loop on these, but I did show up on a few podcasts this year. Here are some of those:
With Charlie Warzel on how being online traps us in the past
With Georgie Powell on reframing our experience
Year’s End
It is something of a tradition at the end of the year for me to share Richard Wilbur’s poem, “Year’s End.” So, once again I’ll leave you with it.
Now winter downs the dying of the year,
And night is all a settlement of snow;
From the soft street the rooms of houses show
A gathered light, a shapen atmosphere,
Like frozen-over lakes whose ice is thin
And still allows some stirring down within.

I’ve known the wind by water banks to shake
The late leaves down, which frozen where they fell
And held in ice as dancers in a spell
Fluttered all winter long into a lake;
Graved on the dark in gestures of descent,
They seemed their own most perfect monument.

There was perfection in the death of ferns
Which laid their fragile cheeks against the stone
A million years. Great mammoths overthrown
Composedly have made their long sojourns,
Like palaces of patience, in the gray
And changeless lands of ice. And at Pompeii

The little dog lay curled and did not rise
But slept the deeper as the ashes rose
And found the people incomplete, and froze
The random hands, the loose unready eyes
Of men expecting yet another sun
To do the shapely thing they had not done.

These sudden ends of time must give us pause.
We fray into the future, rarely wrought
Save in the tapestries of afterthought.
More time, more time. Barrages of applause
Come muffled from a buried radio.
The New-year bells are wrangling with the snow.
Thank you all for reading along in 2022. We survived, and I’m looking forward to another year of the Convivial Society in 2023.
Cheers, Michael
Welcome again to the Convivial Society, a newsletter about technology and culture. This post features the audio version of the essay that went out in the last installment: “Lonely Surfaces: On AI-generated Images.”
For the sake of recent subscribers, I’ll mention that I ordinarily post audio of the main essays (although a bit less regularly than I’d like over the past few months). For a variety of reasons that I won’t bore you with here, I’ve settled on doing this by sending a supplement with the audio separately from the text version of the essay. That’s what you have here.
The newsletter is public but reader supported. So no customers, only patrons. This month if you’d like to support my work at a reduced rate from the usual $45/year, you can click here:
You can go back to the original essay for links to articles, essays, etc. You can find the images and paintings I cite in the post below.
Jason Allen’s “Théâtre D’opéra Spatial”
Rembrandt’s “The Anatomy Lesson of Dr Nicolaes Tulp”
Detail from Pieter Bruegel’s “Harvesters”
The whole of Bruegel’s “Harvesters”
Welcome back to the Convivial Society. In this installment, you’ll find the audio version of two recent posts: “The Pathologies of the Attention Economy” and “Impoverished Emotional Lives.” I’ve not combined audio from two separate installments before, but the second is a short “Is this anything?” post, so I thought it would be fine to include it here. (By the way, I realized after the fact that I thoughtlessly mispronounced Herbert Simon’s name as Simone. I’m not, however, sufficiently embarrassed to go back and re-record or edit the audio. So there you have it.)
If you’ve been reading over the past few months, you know that I’ve gone back and forth on how best to deliver the audio version of the essays. I’ve settled for now on this method, which is to send out a supplement to the text version of the essay. Because not all of you listen to the audio version, I’ll include some additional materials (links, resources, etc.) so that this email is not without potential value to those who do not listen to the audio.
Farewell Real Life
I noted in a footnote recently that Real Life Magazine had lost its funding and would be shutting down. This is a shame. Real Life consistently published smart and thoughtful essays exploring various dimensions of internet culture. I had the pleasure of writing three pieces for the magazine between 2018 and 2019: “The Easy Way Out,” “Always On,” and “Personal Panopticons.”
I was also pleasantly surprised to encounter essays in the past year or two drawing on the work of Ivan Illich: “Labors of Love” and “Appropriate Measures,” each co-authored by Jackie Brown and Philippe Mesly, as well as “Doctor’s Orders” by Aimee Walleston.
And at any given time I’ve usually had a handful of Real Life essays open in tabs waiting to be read or shared. Here are some more recent pieces that are worth your time: “Our Friend the Atom: The aesthetics of the Atomic Age helped whitewash the threat of nuclear disaster,” “Hard to See: How trauma became synonymous with authenticity,” and “Life’s a Glitch: The non-apocalypse of Y2K obscures the lessons it has for the present.”
Links
The latest installment in Jon Askonas’s ongoing series in The New Atlantis is out from behind the paywall today. In “How Stewart Made Tucker,” Askonas weaves a compelling account of how Jon Stewart prepared the way for Tucker Carlson and others:
In his quest to turn real news from the exception into the norm, he pioneered a business model that made it nearly impossible. It’s a model of content production and audience catering perfectly suited to monetize alternate realities delivered to fragmented audiences. It tells us what we want to hear and leaves us with the sense that “they” have departed for fantasy worlds while “we” have our heads on straight. Americans finally have what they didn’t before. The phony theatrics have been destroyed — and replaced not by an earnest new above-the-fray centrism but a more authentic fanaticism.
You can find earlier installments in the series here: Reality — A post-mortem. Reading through the essay, I was struck again and again by how foreign and distant the world of the late 90s and early aughts now seems. In any case, Jon’s work in this series is worth your time.
Kashmir Hill spent a lot of time in Meta’s Horizons to tell us about life in the metaverse:
My goal was to visit at every hour of the day and night, all 24 of them at least once, to learn the ebbs and flows of Horizon and to meet the metaverse’s earliest adopters. I gave up television, books and a lot of sleep over the past few months to spend dozens of hours as an animated, floating, legless version of myself.
I wanted to understand who was currently there and why, and whether the rest of us would ever want to join them.
Ian Bogost on smart thermostats and the claims made on their behalf:
After looking into the matter, I’m less confused but more distressed: Smart heating and cooling is even more knotted up than I thought. Ultimately, your smart thermostat isn’t made to help you. It’s there to help others—for reasons that might or might not benefit you directly, or ever.
Sun-ha Hong’s paper on predictions without futures. From the abstract:
… the growing emphasis on prediction as AI's skeleton key to all social problems constitutes what religious studies calls cosmograms: universalizing models that govern how facts and values relate to each other, providing a common and normative point of reference. In a predictive paradigm, social problems are made conceivable only as objects of calculative control—control that can never be fulfilled but that persists as an eternally deferred and recycled horizon. I show how this technofuture is maintained not so much by producing literally accurate predictions of future events but through ritualized demonstrations of predictive time.
Miscellany
As I wrote about the possibility that the structure of online experience might impoverish our emotional lives, I recalled the opening paragraph of the Dutch historian Johan Huizinga’s The Waning of the Middle Ages. I can’t say that I have a straightforward connection to make between “the passionate intensity of life” Huizinga describes and my own speculations about the affective consequences of digital media, but I think there may be something worth getting at.
When the world was half a thousand years younger all events had much sharper outlines than now. The distance between sadness and joy, between good and bad fortune, seemed to be much greater than for us; every experience had that degree of directness and absoluteness that joy and sadness still have in the mind of a child. Every event, every deed was defined in given and expressive forms and was in accord with the solemnity of a tight, invariable life style. The great events of human life—birth, marriage, death—by virtue of the sacraments, basked in the radiance of divine mystery. But even the lesser events—a journey, labor, a visit—were accompanied by a multitude of blessings, ceremonies, sayings, and conventions.
From the perspective of media ecology, the shift to print as the dominant cultural medium is interpreted as having the effect of tempering the emotional intensity of oral culture and tending instead toward an ironizing effect as it generates a distance between an emotion and its expression. Digital media curiously scrambles these dynamics by generating an instantaneity of delivery that mimics the immediacy of physical presence. In 2019, I wrote in The New Atlantis about how digital media scrambles the psychodynamics (Walter Ong’s phrase) of orality and literacy in often unhelpful ways: “The Inescapable Town Square.” Here’s a bit from that piece:
The result is that we combine the weaknesses of each medium while losing their strengths. We are thrust once more into a live, immediate, and active communicative context — the moment regains its heat — but we remain without the non-verbal cues that sustain meaning-making in such contexts. We lose whatever moderating influence the full presence of another human being before us might cast on the passions the moment engendered. This not-altogether-present and not-altogether-absent audience encourages a kind of performative pugilism.
To my knowledge, Ivan Illich never met nor corresponded with Hannah Arendt. However, in my efforts to “break bread with the dead,” as Auden once put it, they’re often seated together at the table. In a similarly convivial spirit, here is an excerpt from a recent book by Alissa Wilkinson:
I learn from Hannah Arendt that a feast is only possible among friends, or people whose hearts are open to becoming friends. Or you could put it another way: any meal can become a feast when shared with friends engaged in the activity of thinking their way through the world and loving it together. A mere meal is a necessity for life, a fact of being human. But it is transformed into something much more important, something vital to the life of the world, when the people who share the table are engaging in the practices of love and of thinking.
Finally, here’s a paragraph from Jacques Ellul’s Propaganda recently highlighted by Jeffrey Bilbro:
In individualist theory the individual has eminent value, man himself is the master of his life; in individualist reality each human being is subject to innumerable forces and influences, and is not at all master of his own life. As long as solidly constituted groups exist, those who are integrated into them are subject to them. But at the same time they are protected by them against such external influences as propaganda. An individual can be influenced by forces such as propaganda only when he is cut off from membership in local groups. Because such groups are organic and have a well-structured material, spiritual, and emotional life, they are not easily penetrated by propaganda.
Cheers! Hope you are all well,
Michael
Welcome to the Convivial Society, a newsletter about technology and culture. In this installment, I explore a somewhat eccentric frame by which to consider how we relate to our technologies, particularly those we hold close to our bodies. You’ll have to bear through a few paragraphs setting up that frame, but I hope you find it to be a useful exercise. And I welcome your comments below. Ordinarily only paid subscribers can leave comments, but this time around I’m leaving the comments open for all readers. Feel free to chime in. I will say, though, that I may not be able to respond directly to each one. Cheers!
Pardon what to some of you will seem like a rather arcane opening to this installment. We’ll be back on more familiar ground soon enough, but I will start us off with a few observations about liturgical practices in religious traditions.
A liturgy, incidentally, is a formal and relatively stable set of rites, rituals, and forms that order the public worship of a religious community. There are, for example, many ways to distinguish among the varieties of Christianity in the United States (or globally, for that matter). One might distinguish by region, by doctrine, by ecclesial structure, by the socioeconomic status of its members, etc. But one might also place the various strands of the tradition along a liturgical spectrum, a spectrum whose poles are sometimes labeled low church and high church.
High church congregations, generally speaking, are characterized by their adherence to formal patterns and rituals. At high church services you would be more likely to observe ritual gestures, such as kneeling, bowing, or crossing oneself as well as ritual speech, such as set prayers, invocations, and responses. High church congregations are also more likely to observe a traditional church calendar and employ traditional vestments and ornamentation. Rituals and formalities of this sort would be mostly absent in low church congregations, which tend to place a higher premium on informality, emotion, and spontaneity of expression. I am painting with a broad brush, but it will serve well enough to set up the point I’m driving at.
But one more thing before we get there. What strikes me about certain low church communities is that they sometimes imagine themselves to have no liturgy at all. In some cases, they might even be overtly hostile to the very idea of a liturgy. This is interesting to me because, in practice, it is not that they have no liturgy at all as they imagine—they simply end up with an unacknowledged liturgy of a different sort. Their services also feature predictable patterns and rhythms, as well as common cadences and formulations, even if they are not formally expressed or delineated and although they differ from the patterns and rhythms of high church congregations. It’s not that you get no church calendar, for example, it’s that you end up trading the old ecclesial calendar of holy days and seasons, such as Advent, Epiphany, and Lent, for a more contemporary calendar of national and sentimental holidays, which is to say those that have been most thoroughly commercialized.
Now that you’ve borne with this eccentric opening, let me get us to what I hope will be the payoff. In the ecclesial context, this matters because the regular patterns and rhythms of worship, whether recognized as a liturgy or not, are at least as formative as (if not more formative than) the overt messages presented in a homily, sermon, or lesson, which is where most people assume the real action is. This is so because, as you may have heard it said, the medium is the message. In this case, I take the relevant media to be the embodied ritual forms, the habitual practices, and the material layers of the service of worship. These liturgical forms, acknowledged or unacknowledged, exert a powerful formative influence over time as they write themselves not only upon the minds of worshippers but upon their bodies and, some might say, hearts.
With all of this in mind, then, I would propose that we take a liturgical perspective on our use of technology. (You can imagine the word “liturgical” in quotation marks, if you like.) The point of taking such a perspective is to perceive the formative power of the practices, habits, and rhythms that emerge from our use of certain technologies, hour by hour, day by day, month after month, year in and year out. The underlying idea here is relatively simple but perhaps for that reason easy to forget. We all have certain aspirations about the kind of person we want to be, the kind of relationships we want to enjoy, how we would like our days to be ordered, the sort of society we want to inhabit. These aspirations can be thwarted in any number of ways, of course, and often by forces outside of our control. But I suspect that on occasion our aspirations might also be thwarted by the unnoticed patterns of thought, perception, and action that arise from our technologically mediated liturgies. I don’t call them liturgies as a gimmick, but rather to cast a different, hopefully revealing light on the mundane and commonplace. The image to bear in mind is that of the person who finds themselves handling their smartphone as others might their rosary beads.
To properly inventory our technologically mediated liturgies we need to become especially attentive to what our bodies want. After all, the power of a liturgy is that it inscribes itself not only on the mind, but also on the body. In that liminal moment before we have thought about what we are doing but find our bodies already in motion, we can begin to discern the shape of our liturgies. In my waking moments, do I find myself reaching for a device before my eyes have had a chance to open? When I sit down to work, what routines do I find myself engaging? In the company of others, to what is my attention directed? When, as a writer, I notice that my hands have moved to open Twitter the very moment I begin to feel my sentence getting stuck, I am under the sway of a technological liturgy. In such moments, I might be tempted to think that my willpower has failed me. But from the liturgical perspective I’m exploring here, the problem is not a failure of willpower. Rather, it’s that I’ve trained my will—or, more to the point, I have allowed my will to be trained—to want something contrary to my expressed desire in the moment. One might even argue that this is, in fact, a testament to the power of the will, which is acting in keeping with its training. By what we unthinkingly do, we undermine what we say we want.
Say, for example, that I desire to be a more patient person. This is a fine and noble desire. I suspect some of you have desired the same for yourselves at various points. But patience is hard to come by. I find myself lacking patience in the crucial moments regardless of how ardently I have desired it. Why might this be the case? I’m sure there’s more than one answer to this question, but we should at least consider the possibility that my failure to cultivate patience stems from the nature of the technological liturgies that structure my experience. Because speed and efficiency are so often the very reason why I turn to technologies of various sorts, I have been conditioning myself to expect something approaching instantaneity in the way the world responds to my demands. If at every possible point I have adopted tools and devices which promise to make things faster and more efficient, I should not be surprised that I have come to be the sort of person who cannot abide delay and frustration.
“The cunning of pedagogic reason,” sociologist Pierre Bourdieu once observed, “lies precisely in the fact that it manages to extort what is essential while seeming to demand the insignificant.” Bourdieu had in mind “the respect for forms and forms of respect which are the most visible and most ‘natural’ manifestation of respect for the established order, or the concessions of politeness, which always contain political concessions.”
What I am suggesting is that our technological liturgies function similarly. They, too, manage to extort what is essential while seeming to demand the insignificant. Our technological micro-practices, the movements of our fingers, the gestures of our hands, the posture of our bodies—these seem insignificant until we realize that we are in fact etching the grooves along which our future actions will tend to habitually flow.
The point of the exercise is not to divest ourselves of such liturgies altogether. Like certain low church congregations that claim they have no liturgies, we would only deepen the power of the unnoticed patterns shaping our thought and actions. And, more to the point, we would be ceding this power not to the liturgies themselves, but to the interests served by those who have crafted and designed those liturgies. My loneliness is not assuaged by my habitual use of social media. My anxiety is not meaningfully relieved by the habit of consumption engendered by the liturgies crafted for me by Amazon. My health is not necessarily improved by compulsive use of health tracking apps. Indeed, in the latter case, the relevant liturgies will tempt me to reduce health and flourishing to what the apps can measure and quantify.
Hannah Arendt once argued that totalitarian regimes succeed, in part, by dislodging or disembedding individuals from their traditional and customary milieus. Individuals who have been so “liberated” are more malleable and subject to new forms of management and control. The consequences of many modern technologies can play out in much the same way. They promise some form of liberation—from the constraints of place, time, community, or even the body itself. Such liberation is often framed as a matter of greater efficiency, convenience, or flexibility. But, to take one example, when someone is freed to work from home, they may find that they can now be expected to work anywhere and at any time. When older patterns and rhythms are overthrown, new patterns and rhythms are imposed, and these are often far less humane because they are not designed to serve human ends.
So I leave you with a set of questions and a comment section open to all readers. I’ve given you a few examples of what I have in mind, but what technological liturgies do you find shaping your days? What are their sources or whose interests do they serve? How much power do you have to resist these liturgies or subvert them if you find that they do, in fact, undermine your own aims and goals? Finally, what liturgies do you seek to implement for yourselves (these may be explicitly religious or not)? After all, as the philosopher Albert Borgmann once put it, we must “meet the rule of technology with a deliberate and regular counterpractice.”
This is the audio version of the last essay posted a couple of days ago, “What Is To Be Done? — Fragments.”
It was a long time between installments of the newsletter, and it has been an even longer stretch since the last audio version. As I note in the audio, my apologies to those of you who primarily rely on the audio version of the essays. I hope to be more consistent on this score moving forward!
Incidentally, in recording this installment I noticed a handful of typos in the original essay. I’ve edited these in the web version, but I'm sorry those of you who read the emailed version had to endure them. Obviously, my self-editing was also a bit rusty!
One last note: I’ve experimented with a paid-subscribers* discussion thread for this essay. It’s turned out rather well, I think. There’ve been some really insightful comments and questions. So, if you are a paid subscriber, you might want to check that out: Discussion Thread.
Cheers,
Michael
* Note to recent sign-ups: I follow a patronage model. All of the writing is public; there is no paywall for the essays. But I do invite those who value this work to support it as they are able with paid subscriptions. Those who do so will, from time to time, have some additional community features come their way.
Welcome to the Convivial Society, a newsletter about technology and culture. This is the audio version of the last installment, which focused on the Blake Lemoine/LaMDA affair. I argued that while LaMDA is not sentient, applications like it will push us further along toward a digitally re-enchanted world. Also: to keep the essay to a reasonable length I resorted to some longish footnotes in the prior text version. That version also contains links to the various articles and essays I cited throughout the piece.
I continue to be somewhat flummoxed about the best way to incorporate the audio and text versions. This is mostly because of how Substack has designed the podcast template. Naturally, it is designed to deliver a podcast rather than text, but I don’t really think of what I do as a podcast. Ordinarily, it is simply an audio version of a textual essay. Interestingly, Substack just launched what, in theory, is an ideal solution: the option to include a simple voiceover of the text, within the text post template. Unfortunately, I don’t think this automatically feeds the audio to Apple Podcasts, Spotify, etc. And, while I don’t think of myself as having a podcast, some of you do access the audio through those services. So, at present, I’ll keep to this somewhat awkward pattern of sending out the text and audio versions separately.
Thanks as always to all of you who read, listen, share, and support the newsletter. Nearly three years into this latest iteration of my online work, I am humbled by and grateful for the audience that has gathered around it.
Cheers,
Michael
Welcome to the Convivial Society, a newsletter exploring the relationship between technology and culture. This is what counts as a relatively short post around here, 1800 words or so, about a certain habit of mind that online spaces seem to foster.
Almost one year ago, this exchange on Twitter caught my attention, enough so that I took a moment to capture it with a screen shot, thinking I’d go on to write about it at some point.
Set aside for a moment whatever your particular thoughts might be on the public debate, if we can call it that, over vaccines, vaccine messaging, vaccine mandates, etc. Instead, consider the form of the claim, specifically the “anti-anti-” framing. I think I first noticed this peculiar way of talking about (or around) an issue circa 2016. In 2020, contemplating the same dynamics, I observed that “social media, perhaps Twitter especially, accelerates both the rate at which we consume information and the rate at which ensuing discussion detaches from the issue at hand, turning into meta-debates about how we respond to the responses of others, etc.” So by the time Nyhan quote-tweeted Rosen last summer, the “anti-anti-” framing, to my mind, had already entered its mannerist phase.
The use of “anti-anti-ad infinitum” is easy to spot, and I’m sure you’ve seen the phrasing deployed on numerous occasions. But the overt use of the “anti-anti-” formulation is just the most obvious manifestation of a more common style of thought, one that I’ve come to refer to as meta-positioning. In the meta-positioning frame of mind, thinking and judgment are displaced by a complex, ever-shifting, and often fraught triangulation based on who holds certain views and how one might be perceived for advocating or failing to advocate for certain views. In one sense, this is not a terribly complex or particularly novel dynamic. Our pursuit of understanding is often an uneasy admixture of the desire to know and the desire to be known as one who knows by those we admire. Unfortunately, social media probably tips the scale in favor of the desire for approval given its rapid-fire feedback mechanisms.
Earlier this month, Kevin Baker commented on this same tendency in a recent thread that opened with the following observation, “A lot of irritating, mostly vapid people and ideas were able to build huge followings in 2010s because the people criticizing them were even worse.”
Baker goes on to call this “the decade of being anti-anti-” and explains that he felt like he spent “the better part of the decade being enrolled into political and discursive projects that I had serious reservations about because I disagreed [with] their critics more and because I found their behavior reprehensible.” In his view, this is a symptom of the unchecked expansion of the culture wars. Baker again: “This isn't censorship. There weren't really censors. It's more a structural consequence of what happens when an issue gets metabolized by the culture war. There are only two sides and you just have to pick the least bad one.”
I’m sympathetic to this view, and would only add that perhaps it is more specifically a symptom of what happens when the digitized culture wars colonize ever greater swaths of our experience. I argued a couple of years ago that just as industrialization gave us industrial warfare, so digitization has given us digitized culture warfare. My argument was pretty straightforward: “Digital media has dramatically enhanced the speed, scale, and power of the tools by which the culture wars are waged and thus transformed their norms, tactics, strategies, psychology, and consequences.” Take a look at the piece if you missed it.
I’d say, too, that the meta-positioning habit of mind might also be explained as a consequence of the digitally re-enchanted discursive field. I won’t bog down this post, which I’m hoping to keep relatively brief, with the details of that argument, but here’s the most relevant bit:
For my purposes, I’m especially interested in the way that philosopher Charles Taylor incorporates disenchantment theory into his account of modern selfhood. The enchanted world, in Taylor’s view, yielded the experience of a porous, and thus vulnerable self. The disenchanted world yielded an experience of a buffered self, which was sealed off, as the term implies, from beneficent and malignant forces beyond its ken. The porous self depended upon the liturgical and ritual health of the social body for protection against such forces. Heresy was not merely an intellectual problem, but a ritual problem that compromised what we might think of, in these times, as herd immunity to magical and spiritual forces by introducing a dangerous contagion into the social body. The answer to this was not simply reasoned debate but expulsion or perhaps a fiery purgation.
Under digitally re-enchanted conditions, policing the bounds of the community appears to overshadow the value of ostensibly objective, civil discourse. In other words, meta-positioning, from this perspective, might just be a matter of making sure you are always playing for the right team, or at least not perceived to be playing for the wrong one. It’s not so much that we have something to say but that we have a social space we want to be seen to occupy.
But as I thought about the meta-positioning habit of mind recently, another related set of considerations came to mind, one that is also connected to the digital media ecosystem. As a point of departure, I’d invite you to consider a recent post from Venkatesh Rao about “crisis mindsets.”
“As the world has gotten more crisis prone at all levels from personal to geopolitical in the last few years,” Rao explained, “the importance of consciously cultivating a more effective crisis mindset has been increasingly sinking in for me.” I commend the whole post to you; it offers a series of wise and humane observations about how we navigate crisis situations. Rao’s essay crossed my feed while I was drafting this post about meta-positioning, and these lines near the end of the essay caught my attention:
“We seem to be entering a historical period where crisis circumstances are more common than normalcy. This means crisis mindsets will increasingly be the default, not flourishing mindsets.”
I think this is right, but it also has a curious relationship to the digital media ecosystem. I can imagine someone arguing that genuine crisis circumstances are no more common now than they have ever been but that digital media feeds heighten our awareness of all that is broken in the world and also inaccurately create a sense of ambient crisis. This argument is not altogether wrong. In the digital media ecosystem, we are enveloped by an unprecedented field of near-constant information emanating from the world far and near, and the dynamics of the attention economy also encourage the generation of ambient crisis.
But two things can both be true at the same time. It is true, I think, that we are living through a period during which crisis circumstances have become more frequent. This is, in part, because the structures, both social and technological, of the modern world do appear increasingly fragile if not wholly decrepit. It is also true that our media ecosystem heightens our awareness of these crisis circumstances (generating, in turn, a further crisis of the psyche) and that it also generates a field of faux crisis circumstances.
Consequently, learning to distinguish between a genuine crisis and a faux crisis will certainly be an essential skill. I would add that it is also critical to distinguish among the array of genuine crisis circumstances that we encounter. Clearly, some will bear directly and unambiguously upon us—a health crisis, say, or a weather emergency. Others will bear on us less directly or acutely, and others still will not bear on us at all. Furthermore, there are those we will be able to address meaningfully through our actions and those we cannot. We should, therefore, learn to apportion our attention and our labors wisely and judiciously.
But let’s come back to the habit of mind with which we began. If we are, in fact, inhabiting a media ecosystem that, through sheer scale and ubiquity, heightens our awareness of all that is wrong with the world and overwhelms pre-digital habits of sense-making and crisis-management, then meta-positioning might be more charitably framed as a survival mechanism. As Rao noted, “I have realized there is no such thing as being individually good or bad in a crisis. Humans either deal with crises in effective groups, or not at all.” Just as digital re-enchantment retrieves the communal instinct, so too, perhaps, does the perma-crisis mindset. Recalling Baker’s analysis, we might even say that the digitized culture war layered over the crisis circumstances intensifies the stigma of breaking ranks.
There’s one last perspective I’d like to offer on the meta-positioning habit of mind. It also seems to suggest something like a lack of grounding or a certain aimlessness. There is a picture that is informing my thinking here. It is the picture of being adrift in the middle of the ocean with no way to get our bearings. Under these circumstances the best we can ever do is navigate away from some imminent danger, but we can never purposefully aim at a destination. So we find ourselves adrift in the vast digital ocean, and we have no idea what we are doing there or what we should be doing. All we know is that we are caught up in wave after wave of the discourse and the best we can do is to make sure we steer clear of obvious perils and keep our seat on whatever raft we find ourselves in, a raft which might be in shambles but, nonetheless, affords us the best chance of staying afloat.
So, maybe the meta-positioning habit of mind is what happens when I have a clearer sense of what I am against than what I am for. Or maybe it is better to say that meta-positioning is what happens when we lack meaningful degrees of agency and are instead offered the simulacra of action in digitally mediated spheres, which generally means saying things about things and about the things other people are saying about the things—the “internet of beefs,” as Rao memorably called it. The best we can do is survive the beefs by making sure we’re properly aligned.
To give it yet another turn, perhaps the digital sea through which we navigate takes the form of a whirlpool sucking us into the present. The whirlpool is a temporal maelstrom, keeping us focused on immediate circumstances, unable to distinguish, without sufficient perspective, between the genuine and the faux crisis.
Under such circumstances, we lack what Alan Jacobs, borrowing the phrase from novelist Thomas Pynchon, has called “temporal bandwidth.” In Gravity’s Rainbow (1973), a character explains the concept: “temporal bandwidth is the width of your present, your now … The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.” Paradoxically, then, the more focused we are on the present, the less of a grip we’re able to get on it. As Jacobs notes, the same character went on to say, “It may get to where you’re having trouble remembering what you were doing five minutes ago.” Indeed, so.
Jacobs recommends extending our temporal bandwidth through a deliberate engagement with the past through our reading as well as a deliberate effort to bring the more distant future into our reckoning. As the philosopher Hans Jonas, whom Jacobs cites, encouraged us to ask, “What force shall represent the future in the present?” The point is that we must make an effort to wrest our gaze away from the temporal maelstrom, and to do so not only in the moment but as a matter of sustained counter-practice. Perhaps then we’ll be better equipped to avoid the meta-positioning habit of mind, which undoubtedly constrains our ability to think clearly, and to find better ways of navigating the choppy, uncertain waters before us.
Welcome to the Convivial Society, a newsletter about technology and culture. I tend to think of my writing as a way of clarifying my thinking, or, alternatively, of thinking out loud. Often I’m just asking myself, What is going on? That’s the case in this post. There was a techno-cultural pattern I wanted to capture in what follows, but I’m not sure that I’ve done it well enough. So, I’ll submit this for your consideration and critique. You can tell me, if you’re so inclined, whether there’s at least the grain of something helpful here or not. Also, you’ll note that my voice suggests a lingering cold that’s done a bit of a number on me over the past few days, but I hope this is offset by the fact that I’ve finally upgraded my mic and, hopefully, improved the sound quality. Cheers!
If asked to define modernity or give its distinctive characteristics, what comes to mind? Maybe the first thing that comes to mind is that such a task is a fool’s errand, and you wouldn’t be wrong. There’s a mountain of books addressing the question, What is or was modernity? And another not insignificant hill of books arguing that, actually, there is or was no such thing, or at least not in the way it has been traditionally understood.
Acknowledging as much, perhaps we’d still offer some suggestions. Maybe we’d mention a set of institutions or practices such as representative government or democratic liberalism, scientific inquiry or the authority of reason, the modern university or the free press. Perhaps a set of values comes to mind: individualism, free speech, rule of law, or religious freedom. Or perhaps some more abstract principles, such as instrumental rationality or belief in progress and the superiority of the present over the past. And surely some reference to secularization, markets, and technology would also be made, not to mention colonization and economic exploitation.
I won’t attempt to adjudicate those claims or rank them. Also, you’ll have to forgive me if I failed to include your preferred account of modernity; they are many. But I will venture my own tentative and partial theory of the case with a view to possibly illuminating elements of the present state of affairs. I’ve been particularly struck of late by the degree to which what I’ll call the myth of the machine became an essential element of the modern or, maybe better, the late modern world. Two clarifications before we proceed. First, I was initially calling this the “myth of neutrality” because I was trying to get at the importance of something like neutral or disinterested or value-free automaticity in various cultural settings. I wasn’t quite happy with neutrality as a way of capturing this pattern, though, and I’ve settled on the myth of the machine because it captures what may be the underlying template that manifests differently across various social spheres. And part of my argument will be that this template takes the automatic, ostensibly value-free operation of a machine as its model. Second, I use the term myth not to suggest something false or duplicitous, but rather to get at the normative and generative power of this template across the social order. That said, let’s move on, starting with some examples of how I see this myth manifesting itself.
Objectivity, Impartiality, Neutrality
The myth of the machine underlies a set of three related and interlocking presumptions which characterized modernity: objectivity, impartiality, and neutrality. More specifically, the presumptions that we could have objectively secured knowledge, impartial political and legal institutions, and technologies that were essentially neutral tools but which were ordinarily beneficent. The last of these appears to stand somewhat apart from the first two in that it refers to material culture rather than to what might be taken as more abstract intellectual or moral stances. In truth, however, they are closely related. The more abstract intellectual and institutional pursuits were always sustained by a material infrastructure, and, more importantly, the machine supplied a master template for the organization of human affairs.
There are any number of caveats to be made here. This post obviously paints with very broad strokes and deals in generalizations which may not prove useful or hold up under closer scrutiny. Also, I would stress that I take these three manifestations of the myth of the machine to be presumptions, by which I mean that this objectivity, impartiality, and neutrality were never genuinely achieved. The historical reality was always more complicated and, at points, tragic. I suppose the question is whether or not these ideals appeared plausible and desirable to a critical mass of the population, so that they could compel assent and supply some measure of societal cohesion. Additionally, it is obviously true that there were competing metaphors and models on offer, as well as critics of the machine, specifically the industrial machine. The emergence of large industrial technologies certainly strained the social capital of the myth. Furthermore, it is true that by the mid-20th century, a new kind of machine—the cybernetic machine, if you like, or system—comes into the picture. Part of my argument will be that digital technologies seemingly break the myth of the machine, though this did not become apparent until fairly recently. But the cybernetic machine was still a machine, and it could continue to serve as an exemplar of the underlying pattern: automatic, value-free, self-regulating operation.
Now, let me suggest a historical sequence that’s worth noting, although this may be an artifact of my own limited knowledge. The sequence, as I see it, begins in the 17th century with the quest for objectively secured knowledge animating modern philosophy as well as the developments we often gloss as the scientific revolution. Hannah Arendt characterized this quest as the search for an Archimedean point from which to understand the world, an abstract universal position rather than a situated human position. Later in the 18th century, we encounter the emergence of political liberalism, which is to say the pursuit of impartial political and legal institutions or, to put it otherwise, “a ‘machine’ for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.” Finally, in the 19th century, the hopes associated with these pursuits became explicitly entangled with the development of technology, which was presumed to be a neutral tool easily directed toward the common good. I’m thinking, for example, of the late Leo Marx’s argument about the evolving relationship between progress and technology through the 19th century. “The simple republican formula for generating progress by directing improved technical means to societal ends,” Marx argued, “was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.”
I wrote “explicitly entangled” above because, as I suggested at the outset, I think the entanglement was always implicit. This entanglement is evident in the power of the machine metaphor. The machine becomes the template for a mechanistic view of nature and the human being with attendant developments in a variety of spheres: deism in religion, for example, and the theory of the invisible hand in economics. In both cases, the master metaphor is that of self-regulating machinery. Furthermore, contrasted to the human, the machine appears dispassionate, rational, consistent, efficient, etc. The human was subject to the passions, base motives, errors of judgement, bias, superstition, provincialism, and the like. The more machine-like a person became, the more likely they were to secure objectivity and impartiality. The presumed neutrality of what we today call technology was a material model of these intellectual and moral aspirations. The trajectory of these assumptions leads to technocracy. The technocratic spirit triumphed through at least the mid-twentieth century, and it has remained a powerful force in western culture. I’m tempted to argue, however, that, in the United States at least, the Obama years may come to be seen as its last confident flourish. In any case, the machine supplied a powerful metaphor that worked its way throughout western culture.
Another way to frame all of this, of course, is by reference to Jacques Ellul’s preoccupation with what he termed la technique, the imperative to optimize all areas of human experience for efficiency, which he saw as the defining characteristic of modern society. Technique manifests itself in a variety of ways, but one key symptom is the displacement of ends by a fixation on means, so much so that means themselves become ends. The smooth and efficient operation of the system becomes more important than reckoning with which substantive goods should be pursued. Why something ought to be done comes to matter less than that it can be done and faster. The focus drifts toward a consideration of methods, procedures, techniques, and tools and away from a discussion of the goals that ought to be pursued.
The Myth of the Machine Breaks Down
Let’s revisit the progression I described earlier to see how the myth of the machine begins to break down, and why this may illuminate the strangeness of our moment. Just as the modern story began with the quest for objectively secured knowledge, this ideal may have been the first to lose its implicit plausibility. From the late 19th century onward, philosophers, physicists, sociologists, anthropologists, psychologists, and historians, among others, have proposed a more complex picture that emphasized the subjective, limited, contingent, situated, and even irrational dimensions of how humans come to know the world. The ideal of objectively secured knowledge became increasingly questionable throughout the 20th century. Some of these trends get folded under the label “postmodernism,” but I found the term unhelpful at best a decade ago, and now find it altogether useless.
We can similarly trace a growing disillusionment with the ostensible impartiality of modern institutions. This takes at least two forms. On the one hand, we might consider the frustrating and demoralizing character of modern bureaucracies, which we can describe as rule-based machines designed to outsource judgement and enhance efficiency. On the other, we can note the heightened awareness of the actual failures of modern institutions to live up to the ideals of impartiality, which has been, in part, a function of the digital information ecosystem.
But while faith in the possibility of objectively secured knowledge and impartial institutions faltered, the myth of the machine persisted in the presumption that technology itself was fundamentally neutral. Until very recently, that is. Or so it seems. And my thesis (always up for disputation) is that the collapse of this last manifestation of the myth brings the whole house down. This is in part because of how much work the presumption of technological neutrality was doing all along to hold American society together. (International readers: as always, read with a view to your own setting. I suspect there are some areas of broad overlap and other instances when my analysis won’t travel well.) Already by the late 19th century, progress had become synonymous with technological advancements, as Leo Marx argued. If social, political, or moral progress stalled, then at least the advance of technology could be counted on.
The story historian David Nye tells in American Technological Sublime is also instructive here. Nye convincingly argued that technology became an essential element of America’s civil religion (that’s my characterization), functionally serving, through its promise and ritual civic commemoration, as a source of cultural vitality and cohesion. It’s hard to imagine this today, but Nye documents how through the 19th and early to mid-20th century, new technologies of significant scale and power were greeted with what can only be described as religious reverence and their appearance heralded in civic ceremonies.
But over the last several years, the plausibility of this last and also archetypal manifestation of the myth of the machine has also waned. Not altogether, to be sure, but among important and influential segments of society and throughout a wide cross-section of it, too. One can perhaps see the shift most clearly in the public discourse about social media and smartphones, but this may be a symptom of a larger disillusionment with technology. And not only with technological artifacts and systems, but also with the technocratic ethos and the public role of expertise.
After the Myth of the Machine
If the myth of the machine, in these three manifestations, was in fact a critical element of the culture of modernity, underpinning its aspirations, then when each in turn becomes increasingly implausible, the modern world order comes apart. I’d say that this is more or less where we’re at. You could usefully analyze any number of cultural fault lines through this lens. The center, which may not in fact hold, is where you find those who still operate as if the presumptions of objectivity, impartiality, and neutrality still compelled broad cultural assent, and they are now assailed from both the left and the right by those who have grown suspicious or altogether scornful of such presumptions. Indeed, the left/right distinction may be less helpful than the distinction between those who uphold some combination of the values of objectivity, impartiality, and neutrality and those who no longer find them compelling or desirable.
At present, contemporary technologies are playing a dual role in these developments. On the one hand, I would argue that the way the technologies classified, accurately or not, as A.I. are framed suggests an effort to save the appearances of modernity, which is to say to aim at the same ideals of objectivity, impartiality, and neutrality while acknowledging that human institutions failed to consistently achieve them. Strikingly, they also retrieve the most pernicious fixations of modern science, such as phrenology. The implicit idea is that rather than make human judgement, for example, more machine-like, we simply hand judgement over to the machines altogether. Maybe the algorithm can be thoroughly objective even though the human being cannot. Or we might characterize it as a different approach to the problem of situated knowledge. It seeks to solve the problem by scale rather than detachment, abstraction, or perspective. The accumulation of massive amounts of data about the world can yield new insights and correlations which, while not subject to human understanding, will nonetheless prove useful. Notice how in these cases, the neutrality of the technology involved is taken for granted. When it becomes clear, however, that the relevant technologies are not and cannot, in fact, be neutral in this way, then this last-ditch effort to double down on the old modern ideals stalls out.
It is also the case that digital media has played a key role in weakening the plausibility of claims to objectively secured knowledge and impartial institutions. The deluge of information through which we all slog every day is not hospitable to the ideals of objectivity and impartiality, which to some degree were artifacts of print and mass media ecosystems. The present condition of information super-abundance and troves of easily searchable memory databases makes it trivially easy either to expose actual instances of bias, self-interest, inconsistency, and outright hypocrisy or to generate (unwittingly for yourself or intentionally for others) the appearance of such. In the age of the Database, no one controls the Narrative. And while narratives proliferate and consolidate along a predictable array of partisan and factional lines, the notion that the competing claims could be adjudicated objectively or impartially is defeated by exhaustion.
The dark side of this thesis involves the realization that the ideals of objectivity, impartiality, and neutrality, animated by the myth of the machine, were strategies to defuse violent and perpetual conflict over competing visions of the true, the good, and the just during the early modern period in Europe. I've been influenced in this line of thought by the late Stephen Toulmin's Cosmopolis: The Hidden Agenda of Modernity. Toulmin argued that modernity experienced a false start in the fifteenth and sixteenth centuries, one characterized by a more playful, modest, and humane spirit, which was overwhelmed by the more domineering spirit of the seventeenth century and the emergence of the modern order in the work of Descartes, Newton, and company, a spirit that was, in fairness, animated by a desperate desire to quell the violence that engulfed post-Reformation Europe. As I summarized Toulmin's argument in 2019, the quest for certainty "took objectivity, abstraction, and neutrality as methodological pre-conditions for both the progress of science and politics, that is, for the re-emergence of public knowledge. The right method, the proper degree of alienation from the particulars of our situation, translations of observable phenomena into the realm of mathematical abstraction—these would lead us away from the uncertainty and often violent contentiousness that characterized the dissolution of the premodern world picture. The idea was to reconstitute the conditions for the emergence of public truth and, hence, public order."
In that same essay three years ago, I wrote, “The general progression has been to increasingly turn to technologies in order to better achieve the conditions under which we came to believe public knowledge could exist [i.e., objectivity, disinterestedness, impartiality, etc]. Our crisis stems from the growing realization that our technologies themselves are not neutral or objective arbiters of public knowledge and, what’s more, that they may now actually be used to undermine the possibility of public knowledge.” Is it fair to say that these lines have aged well?
Of course, the reason I characterize this as the dark side of the argument is that it raises the following question: What happens when the systems and strategies deployed to channel often violent clashes within a population deeply, possibly intractably divided about substantive moral goods and now even about what Arendt characterized as the publicly accessible facts upon which competing opinions could be grounded—what happens when these systems and strategies fail?
It is possible to argue that they failed long ago, but the failure was veiled by an unevenly distributed wave of material abundance. Citizens became consumers and, by and large, made peace with the exchange. After all, if the machinery of government could run of its own accord, what was there left to do but enjoy the fruits of prosperity? But what if abundance was an unsustainable solution, either because it taxed the earth at too high a rate or because it was purchased at the cost of other values such as rootedness, meaningful work and involvement in civic life, abiding friendships, personal autonomy, and participation in rich communities of mutual care and support? Perhaps in the framing of that question, I've tipped my hand about what might be the path forward.
At the heart of technological modernity there was the desire—sometimes veiled, often explicit—to overcome the human condition. The myth of the machine concealed an anti-human logic: if the problem is the failure of the human to conform to the pattern of the machine, then bend the human to the shape of the machine or eliminate the human altogether. The slogan of one of the high-modernist world's fairs of the 1930s comes to mind: "Science Finds, Industry Applies, Man Conforms." What is now being discovered in some quarters, however, is that the human is never quite eliminated, only diminished.
Welcome to the Convivial Society, a newsletter about technology, culture, and the moral life. In this installment you’ll find the audio version of the previous essay, “The Face Stares Back.” And along with the audio version you’ll also find an assortment of links and resources. Some of you will remember that such links used to be a regular feature of the newsletter. I’ve prioritized the essays, in part because of the information I have on click rates, but I know the links and resources are useful to more than a few of you. Moving forward, I think it makes sense to put out an occasional installment that contains just links and resources (with varying amounts of commentary from me). As always, thanks for reading and/or listening.
Links and Resources
* Let’s start with a classic paper from 1965 by philosopher Hubert Dreyfus, “Alchemy and Artificial Intelligence.” The paper, prepared for the RAND Corporation, opens with a long epigraph from the 17th-century polymath Blaise Pascal on the difference between the mathematical mind and the perceptive mind.
* On “The Tyranny of Time”: “The more we synchronize ourselves with the time in clocks, the more we fall out of sync with our own bodies and the world around us.” More: “The Western separation of clock time from the rhythms of nature helped imperialists establish superiority over other cultures.”
* Relatedly, a well-documented case against Daylight Saving Time: “Farmers, Physiologists, and Daylight Saving Time”: “Fundamentally, their perspective is that we tend to do well when our body clock and social clock—the official time in our time zone—are in synch. That is, when noon on the social clock coincides with solar noon, the moment when the Sun reaches its highest point in the sky where we are. If the two clocks diverge, trouble ensues. Startling evidence for this has come from recent findings in geographical epidemiology—specifically, from mapping health outcomes within time zones.”
* Jasmine McNealy on “Framing and Language of Ethics: Technology, Persuasion, and Cultural Context.”
* Interesting forthcoming book by Kevin Driscoll: The Modem World: A Prehistory of Social Media.
* Great piece on Jacques Ellul by Samuel Matlack at The New Atlantis, “How Tech Despair Can Set You Free”: “But Ellul rejects it. He refuses to offer a prescription for social reform. He meticulously and often tediously presents a problem — but not a solution of the kind we expect. This is because he believed that the usual approach offers a false picture of human agency. It exaggerates our ability to plan and execute change to our fundamental social structures. It is utopian. To arrive at an honest view of human freedom, responsibility, and action, he believed, we must confront the fact that we are constrained in more ways than we like to think. Technique, says Ellul, is society’s tightest constraint on us, and we must feel the totality of its grip in order to find the freedom to act.”
* Evan Selinger on “The Gospel of the Metaverse.”
* Ryan Calo on “Modeling Through”: “The prospect that economic, physical, and even social forces could be modeled by machines confronts policymakers with a paradox. Society may expect policymakers to avail themselves of techniques already usefully deployed in other sectors, especially where statutes or executive orders require the agency to anticipate the impact of new rules on particular values. At the same time, “modeling through” holds novel perils that policymakers may be ill equipped to address. Concerns include privacy, brittleness, and automation bias, all of which law and technology scholars are keenly aware. They also include the extension and deepening of the quantifying turn in governance, a process that obscures normative judgments and recognizes only that which the machines can see. The water may be warm, but there are sharks in it.”
* “Why Christopher Alexander Still Matters”: “The places we love, the places that are most successful and most alive, have a wholeness about them that is lacking in too many contemporary environments, Alexander observed. This problem stems, he thought, from a deep misconception of what design really is, and what planning is. It is not “creating from nothing”—or from our own mental abstractions—but rather, transforming existing wholes into new ones, and using our mental processes and our abstractions to guide this natural life-supporting process.”
* An interview with philosopher Shannon Vallor: “Re-envisioning Ethics in the Information Age”: “Instead of using the machines to liberate and enlarge our own lives, we are increasingly being asked to twist, to transform, and to constrain ourselves in order to strengthen the reach and power of the machines that we increasingly use to deliver our public services, to make the large-scale decisions that are needed in the financial realm, in health care, or in transportation. We are building a society where the control surfaces are increasingly automated systems and then we are asking humans to restrict their thinking patterns and to reshape their thinking patterns in ways that are amenable to this system. So what I wanted to do was to really reclaim some of the literature that described that process in the 20th century—from folks like Jacques Ellul, for example, or Herbert Marcuse—and then really talk about how this is happening to us today in the era of artificial intelligence and what we can do about it.”
* From Lance Strate in 2008: “Studying Media AS Media: McLuhan and the Media Ecology Approach.”
* Japan’s museum of rocks that look like faces.
* I recently had the pleasure of speaking with Katherine Dee for her podcast, which you can listen to here.
* I’ll leave you with an arresting line from Simone Weil’s notebooks: “You could not have wished to be born at a better time than this, when everything is lost.”
Welcome to the Convivial Society, a newsletter about technology and culture. The pace of the newsletter has been slow of late, which I regret, but I trust it will pick up just a touch in the coming weeks (also please forgive me if you've been in touch over the past month or so and haven't heard back). For starters, I'll follow up this installment shortly with another that will include some links and resources. In this installment, I'm thinking about attention again, but from a slightly different perspective—how do we become the objects of attention for others? If you're a recent subscriber, I'll note that attention is a recurring theme in my writing, although it may be a while before I revisit it again (but don't hold me to that). As per usual, this is an exercise in thinking out loud, which seeks to clarify some aspect of our experience with technology and explore its meaning. I hope you find it useful. Finally, I'm playing with formatting again, driven chiefly by the fact that this is a hybrid text meant to be read and/or listened to in the audio version. So you'll note my use of bracketed in-text excursuses in this installment. If it degrades your reading or listening experience, feel free to let me know.
Objects of Attention
A recent email exchange with Dr. Andreas Mayert got me thinking about attention from yet another angle. Ordinarily, I think about attention as something I have, or, as I suggested in a recent installment, something I do. I give my attention to things out there in the world, or, alternatively, I attend to the world out there. Regardless of how we formulate it, what I am imagining in these cases is how attention flows outward from me, the subject, to some object in the world. And there’s much to consider from that perspective: how we direct our attention, for example, or how objects in the world beckon and reward our attention. But, as Dr. Mayert suggested to me, it’s also worth considering how attention flows in the opposite direction. That is to say, considering not the attention I give, but the attention that bears down on me.
[First excursus: The case of attending to myself is an interesting one given this way of framing attention as both incoming and outgoing. If I attend to my own body—by minding my breathing, for example—I’d say that my attention still feels to me as if it is going outward before then focusing inward. It’s the mind’s gaze upon the body. But it’s a bit different if I’m trying to attend to my own thoughts. In this case I find it difficult to assign directionality to my attention. Moreover, it seems to me that the particular sense I am using to attend to the world matters in this regard, too. For example, closing my eyes seems to change the sense that my attention is flowing out from my body. As I listen while my eyes are shut, I have the sense that sounds are spatially located, to my left rather than to my right, but also that the sound is coming to me. I’m reminded, too, of the ancient understanding of vision, which conceived of sight as a ray emanating from the eye to make contact with the world. The significance of these subtle shifts in how we perceive the world and how media relate to perception should not be underestimated.]
There are several ways of thinking about where this attention that might fix on us as its object originates. We can consider, for example, how we become an object of attention for large, impersonal entities like the state or a corporation. Or we can contemplate how we become the object of attention for another person—legibility in the former case and the gaze in the latter. There are any number of other possibilities and variations within them, but given my exchange with Mayert I found myself considering what happens when a machine pays attention to us. By “machine” in this case, I mean any of the various assemblages of devices, sensors, and programs through which data is gathered about us and interpretations are extrapolated from that data, interpretations which purport to reveal something about us that we ourselves may not otherwise recognize or wish to disclose.
I am, to be honest, hesitant to say that the machine (or program or app, etc.) pays attention to us or, much less, attends to us. I suppose it is better to say that the machine mediates the attention of others. But there is something about the nature of that mediation that transforms the experience of being the object of another's attention to such a degree that it may be inadequate to speak merely of the attention of another. By comparison, if I discover that someone is using a pair of binoculars to watch me at a distance, I would still say, with some unease to be sure, that it is the person and not the binoculars that is attending to me, although of course their gaze is mediated by the binoculars. If I'm being watched on a recording of CCTV footage, even though someone is attending to me asynchronously through the mediation of the camera, I'd still say that it is the person who is paying attention to me, although I might hesitate to say that it is me they are paying attention to.
However, I’m less confident of putting it quite that way when, say, data about me is being captured, interpreted, and filtered to another who attends to me through that data and its interpretation. It does seem as if the primary work of attention, so to speak, is done not by the person but the machine, and this qualitatively changes the experience of being noted and attended to. Perhaps one way to say this is that when we are attended to by (or through) a machine we too readily become merely an object of analysis stripped of depth and agency, whereas when we are attended to more directly, although not necessarily in unmediated fashion, it may be harder—but not impossible, of course—to be similarly objectified.
I am reminded, for example, of the unnamed protagonist of Graham Greene’s The Power and the Glory, a priest known better for his insobriety than his piety, who, while being jailed alongside one of his tormentors, thinks to himself, “When you visualized a man or woman carefully, you could always begin to feel pity … that was a quality God’s image carried with it … when you saw the lines at the corners of the eyes, the shape of the mouth, how the hair grew, it was impossible to hate.” There’s much that may discourage us from attending to another in this way, but the mediation of the machine seems to remove the possibility altogether.
I am also reminded of Clive Thompson's intriguing analysis of CAPTCHA images, that grid of images that sometimes appears when you are logging in to a site and from which you are to select the squares that contain things like buses or traffic lights. Thompson set out to understand why he found CAPTCHA images "overwhelmingly depressing." After considering several factors, here's what he concluded:
“They weren’t taken by humans, and they weren’t taken for humans. They are by AI, for AI. They thus lack any sense of human composition or human audience. They are creations of utterly bloodless industrial logic. Google’s CAPTCHA images demand you to look at the world the way an AI does.”
The uncanny and possibly depressing character of the CAPTCHA images is, in Thompson's compelling argument, a function of being forced to see the world from a non-human perspective. I'd suggest that some analogous unease emerges when we know ourselves to be perceived or attended to by a non-human agent, something that now happens routinely. In one way or another, we are the objects of attention for traffic light cameras, smart speakers, sentiment analysis tools, biometric sensors, doorbell cameras, proctoring software, on-the-job motion detectors, and algorithms used ostensibly to discern our creditworthiness, suitability for a job, or proclivity to commit a crime. The list could go on and on. We navigate a field in which we are just as likely to be scanned, analyzed, and interpreted by a machine as we are to enjoy the undisturbed attention of another human being.
Digital Impression Management
To explore these matters a bit more concretely, I’ll finally come to the subject of my exchange with Dr. Mayert, which was a study he conducted examining how some people experience the attention of a machine bearing down on them.
Mayert’s research examined how employees reasoned about systems, increasingly used in the hiring process, which promise to “create complex personality profiles from superficially innocuous individual social media profiles.” You’ll find an interview with Dr. Mayert and a link to the study, both in German, here, and you can use your online translation tool of choice if, like me, you’re not up on your German. With permission, I’ll share portions of what Mayert discussed in our email exchange.
The findings were interesting. On the one hand, Mayert found that “employees have no problem at all with companies taking a superficial look at their social media profiles to observe what is in any case only a mask in Goffman's sense.”
Erving Goffman, you may recall, was a mid-twentieth-century sociologist who, in The Presentation of Self in Everyday Life, developed a dramaturgical model of human identity and social interactions. The basic idea is that we can understand social interactions by analogy to stage performance. When we're "on stage," we're involved in the work of "impression management." Which is to say that we carefully manage how we are perceived by controlling the impressions we're giving off. (Incidentally, media theorist Joshua Meyrowitz usefully put Goffman's work in conversation with McLuhan's in No Sense of Place: The Impact of Electronic Media on Social Behavior, an underrated work of media theory published in 1985.)
So the idea here is that social media platforms are Goffmanesque stages, and, after we came to terms with context collapse, we figured out how to manage the impressions given off by our profiles. Indeed, from this perspective we might say that social media just made explicit (and quantifiable) dimensions of human behavior which, hitherto, had been mostly implicit. You’d be forgiven for thinking that this picture is just a bit too tidy. In practice, impressions, like most human dynamics, cannot be perfectly managed. We always “give off” more than we imagine, for example, and others may read our performances more astutely than we suppose.
But this was not the whole story. Mayert reported that employees had a much stronger negative reaction when the systems claimed to "infer undisclosed personal information" from their carefully curated feeds. It is one thing, from their perspective, to have data used anonymously for the purpose of placing ads, for example—that is, when people are "ultimately anonymous objects of the data economy"—and quite another when the systems promise to disclose something about them as a particular person, something they did not intend to reveal. Whether the systems can deliver on this promise to know us better than we would want to be known is another question, and I think we should remain skeptical of such claims. But the claim that they could do just that elicited a higher degree of discomfort among participants in the study.
The underlying logic of these attitudes uncovered by Mayert’s research is also of interest. The short version, as I understand it, goes something like this. Prospective employees have come to terms with the idea that employers will scan their profiles as part of the hiring process, so they have conducted themselves accordingly. But they are discomfited by the possibility that their digital “impression management” can be seen through to some truer level of the self. As Mayert put it, “respondents believed that they could nearly perfectly control how they were perceived by others through the design of their profiles, and this was of great importance to them.”
[Second excursus: I'm curious about whether this faith in digital impression management is a feature of the transition from televisual culture to digital culture. Impression management seems tightly correlated with the age of the image, specifically the televisual image. My theory is that social media initially absorbed those older impulses to manage the image (the literal image and our "self-image"). We bring the assumptions and practices of the older media regime with us to new media, and this includes assumptions about the self and its relations. So those of us who grew up without social media brought our non-digital practices and assumptions to the use of social media. But practices and assumptions native to the new medium will eventually win out, and I think we've finally been getting a glimpse of this over the last few years. One of these assumptions is that the digital self is less susceptible to management; another may be that we now manage not the image but the algorithm, which mediates our audience's experience of our performance. Or to put it another way, our impression management is now in the service of both the audience and the algorithm.]
Mayert explained, however, that there was yet another intriguing dimension to his findings:
“when they were asked about how they form their own image of others through information that can be found about them on the Net, it emerged that they superficially link together unexciting information that can be found about other persons and they themselves do roughly what is also attempted in applicant assessment through data analysis: they construct personality profiles from this information that, in terms of content, were strongly influenced by the attitudes, preferences or prejudices of the respondents.”
So, these participants seemed to think they could, to some degree, see through or beyond the careful “impression management” of others on social media, but it did not occur to them that others might do likewise with their own presentations of the self.
Mayert again: “Intended external representation and external representation perceived by others were equivalent for the respondents as long as it was about themselves.”
“This result,” he adds, “explains their aversion to more in-depth [analysis] of their profiles in social media. From the point of view of the respondents, this is tantamount to a loss of control over their external perception, which endangers exactly what is particularly important to them.”
The note of control and agency seems especially prominent and raises the question, “Who has the right to attend to us in this way?”
I think we can approach this question by noting that our techno-social milieu is increasingly optimized for surveillance, which is to say for placing each of us under the persistent gaze of machines, people, or both. Evan Selinger, among others, has long been warning us about surveillance creep, and it certainly seems to be the case that we can now be surveilled in countless ways by state actors, corporations, and fellow citizens. And, in large measure, ordinary people have been complicit in adopting and deploying seemingly innocuous nodes in the ever-expanding network of surveillance technologies. Often, these technologies promise to enhance our own ability to pay attention, but it is now the case that almost every technology that extends our senses in order to enhance our capacity to pay attention to the world is also an instrument through which the attention of others can flow back toward us, bidden or unbidden.
Data-driven Platonism
Hiring algorithms are but one example of a larger set of technologies which promise to disclose some deeper truth about the self or the world that would be otherwise unnoticed. Similar tools are deployed in the realms of finance, criminal justice, and health care among others. The underlying assumption, occasionally warranted, is that analyzing copious amounts of data can disclose significant patterns or correlations, which would have been missed without these tools. As I noted a few years back, we can think about this assumption by analogy to Plato’s allegory of the cave. We are, in this case, led out of the cave by data analysis, which reveals truths that are inaccessible not only to the senses but even to unaided human reason. I remain fascinated by the idea that we’ve created tools designed to seek out realities that exist only as putative objects of quantification and prediction. They exist, that is, only in the sense that someone designed a technology to discover them and the search amounts to a pursuit of immanentized Platonic forms.
With regard to the self, I wonder whether the participants in Mayert’s study had any clear notion of what might be discovered about them. In other words, in their online impression management, were they consciously suppressing or obscuring particular aspects of their personality or activities, which they now feared the machine would disclose, or was their unease itself a product of the purported capacities of the technology? Were they uneasy because they came to suspect that the machine would disclose something about them which they themselves did not know? Or, alternatively, was their unease grounded in the reasonable assumption that they would have no recourse should the technology disqualify them based on opaque automated judgments?
I was reminded of ImageNet Roulette, created by Kate Crawford and Trevor Paglen in 2019. The app was trained on the ImageNet database's labels for classifying persons and was intended to demonstrate the limits of facial recognition software. ImageNet Roulette invited you to submit a picture to see how you would be classified by the app. Many users found that they were classified with an array of mistaken and even offensive labels. As Crawford noted in a subsequent report,
“Datasets aren’t simply raw materials to feed algorithms, but are political interventions. As such, much of the discussion around 'bias' in AI systems misses the mark: there is no 'neutral,' 'natural,' or 'apolitical' vantage point that training data can be built upon. There is no easy technical 'fix' by shifting demographics, deleting offensive terms, or seeking equal representation by skin tone. The whole endeavor of collecting images, categorizing them, and labeling them is itself a form of politics, filled with questions about who gets to decide what images mean and what kinds of social and political work those representations perform.”
At the time, I was intrigued by another line of thought. I wondered what those who were playing with the app and posting their results might have been feeling about the prospect of getting labeled by the machine. My reflections, which I wrote about briefly in the newsletter, were influenced by the 20th-century diagnostician of the self, Walker Percy. Basically, I wondered if users harbored any implicit hopes or fears in getting labeled by the machine. It is, of course, possible and perhaps likely that users brought no such expectations to the experience, but maybe some found themselves unexpectedly curious about how they would be categorized. Would we hope that the tool validates our sense of identity, suggesting that we craved some validation of our own self-appraisals? Would we hope that the result would be obviously mistaken, suggesting that the self is not so simple that a machine could discern its essence? Or would we hope that it revealed something about us that had escaped our notice, suggesting that we've remained, as Augustine once put it, a question to ourselves?
Efforts to size up the human through the gaze of the machine trade on the currency of a vital truth: we look for ourselves in the gaze of the other. When someone gives us their attention, or, better, when someone attends to us, they bestow upon us a gift. As Simone Weil put it, "Attention is the rarest and purest form of generosity."
When we consider how attention flows out from us, we are considering, among other things, what constitutes our bond to the world. When we consider how attention bears down on us, we are considering, among other things, what constitutes the self.
One of the assumptions I bring to my writing about attention is that we desire it and we're right to do so. To receive no one's attention would be a kind of death. There are, of course, disordered ways of seeking attention, but we need the attention of the other even if only to know who we are. This is why I recently wrote that "the problem of distraction can just as well be framed as a problem of loneliness." Digital media environments hijack our desire to be known in order to fuel the attention economy. And it's in this light that I think it may be helpful to reconsider much of what we've recently glossed as surveillance capitalism through the frame of attention: not just the attention we give but also that which we receive.
From this perspective, one striking feature of our techno-social milieu is that it has become increasingly difficult both to receive the attention of our fellow human beings and to refuse the attention of the machines. The exchange of one for the other is, in certain cases, especially disheartening, as, for example, when surveillance becomes, in Alan Jacobs’s memorable phrase, the normative form of care. And, as I suggested earlier, the attention frame also has the advantage of capturing the uncanny dimensions of being subject to the nonhuman gaze and rendered a quantifiable object of analysis, not so much seen as seen through, appraised without being known.
In a rather well-known poem from 1967, Richard Brautigan wrote hopefully of a cybernetic paradise in which we, and the other non-human animals, would be "watched over by machines of loving grace." He got the watching over part right, but there are no machines of loving grace. To be fair, it is also a quality too few humans tend to exhibit in our attention to others.
Welcome to the Convivial Society, a newsletter that is ostensibly about technology and culture but more like my effort to make some sense of the world taking shape around us. For many of you, this will be the first installment to hit your inbox—welcome aboard. And my thanks to those of you who share the newsletter with others and speak well of it. If you are new to the Convivial Society, please feel free to read this orientation to new readers that I posted a few months ago.
0. Attention discourse is my term for the proliferation of articles, essays, books, and op-eds about attention and distraction in the age of digital media. I don't mean the label pejoratively. I've made my own contributions to the genre, in this newsletter and elsewhere, and as recently as May of last year. In fact, I tend to think that attention discourse circles around immensely important issues we should all think about more deliberately. So here, then, is yet another entry for the attention files, presented as a numbered list of loosely related observations for your consideration, a form in which I like to occasionally indulge and which I hope you find suggestive and generative.
1. I take Nick Carr's 2008 piece in The Atlantic, "Is Google Making Us Stupid?", to be the ur-text of this most recent wave of attention discourse. If that's fair, then attention and distraction have been the subject of intermittent public debate for nearly fifteen years, but this sustained focus appears to have yielded little by way of improving our situation. I say "the most recent wave" because attention discourse has a history that pre-dates the digital age. The first wave of attention discourse can be dated back to the mid-nineteenth century, as historian Jonathan Crary has argued at length, especially in his 1999 book, Suspensions of Perception: Attention, Spectacle, and Modern Culture. "For it is in the late nineteenth century," Crary observed,
"within the human sciences and particularly the nascent field of scientific psychology, that the problem of attention becomes a fundamental issue. It was a problem whose centrality was directly related to the emergence of a social, urban, psychic, and industrial field increasingly saturated with sensory input. Inattention, especially within the context of new forms of large-scale industrialized production, began to be treated as a danger and a serious problem, even though it was often the very modernized arrangements of labor that produced inattention. It is possible to see one crucial aspect of modernity as an ongoing crisis of attentiveness, in which the changing configurations of capitalism continually push attention and distraction to new limits and thresholds, with an endless sequence of new products, sources of stimulation, and streams of information, and then respond with new methods of managing and regulating perception […] But at the same time, attention, as a historical problem, is not reducible to the strategies of social discipline. As I shall argue, the articulation of a subject in terms of attentive capacities simultaneously disclosed a subject incapable of conforming to such disciplinary imperatives."
Many of the lineaments of contemporary attention discourse are already evident in Crary’s description of its 19th century antecedents.
2. One reaction to learning that modern-day attention discourse has longstanding antecedents would be to dismiss contemporary criticisms of the digital attention economy. The logic of such dismissals is not unlike that of the tale of Chicken Little. Someone is always proclaiming that the sky is falling, but the sky never falls. This is, in fact, a recurring trope in the wider public debate about technology. The seeming absurdity of some 19th-century pundit decrying the allegedly demoralizing consequences of the novel is somehow enough to ward off modern-day critiques of emerging technologies. Interestingly, however, it's often the case that the antecedents don't take us back indefinitely into the human past. Rather, they often have a curiously consistent point of origin: somewhere in the mid- to late-nineteenth century. It's almost as if some radical techno-economic re-ordering of society had occurred, generating for the first time a techno-social environment which was, in some respects at least, inhospitable to the embodied human person. That the consequences linger and remain largely unresolved, or that new and intensified iterations of the older disruptions yield similar expressions of distress, should not be surprising.
3. Simone Weil, writing in Oppression and Liberty (published posthumously in 1955):
“Never has the individual been so completely delivered up to a blind collectivity, and never have men been less capable, not only of subordinating their actions to their thoughts, but even of thinking. Such terms as oppressors and oppressed, the idea of classes—all that sort of thing is near to losing all meaning, so obvious are the impotence and distress of all men in the face of the social machine, which has become a machine for breaking hearts and crushing spirits, a machine for manufacturing irresponsibility, stupidity, corruption, slackness and, above all, dizziness. The reason for this painful state of affairs is perfectly clear. We are living in a world in which nothing is made to man’s measure; there exists a monstrous discrepancy between man’s body, man’s mind and the things which at present time constitute the elements of human existence; everything is in disequilibrium […] This disequilibrium is essentially a matter of quantity. Quantity is changed into quality, as Hegel said, and in particular a mere difference in quantity is sufficient to change what is human in to what is inhuman. From the abstract point of view quantities are immaterial, since you can arbitrarily change the unit of measurement; but from the concrete point of view certain units of measurement are given and have hitherto remained invariable, such as the human body, human life, the year, the day, the average quickness of human thought. Present-day life is not organized on the scale of all these things; it has been transported into an altogether different order of magnitude, as though men were trying to raise it to the level of the forces outside of nature while neglecting to take his own nature into account.”
4. Nicholas Carr began his 2008 article with a bit of self-disclosure, which I suspect now sounds pretty familiar to most of us if it didn’t already then. Here’s what he reported:
Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
At the time, it certainly resonated with me, and what may be most worth noting about this today is that Carr, and those who are roughly his contemporaries in age, were in the position of living before and after the rise of the commercial internet and thus had a point of experiential contrast to emerging digital culture.
5. I thought of this paragraph recently while I was reading the transcript of Sean Illing's interview with Johann Hari about his new book, Stolen Focus: Why You Can't Pay Attention—And How to Think Deeply Again. Not long after reading the text of Illing's interview, I also read the transcript of his conversation with Ezra Klein, which you can read or listen to here. I'm taking these two conversations as an occasion to reflect again on attention, for its own sake but also as an indicator of larger patterns in our techno-social milieu. I'll dip into both conversations to frame my own discussion, and, as you'll see, my interest isn't to criticize Hari's argument but rather to pose some questions and take it as a point of departure.
Hari, it turns out, is, like me, in his early 40s. So we, too, lived a substantial chunk of time in the pre-commercial internet era. And, like Carr, Hari opens his conversation with Illing by reporting on his own experience:
I noticed that with each year that passed, it felt like my own attention was getting worse. It felt like things that require a deep focus, like reading a book, or watching long films, were getting more and more like running up and down an escalator. I could do them, but they were getting harder and harder. And I felt like I could see this happening to most of the people I knew.
But, as the title of his book suggests, Hari believes that this was not just something that has happened but something that was done to him. “We need to understand that our attention did not collapse,” he tells Illing, “our attention has been stolen from us by these very big forces. And that requires us to think very differently about our attention problems.”
Like many others before him, Hari argues that these “big forces” are the tech companies, who have designed their technologies with a view to capturing as much of our attention as possible. In his view, we live in a technological environment that is inhospitable to the cultivation of attentiveness. And, to be sure, I think this is basically right, as far as it goes. This is not a wholly novel development, as we noted at the outset, even if its scope and scale have expanded and intensified.
6. There's another dimension to this that's worth considering because it is often obscured by the way we tend to imagine attention and distraction as solitary or asocial phenomena. What we meet at the other end of our digital devices is not just a bit of information or an entertaining video clip or a popular game. Our devices do not only mediate information and entertainment; they mediate relationships.
As Alan Jacobs put it, writing in "Habits of Mind in an Age of Distraction":
“[W]e are not addicted to any of our machines. Those are just contraptions made up of silicon chips, plastic, metal, glass. None of those, even when combined into complex and sometimes beautiful devices, are things that human beings can become addicted to […] there is a relationship between distraction and addiction, but we are not addicted to devices […] we are addicted to one another, to the affirmation of our value—our very being—that comes from other human beings. We are addicted to being validated by our peers.”
This is part of what lends the whole business a tragic aspect. The problem of distraction can just as well be framed as a problem of loneliness. Sometimes we turn thoughtlessly to our devices for mere distraction, something to help us pass the time or break up the monotony of the day, although the heightened frequency with which we do so may certainly suggest compulsive behavior. Perhaps it is the case in such moments that we do not want to be alone with our thoughts. But perhaps just as often we simply don't want to be alone.
We desire to be seen and acknowledged. To exercise meaningful degrees of agency and judgment. In short, to belong and to matter. Social media trades on these desires, exploits them, deforms them, and never truly satisfies them, which explains a good deal of the madness.
7. In her own thoughtful and moving reflections on the ethical dimensions of attention, Jasmine Wang cited the following observations from poet David Whyte:
“[T]he ultimate touchstone of friendship is not improvement, neither of the other nor of the self. The ultimate touchstone is witness, the privilege of having been seen by someone, and the equal privilege of being granted the sight of the essence of another, to have walked with them, and to have believed in them, and sometimes, just to have accompanied them, for however brief a span, on a journey impossible to accomplish alone.”
8. Perhaps the first modern theorist of distraction, the 17th-century polymath Blaise Pascal had a few things to say about diversions in his posthumously published Pensées:
“What people want is not the easy peaceful life that allows us to think of our unhappy condition, nor the dangers of war, nor the burdens of office, but the agitation that takes our mind off it and diverts us.”
“Nothing could be more wretched than to be intolerably depressed as soon as one is reduced to introspection with no means of diversion.”
“The only thing that consoles us for our miseries is diversion. And yet it is the greatest of our miseries. For it is that above all which prevents us thinking about ourselves and leads us to destruction. But for that we should be bored, and boredom would drive us to seek some more solid means of escape, but diversion passes our time and brings us imperceptibly to our death.”
9. Pascal reminds us of something we ought not to forget, which is that there may be technology-independent reasons for why we crave distractions. Weil had a characteristically religious and even mystical take on this. “There is something in our soul,” she wrote, “that loathes true attention much more violently than flesh loathes fatigue. That something is much closer to evil than flesh is. That is why, every time we truly give our attention, we destroy some evil in ourselves.”
We should not, in other words, imagine that the ability to focus intently or to give one's sustained attention to some matter was the ordinary state of affairs before the arrival of digital technologies, or even of television before them. But this does not mean that new technologies are of no consequence. Quite the contrary. It is one thing to have a proclivity; it is another to have a proclivity and inhabit a material culture that is designed to exploit that proclivity, and in a manner that is contrary to your self-interest and well-being.
10. Human beings have, of course, always lived in information-rich environments. Step into the woods, and you're surrounded by information and stimuli. But the nature of the information matters. Modern technological environments present us with an abundance of symbolically encoded information, which is often designed with a view to hijacking or soliciting our attention. Which is to say that our media environments aggressively beckon us in a way that an oak tree does not. The difference might be worth contemplating.
Natural, which is to say non-human, environments can suddenly demand our attention. At one point, Klein and Hari discuss a sudden thunderclap, which is one example of how this can happen. And I can remember once hearing the distinctive sound of a rattlesnake while hiking on a trail. In cases like these, the environment calls us decidedly to attention. It seems, though, that, ordinarily, non-human environments present themselves to us in a less demanding manner. They may beckon us, but they do not badger us or overwhelm our faculties in a manner that generates an experience of exhaustion or fatigue.
In a human-built environment rich with symbolically encoded information—a city block, for example, or a suburban strip mall—our attention is solicited in a more forceful manner. And the relevant technologies do not have to be very sophisticated to demand our attention in this way. Literate people are compelled to read texts when they appear before them. If you know how to read and an arrangement of letters appears before you, you can hardly help but read them if you notice them (and, of course, they can be designed so as to lure or assault your attention). By contrast, naturally encoded information, such as might be available to us when we attend to how a clump of trees has grown on a hillside or the shape a stream has cut in the landscape, does not necessarily impress itself upon us as significant in the literal sense of the word, as having meaning or indicating something to us. From this perspective, attention is bound up with forms of literacy. I cannot be hailed by signs I cannot recognize as such, as meaning something to me. So then, we might say that our attention is more readily elicited by that which presents itself as being somehow "for me," by that which, as Thomas de Zengotita has put it, flatters me by seeming to center the world on me.
If I may press into this distinction a bit further, the question of purpose or intent seems to matter a great deal, too. When I hike in the woods, there's a relative parity between my capacity to direct my attention, on the one hand, and the capacity of the world around me to suddenly demand it of me, on the other. I am better able to direct my attention as I desire, and to direct it in accord with my purpose. I will seek out what I need to know based on what I have set out to do. If I know how to read the signs well, I will seek those features of the landscape that can help me navigate to my destination, for example. But in media-rich human-built environments, my capacity to direct my attention in keeping with my purposes is often at odds with features of the environment that want to command my attention in keeping with purposes that are not my own. It is the difference between feeling challenged to rise to an occasion that ultimately yields an experience of competence and satisfaction, and feeling assaulted by an environment explicitly designed to thwart and exploit me.
11. Thomas de Zengotita, writing in Mediated: How the Media Shapes Your World and the Way You Live In It (2005):
“Say your car breaks down in the middle of nowhere—the middle of Saskatchewan, say. You have no radio, no cell phone, nothing to read, no gear to fiddle with. You just have to wait. Pretty soon you notice how everything around you just happens to be there. And it just happens to be there in this very precise but unfamiliar way […] Nothing here was designed to affect you. It isn’t arranged so that you can experience it, you didn’t plan to experience it, there isn’t any screen, there isn’t any display, there isn’t any entrance, no brochure, nothing special to look at, no dramatic scenery or wildlife, no tour guide, no campsites, no benches, no paths, no viewing platforms with natural-historical information posted under slanted Plexiglas lectern things—whatever is there is just there, and so are you […] So that’s a baseline for comparison. What it teaches us is this: in a mediated world, the opposite of real isn’t phony or illusional or fiction—it’s optional […] We are most free of mediation, we are most real, when we are at the disposal of accident and necessity. That’s when we are not being addressed. That’s when we go without the flattery intrinsic to representation.”
12. It’s interesting to me that de Zengotita’s baseline scenario would not play out quite the same way in a pre-modern cultural setting. He is presuming that nature is mute, meaningless, and literally insignificant. But—anthropologists please correct me—this view would be at odds with most if not all traditional cultures. In the scenario de Zengotita describes, premodern people would not necessarily find themselves either alone or unaddressed, and I think this indirectly tells us something interesting about attention.
Attention discourse tends to treat attention chiefly as the power to focus mentally on a text or task, which is to say on what human beings do and what they make. Attention in this mode is directed toward what we intend to do. We might say that it is attention in the form of actively searching rather than receiving, and this makes sense if we don’t have an account of how attention as a form of openness might be rewarded by our experience in the world. Perhaps the point is that there’s a tight correlation between what I conceive of as meaningful and what I construe as a potential object of my attention. If, as Arendt for example has argued, in the modern world we only find meaning in what we make, then we will neglect forms of attention that presuppose the meaningfulness of the non-human world.
13. Robert Zaretsky on “Simone Weil’s Radical Conception of Attention”:
Weil argues that this activity has little to do with the sort of effort most of us make when we think we are paying attention. Rather than the contracting of our muscles, attention involves the canceling of our desires; by turning toward another, we turn away from our blinding and bulimic self. The suspension of our thought, Weil declares, leaves us “detached, empty, and ready to be penetrated by the object.” To attend means not to seek, but to wait; not to concentrate, but instead to dilate our minds. We do not gain insights, Weil claims, by going in search of them, but instead by waiting for them: “In every school exercise there is a special way of waiting upon truth, setting our hearts upon it, yet not allowing ourselves to go out in search of it… There is a way of waiting, when we are writing, for the right word to come of itself at the end of our pen, while we merely reject all inadequate words.” This is a supremely difficult stance to grasp. As Weil notes, “the capacity to give one’s attention to a sufferer is a very rare and difficult thing; it is almost a miracle; it is a miracle. Nearly all those who think they have this capacity do not possess it.”
14. As I see it, there is a critical question that tends to get lost in the current wave of attention discourse: What is attention for? Attention is taken up as a capacity that is being diminished by our technological environment, with the emphasis falling on digitally induced states of distraction. But what are we distracted from? If our attention were more robust or better ordered, to what would we give it? Pascal had an answer, and Weil did, too, it seems to me. I'm not so sure that we do, and I wonder whether that leaves us more susceptible to the attention economy. Often the problem seems to get framed as little more than the inability to read long, challenging texts. I enjoy reading long, challenging texts, and, like Carr and Hari, I find that doing so has become harder. But I don't think reading long, challenging texts is essential to human flourishing, nor is it the most important end toward which our attention might be ordered.
We have, it seems, an opportunity to think a bit more deeply not only about the challenges our techno-social milieu presents to our capacity to attend to the world, challenges I suspect many of us feel keenly, but also about the good toward which our attention ought to be directed. What deserves our attention? What are the goods for the sake of which we ought to cultivate our capacity for attention?
On this score, attention discourse often strikes me as an instance of a larger pattern that characterizes modern society: a focus on means rather than ends. I’d say it also illustrates the fact that it is far easier to identify the failures and disorders of contemporary society than it is to identify the goods that we ought to be pursuing. In “Tradition and the Modern Age,” Hannah Arendt spoke of the “ominous silence that still answers us whenever we dare to ask, not ‘What are we fighting against’ but ‘What are we fighting for?’”
As I've suggested before, maybe the problem is not that our attention is a scarce resource in a society that excels in generating compelling distractions, but rather that we have a hard time knowing what to give our attention to at any given moment. That said, I would not want to discount the degree to which, for example, economic precarity also robs people of autonomy on this front. And I also appreciated Hari's discussion of how our attention is drained not only by the variegated media spectacle that envelops us throughout our waking hours, but also by other conditions, such as sleeplessness, that diminish the health of our bodies taken as a whole.
15. Hari seems convinced that the heart of the problem is the business model. It is the business model of the internet, driven by ad revenue, that pushes companies to design their digital tools for compulsive engagement. This is, I think, true enough. The business model has certainly exacerbated the problem. But I'm far less sanguine than Hari appears to be about whether changing the business model will adequately address the problem, much less solve it. When asked by Sean Illing about what would happen if internet companies moved to a different business model, Hari's responses were not altogether inspiring. He imagines that under alternative models, such as subscription-based services, for example, companies would be incentivized to offer better products: "Facebook and other social media companies have to ask, 'What does Sean want?' Oh, Sean wants to be able to pay attention. Let's design our app not to maximally hack and invade his attention and ruin it, but to help him heal his attention." In my view, this overestimates the power of benevolent design and underestimates the internal forces that lead us to seek out distraction. Something must, at the end of the day, be asked of us, too.
16. Subtle shifts in language can sometimes have surprising consequences. The language of attention seems particularly loaded with economic and value-oriented metaphors, such as when we speak of paying attention or imagine our attention as a scarce resource we must either waste or hoard. However, to my ears, the related language of attending to the world does not carry these same connotations. Attention and attending are etymologically related to the Latin word attendere, which suggested, among other things, the idea of “stretching toward” something. I like this way of thinking about attention, not as a possession in limited supply, theoretically quantifiable, and ready to be exploited, but rather as a capacity to actively engage the world—to stretch ourselves toward it, to reach for it, to care for it, indeed, to tend it.
Hari and other critics of the attention economy are right to be concerned, and they are right about how our technological environment tends to have a corrosive effect on our attention. Right now, I’m inclined to put it this way: our dominant technologies excel at exploiting our attention while simultaneously eroding our capacity to attend to the world.
Klein and Illing, while both sympathetic to Hari’s concerns, expressed a certain skepticism about his proposals. That’s understandable. In this case, as in so many others, I don’t believe that policy tweaks, regulations, shifting economic models, or newer technologies built on the same assumptions will solve the most fundamental challenges posed by our technological milieu. Such measures may have their role to play, no doubt. But I would characterize these measures as grand but ultimately inadequate gestures that appeal to us exactly to the degree that they appear to require very little of us while promising to deliver swift, technical solutions. For my part, I think more modest and seemingly inadequate measures, like tending more carefully to our language and cultivating ways of speaking that bind us more closely to the world, will, in the admittedly long, very long run, prove more useful and more enduring.
Postscript: In his opening comments, Klein makes the following observation: “And the strangest thing to me, in retrospect, about the education I received growing up — the educations most of us receive — is how little attention they give to attention.”
Around 2014 or so, I began to think that one of my most important roles as a teacher was to help students think more deliberately about how they cultivated their attention. I was helped in thinking along these lines by a 2013 essay by Jennifer Roberts, “The Power of Patience.” In it, Roberts wrote the following:
During the past few years, I have begun to feel that I need to take a more active role in shaping the temporal experiences of the students in my courses; that in the process of designing a syllabus I need not only to select readings, choose topics, and organize the sequence of material, but also to engineer, in a conscientious and explicit way, the pace and tempo of the learning experiences. When will students work quickly? When slowly? When will they be expected to offer spontaneous responses, and when will they be expected to spend time in deeper contemplation?
I want to focus today on the slow end of this tempo spectrum, on creating opportunities for students to engage in deceleration, patience, and immersive attention. I would argue that these are the kind of practices that now most need to be actively engineered by faculty, because they simply are no longer available “in nature,” as it were. Every external pressure, social and technological, is pushing students in the other direction, toward immediacy, rapidity, and spontaneity—and against this other kind of opportunity. I want to give them the permission and the structures to slow down.
As promised, here is the audio version of the last installment, “The Dream of Virtual Reality.” To those of you who find the audio version helpful, thank you for your patience!
And while I’m at it, let me pass along a few links, a couple of which are directly related to this installment.
Links
Just after I initially posted “The Dream of Virtual Reality,” Evan Selinger reached out with a link to his interview of David Chalmers about Reality+. I confess to thinking that I might have been a bit uncharitable in taking Chalmers’s comments in a media piece as the basis of my critique, but after reading the interview I now feel just fine about it.
And, apropos my comments about science fiction, here’s a discussion between philosophers Nigel Warburton and Eric Schwitzgebel about the relationship between philosophy and science fiction. It revolves around a discussion of five specific sci-fi texts Schwitzgebel recommends.
Unrelated to virtual reality, let me pass along this essay by Meghan O’Gieblyn: “Routine Maintenance: Embracing habit in an automated world.” It is an excellent reflection on the virtues of habit. It’s one of those essays I wish I had written. In fact, I have a draft of a future installment that I had titled “From Habit to Habit.” It will aim at something a bit different, but I’m sure that it will now incorporate some of what O’Gieblyn has written. You may also recognize O’Gieblyn as the author of God, Human, Animal, Machine, which I’ve got on my nightstand and will be reading soon.
In a brief discussion of Elon Musk’s Neuralink in his newsletter New World Same Humans, David Mattin makes the following observation:
A great schism is emerging. It’s between those who believe we should use technology to transcend all known human limits even if that comes at the expense of our humanity itself, and those keen to hang on to recognisably human forms of life and modes of consciousness. It may be a while yet until that conflict becomes a practical and widespread reality. But as Neuralink prepares for its first human trials, we can hear that moment edging closer.
I think this is basically right, and I’ve been circling around this point for quite some time. But I would put the matter a bit differently: I’m not so sure that it will be a while until that conflict becomes a practical and widespread reality. I think it has been with us for quite some time, and, in my less hopeful moments, I tend to think that we have already crossed some critical threshold. As I’ve put it elsewhere, transhumanism is the default eschatology of the modern technological project.
Podcasts
Lastly, I’ve been neglecting to pass along links to some podcasts I’ve been on recently. Let me fill you in on a couple of these. Last fall, I had the pleasure of talking to historian Lee Vinsel for his new podcast, People and Things. We talked mostly about Illich, and it was a great conversation. I commend Vinsel’s whole catalog to you. Peruse at your leisure. Certainly be sure to catch the inaugural episode with historian Ruth Schwartz Cowan, the author of one of the classic texts in the history of technology, More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave.
And just today my conversation with the Irish economist David McWilliams was posted. We talked mostly about the so-called metaverse, and while we focused on my early “Notes From the Metaverse,” it also pairs nicely with the latest installment. My thanks to David and his team for their conviviality. And to the new set of readers from Ireland, the UK, and beyond—welcome!
Postscript
I cannot neglect to mention that, as was brought to my attention, the promo video for Facebook’s VR platform, Horizons, from which I took a screenshot for the essay, has a comical disclaimer near the bottom of the opening shot. As philosopher Ariel Guersenzvaig noted on Twitter, “‘Virtual reality is genuine reality’? Be that as it may, the VR footage is not genuine VR footage!”
Welcome to a special installment of the Convivial Society featuring my conversation with Andrew McLuhan. I can’t recall how or when I first encountered the work of Marshall McLuhan; I think it might’ve been through the writing of one of his most notable students, Neil Postman. I do know, however, that McLuhan, and others like Postman and Walter Ong who built on his work, became a cornerstone of my own thinking about media and technology. So it was a great pleasure to speak with his grandson Andrew, who is now stewarding and expanding the work of his grandfather and his father, Eric McLuhan, through the McLuhan Institute, of which he is the founder and director.
I learned a lot about McLuhan through this conversation and I think you’ll find it worth your time. A variety of resources and sites were mentioned throughout the conversation, and I’ve tried to provide links to all of those below. Above all, make sure you check out the McLuhan Institute and consider supporting Andrew’s work through his Patreon page.
Links
McLuhan Institute’s Twitter account and Instagram account
Andrew McLuhan’s Twitter account
The image of McLuhan and Edmund Carpenter on the beach which Andrew mentions can be seen at the :30 mark of this YouTube video featuring audio of Carpenter describing his friendship with McLuhan
Eric McLuhan’s speech, “Media Ecology in the 21st Century,” on the McLuhan Institute’s YouTube page (the setting is a conference in Bogotá, Colombia, so McLuhan is introduced in Spanish, but he delivers his talk in English)
Laws of Media: The New Science by Marshall McLuhan and Eric McLuhan
Marshall McLuhan/Norman Mailer exchange
Marshall McLuhan/W.H. Auden/Buckminster Fuller exchange
Jeet Heer’s essay on McLuhan from 2011
Understanding Media: The Extensions of Man (Critical Edition)
Understanding Me: Lectures and Interviews
Welcome to the Convivial Society, a newsletter about technology and culture. In this installment I write a bit about burnout, exhaustion, and rest. It doesn’t end with any neat solutions, but that’s kind of the point. However, I’ll take up the theme again in the next installment, and will hopefully end on a more promising note.
As many of you know, the newsletter operates on a patronage model. The writing is public, there is no paywall, but I welcome the support of readers who value the work. Not to be too sentimental about it, but thanks to those who have become paying subscribers, this newsletter has become a critical part of how I make my living. And for that I’m very grateful. Recently, a friend inquired about one-time gifts as the year draws to a close; however, this platform doesn’t allow that option. So for those who would like to support the Convivial Society but for whom the usual subscription rates are a bit too steep, here’s a 30% discounted option that works out to about $31 for the year or about $2.50 a month. The option is good through the end of December. Cheers!
Several years ago, I listened to Terry Gross interview the son of a prominent religious leader, who had publicly broken with his father’s legacy and migrated to another, rather different branch of the tradition. Gross asked why he had not simply let his faith go altogether. His reply has always stuck with me. He explained that it was, in part, because he was the kind of person whose first instinct, upon deciding to become an atheist, would be to ask God to help him be an atheist.
I thought about his response recently when I encountered an article with the following title: “The seven types of rest: I spent a week trying them all. Could they help end my exhaustion?”
My first, admittedly ill-tempered reaction was to conclude that Betteridge’s Law had been validated once again. In case this is the first time you’re hearing of Betteridge’s Law, it states that any headline ending in a question mark can be answered with the word no. I think you’ll find that it holds more often than not.
With the opening anecdote in mind, my second, slightly more considered response was to conclude that some of us have become the kind of people whose first instinct, upon deciding to break loose from the tyranny of productivity and optimization, would be to make a list. Closely related to this thought was another: some of us have become the kind of people whose first instinct, upon deciding to reject pathological consumerism, would be to buy a product or service which promised to help us do so.
And I don’t think we should necessarily bracket the religious context of the original formulation in the latter two cases. The structure is altogether analogous: a certain pattern of meaning, purpose, and value has become so deeply engrained that we can hardly imagine operating without it. This is why the social critic Ivan Illich called assumptions of this sort “certainties” and finally concluded that they needed to be identified and challenged before any meaningful progress on social ills could be made.
As it turned out, that article on the different forms of rest takes a recent book as its point of departure. The book identified and explored the seven forms of rest—physical, emotional, mental, social, and so on—which the author of the article sampled for a day apiece. Probably not what the book’s author had in mind. Whatever one makes of the article, or the book upon which it is based, the problem to which it speaks, a sense of permanent burnout or chronic exhaustion, is, as far as I can tell, real and pervasive, and it is a symptom of a set of interlocking disorders afflicting modern society, which have been exacerbated and laid bare over the last two years.
Others have written about this phenomenon perceptively and eloquently, particularly if we consider discussions of rest, exhaustion, and burnout together with similar discussions about the changing nature and meaning of work. The writing of Jonathan Malesic and Anne Helen Petersen comes immediately to mind. I won’t do the matter justice in the way they and others do, but this is a subject I’ve been thinking about a good bit lately so I’ll offer some brief observations for your consideration.
And I think I’ll break these reflections up into two or three posts beginning with this one. As I think about what we might variously describe as the exhaustion, fatigue, or burnout that characterizes our experience, several obvious sources come to mind, chief among them economic precarity and a global pandemic. The persistent mental and emotional tax we pay to use social media doesn’t help, of course. But my attention is drawn to another set of factors: a techno-economic milieu that is actively hostile to human well-being, for example, or a certain programmed aimlessness that may undermine the experience of accomplishment or satisfaction. So let me take aim at that first point in this installment and turn to the second in a forthcoming post.
Let’s start by acknowledging that we’re talking about a longstanding problem, which likely varies in intensity from time to time. I’ve mentioned on more than a few occasions that the arc of digital culture bends toward exhaustion, but it does so as part of a much longer cultural trajectory. Here’s another passage that has stayed with me years after first encountering it. It is from Patrick Leigh Fermor’s A Time to Keep Silence, the famed travel writer’s account of his stays at several monasteries across Europe and Turkey circa 1950. Early on, Fermor recounted the physical effects of his first stay in a monastery after recently having been in Paris. “The most remarkable preliminary symptoms,” Fermor began, “were the variations of my need of sleep.” “After initial spells of insomnia, nightmare and falling asleep by day,” he continues,
I found that my capacity for sleep was becoming more and more remarkable: till the hours I spent in or on my bed vastly outnumbered the hours I spent awake; and my sleep was so profound that I might have been under the influence of some hypnotic drug. For two days, meals and the offices in the church — Mass, Vespers and Compline — were almost my only lucid moments. Then began an extraordinary transformation: this extreme lassitude dwindled to nothing; night shrank to five hours of light, dreamless and perfect sleep, followed by awakenings full of energy and limpid freshness.
If your experience is anything like mine, that last line will be the most unrelatable bit of prose you’ll read today. So to what did Fermor attribute this transformation? “The explanation is simple enough:” he writes,
the desire for talk, movements and nervous expression that I had transported from Paris found, in this silent place, no response or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which must be the common property of all our contemporaries, broke loose and swamped everything. No demands, once I had emerged from that flood of sleep, were made upon my nervous energy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but appeared to have lost their dragonish validity.
There’s a lot that’s worth lingering over in that paragraph—how digital devices have multiplied the automatic drains, for example—but I want to focus our attention on this one phrase: “the tremendous accumulation of tiredness, which must be the common property of all our contemporaries.”
Now there’s a relatable sentiment. I emphasize it only to make the point that while, as Petersen wrote in a 2019 essay, burnout may be the “permanent residence” of the millennial generation, it can also be characterized as a more recent iteration of a longstanding condition. And the reason for this is that the dominant techno-social configuration of modern society demands that human beings operate at a scale and pace that is not conducive to their well-being—let alone rest, rightly understood—but by now most of us have been born into this state of affairs and take it more or less for granted.
For example, in a recent installment of her newsletter, Petersen discussed how existing social and economic structures make it so we always pay, in one way or another, for taking time to rest, and, of course, that’s if we are among those who are fortunate enough to do so. In the course of her discussion she makes the following pointed observation:
The ideal worker, after all, is a robot. A robot never tires, never needs rest, requires only the most basic of maintenance. When or if it collapses, it is readily replicated and replaced. In 24/7: Late Capitalism and the Ends of Sleep, Jonathan Crary makes the haunting case that we’re already training our bodies for this purpose. The more capable you are of working without rest of any form — the more you can convince your body and yourself to labor as a robot — the more valuable you become within the marketplace. We don’t turn off so much as go into “sleep mode”: ready, like the machines we’ve forced our bodies to approximate, to be turned back on again.
This is yet another example of the pattern I sought to identify in a recent installment: the human-built world is not built for humans. In that essay, I was chiefly riffing on Illich, who argued that “contemporary man attempts to create the world in his image, to build a totally man-made environment, and then discovers that he can do so only on the condition of constantly remaking himself to fit it.”
Illich is echoing the earlier work of the French polymath Jacques Ellul, to whom Illich acknowledged his debt in a 1994 talk I’ve cited frequently. In his best known book, The Technological Society, Ellul argued that by the early 20th century Western societies had become structurally inhospitable to human beings because technique had become their ordering principle. These days I find it helpful to gloss what technique meant for Ellul as the tyrannical imperative to optimize everything.
So, recall Petersen’s observation about the robot being the ideal worker. It’s a remarkably useful illustration of Ellul’s thesis. It’s not that any one technology has disordered the human experience of work. Rather, it’s that technique, the ruthless pursuit of efficiency or optimization, as an ordering principle has determined how specific technologies and protocols are to be developed and integrated into the work environment. The resulting system, reflecting the imperatives of technique, is constructed in such a way that the human being qua human being becomes an impediment, a liability to the functioning of the system. He or she must become mechanical in their performance in order to fit the needs of the system, be it a warehouse floor or a byzantine bureaucracy. It’s the Taylorite fantasy of scientific management now abetted by a vastly superior technical apparatus. The logic, of course, finally suggests the elimination of the human element. When we design systems that work best the more machine-like we become, we shouldn’t be surprised when the machines ultimately render us superfluous.
But only under certain circumstances can the human element be eliminated. For the most part, we carry on in techno-social environments that are either indifferent to a certain set of genuine human needs or altogether hostile to them. For this reason, Ellul argued, a major subset of technique emerges. Ellul referred to these as human techniques because their aim was to continually manage the human element in the technological system so that it would function adequately.
“In order that he not break down or lag behind (precisely what technical progress forbids),” Ellul believed, “[man] must be furnished with psychic forces he does not have in himself, which therefore must come from elsewhere.” That “elsewhere” might be pharmacology, propaganda, or, to give some more recent examples, mindfulness apps or seven techniques for finding rest.
“The human being,” Ellul writes,
is ill at ease in this strange new environment, and the tension demanded of him weighs heavily on his life and being. He seeks to flee—and tumbles into the snare of dreams; he tries to comply and falls into the life of organizations; he feels maladjusted—and becomes a hypochondriac. But the new technological society has foresight and ability enough to anticipate these human reactions. It has undertaken, with the help of techniques of every kind, to make supportable what was not previously so, and not, indeed, by modifying anything in man's environment but by taking action upon man himself.
In his view, human techniques are always undertaken in the interest of preserving the system and adapting the human being to its demands. Ellul explained the problem at length, but here’s a relatively condensed expression of the argument:
[W]e hear over and over again that there is ‘something out of line’ in the technical system, an insupportable state of affairs for a technician. A remedy must be found. What is out of line? According to the usual superficial analysis, it is man that is amiss. The technician thereupon tackles the problem as he would any other […] Technique reveals its essential efficiency in discerning that man has a sentimental and moral life which can have great influence on his material behavior and in proposing to do something about such factors on the basis of its own ends. These factors are, for technique, human and subjective; but if means can be found to act upon them, to rationalize them and bring them into line, they need not be a technical drawback. Of course, man as such does not count.
One recurring rejoinder to critiques of new or emerging technologies, particularly when it is clear that they are unsettling existing patterns of life for some, usually those with little choice in the matter, is to claim that human beings are remarkably resilient and adaptable. The fact that this comes off as some sort of high-minded compliment to human nature does a lot of work, too. But this claim tells us very little of merit because it does not address the critical issue: is it good for human beings to adapt to the new state of affairs? After all, as Ellul noted, human beings can be made to adapt to all manner of inhumane conditions, particularly in wartime. The fact that they do so may be to the credit of those who do, but not necessarily to the circumstances to which they must adapt. From this perspective, praise of humanity’s adaptability can look either like a bit of propaganda or, more generously, a case of Stockholm syndrome.
So let’s come back to where we started with Ellul’s insights in mind. There are two key points. First, our exhaustion—in its various material and immaterial dimensions—is a consequence of the part we play in a techno-social milieu whose rhythms, scale, pace, and demands are not conducive to our well-being, to say nothing of the well-being of other creatures and the planet we share. Second, the remedies to which we often turn may themselves be counterproductive because their function is not to alter the larger system which has yielded a state of chronic exhaustion but rather to keep us functioning within it. Moreover, not only do the remedies fail to address the root of the problem, but there’s also a tendency to carry into our efforts to find rest the very same spirit which animates the system that left us tired and burnt out. Rest takes on the character of a project to be completed or an experience to be consumed. In neither case do we ultimately find any sort of meaningful and enduring relief or renewal.
Welcome again to the Convivial Society. This installment follows relatively quickly on the last, and you may be forgiven for not having yet made your way through that one, which came in at 4,500 words (sorry). But, we have some catching up to do, and this essay is only half as long. Read on.
In her testimony before the Senate, Facebook whistleblower Frances Haugen made an observation that caught my attention.
When asked by Senator Todd Young of Indiana to “discuss the short and long-term consequences of body image issues on these platforms,” Haugen gave the following response, emphasis mine:
The patterns that children establish in their teenage years live with them for the rest of their lives. The way they conceptualize who they are, how they conceptualize how they interact with other people, are patterns and habits that they will take with them as they become adults, as they themselves raise children. I’m very scared about the upcoming generation because when you and I interact in person, and I say something mean to you, and I see you wince or I see you cry, that makes me less likely to do it the next time, right? That’s a feedback cycle. Online kids don’t get those cues and they learn to be incredibly cruel to each other and they normalize it. And I’m scared of what their lives will look like, where they grow up with the idea that it’s okay to be treated badly by people who allegedly care about them? That’s a scary future.
There is much that is worth discussing in Haugen’s testimony, but these comments stood out to me because they resonated with themes I’ve been working with for some time. Specifically, I’ve been thinking about the relationship among the virtue of pity, embodied presence, and online interactions, which, it seems to me, is precisely what Haugen has in view here.
Back in May I wrote a short post that speculated about a tendency to become forgetful of the body in online situations. “If digital culture tempts us to forget our bodies,” I wondered, “then it may also be prompting us to act as if we were self-sufficient beings with little reason to care or expect to be cared for by another.”
I wrote those lines with Alasdair MacIntyre’s book, Dependent Rational Animals, in mind. In it, MacIntyre seeks to develop an account of the virtues and virtuous communities that takes our bodies, and thus our dependence, as a starting point. So, for example, he writes,
What matters is not only that in this kind of community children and the disabled are objects of care and attention. It matters also and correspondingly that those who are no longer children recognize in children what they once were, that those who are not yet disabled by age recognize in the old what they are moving towards becoming, and that those who are not ill or injured recognize in the ill and injured what they often have been and will be and always may be.
From this starting point, MacIntyre makes the case for what he calls the virtues of acknowledged dependence, explaining that “what the virtues require from us are characteristically types of action that are at once just, generous, beneficent, and done from pity.” “The education of dispositions to perform just this type of act,” he continues, “is what is needed to sustain relationships of uncalculated giving and graceful receiving.”
Among this list of characteristics, pity was the one that most caught my attention. It is a quality that may strike many as ambivalent at best. The phrase, “I don’t want your pity,” is a common trope in our stories and it is often declared in defiantly heroic cadences. And, indeed, even when the quality has been discussed as a virtue in the tradition, writers have seen the need to distinguish it from counterfeits bearing a surface resemblance but which are often barely veiled expressions of condescension.
MacIntyre, wanting to avoid just this association with condescension, uses instead the Latin word misericordia. Thus MacIntyre, drawing on Aquinas, writes, “Misericordia is grief or sorrow over someone else’s distress […] just insofar as one understands the other’s distress as one’s own. One may do this because of some preexisting tie to the other—the other is already one’s friend or kin—or because in understanding the other’s distress one recognizes that it could instead have been one’s own.”
This latter observation suggests the universalizing tendency of the virtue of pity, that it can recognize itself in the other regardless of whether the other is a personal relation or kin or a member of the same tribe. For this reason, pity can, of course, be the source of misguided and even oppressive actions. I say “of course,” but maybe it is not, in fact, obvious. I’m thinking, for instance, of Ivan Illich warning that something like pity turned into a rule or an institutional imperative can eventually lead to bombing the neighbor for his own good. And it’s worth noting that, in MacIntyre’s view, pity must work in harmony with justice, benevolence, and generosity—each of these virtues informing and channeling the others.
Illich, however, says as much in discussing his interpretation of the parable of the good Samaritan. In his view, the parable teaches the freedom to care for the other regardless of whether they are kin or members of the same tribe. Indeed, given the ethics of the day, the Samaritan (Illich sometimes called him the Palestinian to drive the point home to a modern audience) had little reason to care for the Jew, who had been beaten and left by the side of the road. Certainly he had much less reason to do so than the priest and Levite who callously pass him by. And in Illich’s telling, as I read him, it is precisely the flesh-to-flesh nature of the encounter that constitutes the conditions of the call the Samaritan answers to see in the other someone worthy of his attention and care.
Which again makes me wonder about the degree to which pity is activated or called forth specifically in the context of the fully embodied encounter, whether this context is not the natural habitat of pity, and what this means for online interactions where embodied presence cannot be fully realized.
I thought, too, of a wise passage from Tolkien in The Fellowship of the Ring. Many of you will know the story well. For those who don’t, one of the principal characters, Frodo, refers back to a moment, many years in the story’s past, when another character, Bilbo, passed up an opportunity to kill Gollum, who is a complicated character responsible for much mischief.
“What a pity that Bilbo did not stab that vile creature when he had a chance!” Frodo declares in conversation with the wizard Gandalf.
“Pity? It was Pity that stayed his hand. Pity, and Mercy: not to strike without need,” the wizard replies.
The exchange continues thus:
“I am sorry,” said Frodo. “But I am frightened; and I do not feel any pity for Gollum.”
“You have not seen him,” Gandalf broke in.
“No, and I don’t want to,” said Frodo. “I can’t understand you. Do you mean to say that you, and the Elves, have let him live on after all those horrible deeds? Now at any rate he is as bad as an Orc, and just an enemy. He deserves death.”
“Deserves it! I daresay he does. Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgement. For even the very wise cannot see all ends. I have not much hope that Gollum can be cured before he dies, but there is a chance of it. And he is bound up with the fate of the Ring. My heart tells me that he has some part to play yet, for good or ill, before the end; and when that comes, the pity of Bilbo may rule the fate of many — yours not least.”
There are many things worth noting in this exchange but I’ll draw your attention to just three of them.
First, Frodo justifies his lack of pity by explaining that he is afraid. And, indeed, if it were possible to measure such things, I suspect we would find that fear rather than hate is at the root of many of our social disorders. Fear distorts our capacity to see the other well. It frames them merely as a threat and allows us to rationalize our lack of regard for their well-being under the irrefutable sign of self-preservation.
Second, Gandalf seems to believe that Frodo might change his tune once he has seen Gollum. Somehow the sight of Gollum, which depends on their bodily presence before one another, would be conducive to the experience of pity. This is the critical point for my purposes here.
Third, we might say, with MacIntyre in mind, that through Gandalf’s speech Tolkien frames pity not only as a virtue of acknowledged dependence but also as a virtue of acknowledged ignorance. “For even the very wise cannot see all ends.” Perhaps ignorance is merely another form of dependence. When we do not know, we must depend on others who do. But it may be worth distinguishing ignorance from dependence. Even the strong can be ignorant. Either the ignorance is acknowledged or it is not. But it is true that in the end failing to acknowledge either our dependence or our ignorance may amount to the same thing: the reckless exercise of what Albert Borgmann has called regardless power.
There is one other literary case study of the link between bodies and pity that came to mind. It is found in the Iliad, Homer’s tale of the wrath of Achilles set during the Trojan War. Near the end of the epic, Achilles has allowed his wrath, arising from the killing of his friend Patroclus, to drive him into a murderous frenzy during which he has lashed out at the gods themselves, killed Hector, and, disregarding the moral obligation to honor the body of the dead, dragged Hector’s body from the back of a chariot. Through it all he has refused food and drink, seeming to forget his bond with other mortals, as if violence alone could sustain him. In all of this, he illustrates a pattern: those who act without regard to the moral and physical limits implicit in the human condition do not become as the gods but rather descend into an inhuman, bestial state.
In the climactic scene of the story, Hector’s father, King Priam, with the aid of the gods and at great personal risk, makes a clandestine nighttime visit to Achilles’s tent. He is there to beseech Achilles for the body of Hector his son. When he is alone with Achilles, Priam entreats him to “Remember your own father, great godlike Achilles.” He goes on:
“Revere the gods, Achilles! Pity me in my own right,
remember your own father! I deserve more pity . . .
I have endured what no one on earth has ever done before—
I put to my lips the hands of the man who killed my son.”
Those words stirred within Achilles a deep desire
to grieve for his own father. Taking the old man’s hand
he gently moved him back. And overpowered by memory
both men gave way to grief. Priam wept freely
for man-killing Hector, throbbing, crouching
before Achilles’ feet as Achilles wept himself,
now for his father, now for Patroclus once again,
and their sobbing rose and fell throughout the house.
Pity again, and again the face-to-face encounter. It is this encounter that draws Achilles back to the mortal realm—the realm of limits, sorrow, memory, custom, and death. And signaling this reentry into the common human condition, Achilles says to Priam, “So come—we too, old king, must think of food.”
It is worth noting the obvious at this point: the fullness of embodied presence is no guarantee that we will take pity on one another or recognize ourselves in the other. People can be horrendously cruel to one another even when confronted with the body of another, something the Iliad also teaches us if we had not yet learned the lesson in more bitter fashion.
Simone Weil, who wrote a remarkable meditation on the epic titled “The Iliad: Or, the Poem of Force,” knew this well. “The true hero, the true subject, the center of the Iliad is force,” Weil declares in the opening lines. “Force employed by man, force that enslaves man, force before which man’s flesh shrinks away. In this work, at all times, the human spirit is shown as modified by its relations with force, as swept away, blinded, by the very force it imagined it could handle, as deformed by the weight of the force it submits to.”
“To define force —” she writes, “it is that x that turns anybody who is subjected to it into a thing. Exercised to the limit, it turns man into a thing in the most literal sense: it makes a corpse out of him.”
Later in the essay, she observes that “the man who is the possessor of force seems to walk through a non-resistant element; in the human substance that surrounds him nothing has the power to interpose, between the impulse and the act, the tiny interval that is reflection. Where there is no room for reflection, there is none either for justice or prudence.” Several lines further on she writes again of “that interval of hesitation, wherein lies all our consideration for our brothers in humanity.”
These conditions can obviously manifest themselves in contexts far removed from online forums and social media. But a simulation of the experience of power, understood as the collapse of the space between impulse and act, may be more generalized in online environments where a forgetfulness of the body is also a default setting.
The interval of hesitation is not unlike what Haugen described, in very different language, as part of the embodied feedback cycle of human interaction, where a wince and a tear are visible to the one who elicits them from the other. And in this way the idealized frictionless quality of online actions, particularly in the absence of the body, can be understood as an inducement to cruelty. Although inducement may not be quite the right word. Perhaps it is better to say that in online environments, certain critical impediments to cruelty, fragile and tenuous as they already are in the course of human affairs, are lifted.
Looking at these dynamics from another perspective, and with Weil’s analysis in mind, we might also say that in online environments we may be tempted by the illusion of force or power. We are inclined to be forgetful of our bodies and hence also of the virtues of acknowledged dependence, especially pity. And the interval of reflection, which is also the fleeting, ephemeral ground in which the seed of virtue may yield its fruit, is collapsed by design. The result is a situation in which it is easier to imagine the other as an object susceptible to our manipulations and to mistake the absence of friction for the possession of power. Regrettably, these are mutually reinforcing tendencies, which, it should be noted, have little to do with anonymity, and for which there are no obvious technical solutions.
Contexts that sever the relationship between action and presence make it difficult for pity to emerge. Consequently, in her testimony Frances Haugen worried that children whose habits and sensibilities were shaped in online contexts would come to accept, or even expect, cruelty and then carry this acceptance over into the rest of their lives. This is certainly plausible, but it also opens up another possibility: that we reverse or at least complicate the flow of influence. Online platforms are morally formative, but, despite their ubiquity and their Skinner box quality, they are not the only morally formative realities in our lives, or that of our children, unless, of course, we altogether cede that power to them.
Welcome to the Convivial Society, a newsletter about technology and culture. It’s been a bit longer than usual since our last installment, but I’m glad to report that this has been in part a function of some recent developments, which I’m delighted to tell you about. Many of you are reading this because you caught my interview with Ezra Klein back in August. That interview was based on an installment of this newsletter in which I offered 41 questions through which to explore the moral dimensions of a given technology. Well, as it turns out, I’ve sold my first book, based on those questions, to Avid Reader Press, an imprint of Simon & Schuster. As you might imagine, I’m thrilled and immensely grateful.
Naturally, I’ll keep you posted as I write the book and publication day approaches. There’s more than a little work to be done before then, of course. And, in case you were wondering, the newsletter will continue as per usual. In fact, I’ve got about five drafts going right now, so stay tuned.
For now, here is a rather long and meandering, but certainly not comprehensive, discussion of models and metaphors for ordering knowledge, memory, the ubiquity of search, the habits and assumptions of medieval reading, and how information loses its body. I won’t claim this post is tightly argued. Rather, it’s an exercise in thinking about how media order and represent the world to us and how this ordering and representation interacts with our experience of the self.
Here’s a line that struck me recently: “It’s an idea that’s likely intuitive to any computer user who remembers the floppy disk.”
Is that you? It’s definitely me. I remember floppy disks well. But, then, it occurred to me that the author might have the 3.5-inch variety in mind, while I remember handling the older 8-inch disks as well.
In fact, I am even old enough to remember this thing below, although I never did much but store a few lines of code to change the color of the TV screen to which my Tandy was hooked up.
In any case, the idea that is supposedly intuitive to anyone who remembers floppy disks is the directory structure model, or, put otherwise, “the hierarchical system of folders that modern computer operating systems use to arrange files.” In a recent article for The Verge, “File Not Found: A generation that grew up with Google is forcing professors to rethink their lesson plans,” Monica Chin explored anecdotal evidence suggesting that, by somewhere around 2017, some significant percentage of college students found this mental model altogether foreign.
The essay opens with a couple of stories from professors who, when they instructed their students to locate their files or to open a folder, were met with incomprehension, and then proceeds to explore some possible causes and consequences. So, for example, she writes that “directory structure connotes physical placement — the idea that a file stored on a computer is located somewhere on that computer, in a specific and discrete location.” “That’s a concept,” she goes on to add,
that’s always felt obvious to Garland [one of the professors supplying the anecdotal evidence] but seems completely alien to her students. “I tend to think an item lives in a particular folder. It lives in one place, and I have to go to that folder to find it,” Garland says. “They see it like one bucket, and everything’s in the bucket.”
This suggests, of course, the reality that metaphors make sense of things by explaining the unknown (tenor) by comparison to the known (vehicle), but, when the known element itself becomes unknown, then the meaning-making function is lost. Which is to say that files and folders are metaphors that help users navigate computers by reference to older physical artifacts that would already have been familiar to users. But, then, what happens when those older artifacts themselves become unfamiliar? I happen to have one of these artifacts sitting in front of me in my office, but, in truth, I never use it.
Of course, even though I don’t use it myself now, I once did and I haven’t forgotten the logic. I suspect that for others, considerably younger than myself, the only file folder they’ve seen is the one that appears as an icon on their computers. So, perhaps it is the case that the metaphor has simply broken down in the way so many other metaphors do over time when the experiences upon which they depended are lost due to changes in material culture.
Chin points in this direction when she writes, “It’s possible that the analogy multiple professors pointed to — filing cabinets — is no longer useful since many students Drossman’s age spent their high school years storing documents in the likes of OneDrive and Dropbox rather than in physical spaces.” Although, I think she undersells the significance of the observation because she thinks of it as an analogy rather than as a metaphor.
The point seems like a crucial one. Mental categories tend to be grounded in embodied experiences in a material world. Tactile facility with files, folders, and filing cabinets grounded the whole array of desktop metaphors that appeared in the 1980s to organize the user’s experience of a computer. And I think we ought to take this as a case in point of a more general pattern: technological change operates on the shared material basis of our mental categories and, yes, the “metaphors we live by.” Consequently, technological change not only transforms the texture of everyday life, it also alters the architecture and furniture of our mental spaces. Hold on to that point for a moment. We’ll come back to it again. But first, let’s come back to Chin’s article.
Even when students who have some understanding of the logic of the directory structure attempt to use it to organize their files, Chin’s sources suggest that they are unlikely to stick with it. Reporting the case of one graduate student, she writes,
About halfway through a recent nine-month research project, he’d built up so many files that he gave up on keeping them all structured. “I try to be organized, but there’s a certain point where there are so many files that it kind of just became a hot mess,” Drossman says. Many of his items ended up in one massive folder.
In passing, I’ll note that I was struck by the phrase “a hot mess,” if only because the same phrase occurs in the comments from another student in the article. I realize, of course, that it is a relatively popular expression, but I do wonder whether we might be justified in reading something of consequence into it. How do our mental models for organizing information intersect with our experience of the world?
Whatever the case on that score, Chin goes on to put her finger on one more important factor. Writing about why the directory structure is no longer as familiar as it once was, she observes, “But it may also be that in an age where every conceivable user interface includes a search function, young people have never needed folders or directories for the tasks they do.” A bit further on she adds, “Today’s virtual world is largely a searchable one; people in many modern professions have little need to interact with nested hierarchies.”
Similar arguments have been made to explain how some people think about their inboxes. While some are quite adept at using labels, tags, and folders to manage their emails, others will claim that there’s no need to do so because you can easily search for whatever you happen to need. Save it all and search for what you want to find. This is, roughly speaking, the hot mess approach to information management. And it appears to arise both because search makes it a good-enough approach to take and because the scale of information we’re trying to manage makes it feel impossible to do otherwise. Who’s got the time or patience?
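Since we are talking about two competing mental models, it may help to see them side by side in code. Here is a minimal sketch in Python; the folder layout and file names are invented for illustration, not drawn from Chin’s reporting. The directory-structure model retrieves an item by walking a path to its one discrete location, while the hot mess model keeps everything in one bucket and retrieves by search.

```python
# Two toy models of personal information management.
# All folder and file names below are invented for illustration.

# 1. The directory-structure model: every item lives in exactly one
#    place, and you retrieve it by remembering the path to that place.
hierarchy = {
    "Documents": {
        "Teaching": {"syllabus_fall.docx": "...", "roster.csv": "..."},
        "Essays": {"attention_draft.txt": "..."},
    }
}

def retrieve_by_path(tree, path):
    """Walk the nested folders down to a single, discrete location."""
    node = tree
    for part in path:
        node = node[part]  # a misremembered path raises KeyError
    return node

# 2. The "hot mess" model: one big bucket, navigated by search alone.
bucket = ["syllabus_fall.docx", "roster.csv", "attention_draft.txt"]

def retrieve_by_search(files, query):
    """Scan everything and return whatever happens to match."""
    return [name for name in files if query.lower() in name.lower()]

print(retrieve_by_path(hierarchy, ["Documents", "Essays", "attention_draft.txt"]))
print(retrieve_by_search(bucket, "draft"))
```

Notice that the first model asks the user to carry a map in their head, while the second asks only for a fragment of a name; good-enough search makes the map feel optional.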
A Scribal Revolution and the Emergence of the Text
Okay, now it’s time for an 800-year detour. I’ll confess at this point that my interest in this topic is fueled, in part, by my reading of the last book Ivan Illich ever wrote, In the Vineyard of the Text. If you know Illich chiefly as the author of Deschooling Society or Tools for Conviviality, then this book will, I think, catch you off guard. It’s written in a different style and takes up a rather different set of concerns. Its focus is a relatively forgotten revolution in technologies of the written word that occurred around the 12th century and which, in Illich’s view, transformed the intellectual culture of the West and contributed to the rise of the modern individual. If this sounds a bit far-fetched, I’d just invite you to consider the power of media. Not the media, of course, but media of human communication, from language itself to the alphabet and the whole array of technologies built upon the alphabet. Media do just that: they mediate. A medium of communication mediates our experience of the world and the self at a level that is so foundational to our thinking it is easy to lose sight of it altogether. Thus technologies of communication shape how we come to understand both the world and the self. They shape our perception, they supply root metaphors and symbols, they alter the way we experience our senses, they generate social hierarchies of value, and they structure how we remember. I could go on, but you get the point.
So, to keep our starting point in view, the apparent fading of the directory structure model doesn’t matter chiefly because it is making STEM education more challenging for college professors. If it matters, it matters as a clue to some deeper shifts in the undercurrents of our cultural waters. As I read about these students who had trouble grokking directory structures, for example, I remembered Walter Ong’s work on Peter Ramus (or in Latin, Petrus Ramus, but, in fact, Pierre de La Ramée). Ramus was a sixteenth-century scholar who, in Ong’s telling, despite his unspectacular talents, became a focal point of the then-current debates about knowledge and teaching. Ong frames him as a transitional figure with one foot in the late medieval scholastic milieu and another in the “modern” world emerging in the wake of the Renaissance. Ong, who is best remembered today for his work on orality and literacy, cut his teeth on Ramus, his research focusing on how Ramus, in conjunction with the advent of printing, pushed the culture of Western learning further away from the world of the ear (think of the place of dialog in the Platonic tradition) toward the world of the eye. His search for a universal method and logic, which preceded and may have prepared the way for Descartes, yielded a decidedly visual method and logic, complete with charts and schemas. Perhaps a case could be made, and maybe has been made, that this reorientation of human learning and knowing around sight finds its last iteration in the directory structure of early personal computers, whose logic is fundamentally visual. Your own naming of files and folders may presume another kind of logic, but there is no logic to the structure itself other than the one you visualize, which may be why it was so difficult for these professors to articulate the logic to students. In any case, the informational milieu the students describe is one that is not ordered at all. It is a hot mess navigated exclusively by the search function.
Back to Illich. I don’t have a carefully worked out theory of how newer mental models for organizing information will change the way we think about the world and ourselves, but I believe that revisiting some of Illich’s observations about this earlier transition will prove fruitful. Illich himself wrote in the hope that this would be the case.
In the Vineyard of the Text, which is itself a careful analysis of a medieval guide to the art of reading written by Hugh of St. Victor, sets out to make one principal argument that goes something like this: In the 12th century, a set of textual innovations transformed how reading was experienced by the intellectual class. Illich describes it as a shift from monkish reading focused on the book as a material artifact to scholastic reading focused on the text as a mental construct that floats above its material anchorage in a book. (I’m tempted to say manuscript or codex to emphasize the difference between the artifact we call a book and what Hugh of St. Victor would’ve handled.) A secondary point Illich makes throughout this fascinating book is that this profound shift in the culture of the book that shaped Western societies for the rest of the millennium was also entangled with the emergence of a new experience of the self.
So, let’s start with the changing nature of the reader’s relationship to the book and then come back to the corresponding cultural and existential changes.
The Sounding Pages
It’s commonly known that the invention of printing in the 15th century was a momentous development in the history of European culture. Elizabeth Eisenstein’s work, especially, made the case that the emergence of printing revolutionized European society. Without it, it seems unlikely that we get the Protestant Reformation, modern science, or the modern liberal order. Illich was not interested in challenging this thesis, but he did believe that the print revolution had an important antecedent: the differentiation of the text from the book.
To make his case, Illich begins by detailing, as well as the sources allow, what had been the experience of reading prior to the 12th century, what Illich calls monkish reading. This kind of reading was grounded not just in the book generically, but in a particular book. Remember, of course, that books were relatively scarce artifacts and that reproducing them was a laborious task, although often one lovingly undertaken. This much is well known. What might not be as well known is that many features that we take for granted when we read a book had not yet been invented. These include, for example, page numbers, chapter headings, paragraph breaks, and alphabetical indexes. These are some of the dozen or so textual innovations that Illich had in mind when he talks about the transformation of the experience of reading in the 12th century. What they provide are multiple paths into a book. If we imagine the book as an information storage technology (something we can do only on the other side of this revolution) then what these new tools do is solve the problems of sorting and access. They help organize the information in such a way that readers can now dip in and out of what now can be imagined as a text independent of the book.
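To put Illich’s point about sorting and access in terms a reader of screens might recognize, here is a minimal sketch, again in Python and again with invented content: a two-“page” book read straight through, monkish fashion, versus the same book entered sideways through page numbers and an alphabetical index, which together behave like a lookup table.

```python
# A toy illustration of the twelfth-century innovations Illich
# describes. The "book" and its index are invented for illustration.

book = [
    "In the beginning the vineyard was planted...",  # page 1
    "The reader tends the vines of wisdom...",       # page 2
]

def read_through(pages):
    """Monkish reading: begin at the beginning and proceed in order."""
    for number, page in enumerate(pages, start=1):
        print(f"page {number}: {page}")

# Page numbers plus an alphabetical index give readers multiple
# paths into the book; the "text" can now be entered at any point.
index = {"reader": [2], "vineyard": [1], "wisdom": [2]}

def look_up(term):
    """Scholastic reading: dip in and out via the index."""
    for page_number in index.get(term, []):
        print(f"{term!r} -> page {page_number}: {book[page_number - 1]}")

read_through(book)
look_up("wisdom")
```

The index, in other words, is what makes it possible to treat the book as something to be consulted rather than a journey to be undertaken from beginning to end.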
I’ve found it helpful to think about this development by recalling how Katherine Hayles phrased one of the themes of How We Became Posthuman. She sought to show, in her words, “how information lost its body.” Illich is here doing something very similar. The text is information that has lost its body, i.e. the book. According to Illich, until these textual innovations took hold in the 12th century, it was very hard to imagine a text apart from its particular embodiment in a book, a book that would’ve borne the marks of its long history—in the form, for example, of marginalia accruing around the main text.
I’ve also thought about this claim by analogy to the photograph. The photograph is to the book as the image is to the text. This will likely make more sense if you are over 35 or thereabouts. Today, one can have images that live in various devices: a phone, a laptop, a tablet, a digital picture frame, the cloud, an external drive, etc. Before digital photography, we did not think in terms of images but rather of specific photographs, which changed with age and could be damaged or lost altogether. Consequently, our relationship to the artifact has changed. Roland Barthes couldn’t bring himself to include the lone photograph he possessed of his mother in his famous study of photography, Camera Lucida, published in 1980. The photograph was too private, his relationship to it too intimate. This attitude toward a photographic image is practically unintelligible today. Or, alternatively, imagine the emotional distance between tearing a photograph and deleting an image. This is an important point to grasp because Illich is going to suggest that there’s another analogous operation happening in the 12th century as the individual detaches from their community. But we’ll come back to that in the last section.
Some of these innovations also made it easier to read the book silently—something that was unusual in the scriptoriums of early medieval monasteries, which could be rather noisy places. And, of course, this reminds us that the transition from orality to literacy was not accomplished by the flipping of a switch. As Illich puts it, the monkish book was still understood as recorded sound rather than as a record of thought. Just as we thought of the web in terms of older textual technologies and spoke of web pages and scrolling, readers long experienced the act of reading by reference to oral forms of communication.
So, here is one of a handful of summary paragraphs where Illich lays out his case:
This [technical] breakthrough consisted in the combination of more than a dozen technical inventions and arrangements through which the page was transformed from score to text. Not printing, as is frequently assumed, but this bundle of innovations, twelve generations earlier, is the necessary foundation for all stages through which bookish culture has gone since. This collection of techniques and habits made it possible to imagine the ‘text’ as something detached from the physical reality of a page. It both reflected and in turn conditioned a revolution in what learned people did when they read — and what they experienced reading to mean.
Elsewhere, he wrote,
The text could now be seen as something distinct from the book. It was an object that could be visualized even with closed eyes [….] The page lost the quality of soil in which words are rooted. The new text was a figment on the face of the book that lifted off into autonomous existence [….] Only its shadow appeared on the page of this or that concrete book. As a result, the book was no longer the window onto nature or god; it was no longer the transparent optical device through which a reader gains access to creatures or the transcendent.
I’m going to resist the temptation to meticulously unpack for you every one of those claims, but the last sentence deserves a bit of attention, particularly when coupled with the last sentence of the previously quoted paragraph. Together they remind us that what we think we’re doing when we’re reading evolves over time. We don’t read with the same set of assumptions, habits, and expectations that the medieval monks or modern scholastic readers brought to the text. As Illich put it in the early 1990s, “Quite recently reading-as-a-metaphor has been broken again.” And a little further on, “The book has now ceased to be the root-metaphor of the age; the screen has taken its place. The alphabetic text has become but one of many modes of encoding something, now called ‘the message.’”
Part of the charm of In the Vineyard of the Text lies in its careful attention to what monastic readers thought they were doing when they read a book, and not just a sacred book. The book was a source of wisdom and a window onto the true order of things. Through it the reader made contact not with the thoughts of a person but with reality itself. The reader’s vision, conceived of as a searchlight emanating from the eyes, searched the book, often an illuminated manuscript, for the light of truth. In the book, the reader sought to order their soul. “‘To order,’” as Illich observed, “means neither to organize and systematize knowledge according to preconceived subjects, nor to manage it. The reader’s order is not imposed on the story, but the story puts the reader into its order. The search for wisdom is a search for the symbols of order that we encounter on the page.” The presumption of order makes for a striking contrast to the experience of a hot mess, of course, and the search for wisdom is rather different from what we do when we are doing what we call searching.
The reader sought ultimately to order his soul in accord with the order of things he discovered through the book. But to do so, the reader had to first be trained in the arts of memory. The student would, according to the ancient art, fashion within himself a memory palace in which to store and readily access the wisdom he encountered in the book. Interestingly, search at this stage was primarily a mental technique designed to readily access the treasures kept in the mind’s storehouse. As St. Augustine, a trained rhetorician undoubtedly adept at the arts of memory, put it nearly 700 years earlier, “I come to fields and vast palaces of memory, where are the treasures of innumerable images of all kinds of objects brought in by sense-perception.”
Monastic reading, as Illich describes it, was taken to be “an ascetic discipline focused on a technical object.” That technical object was the book as it was known prior to the twelfth century. It was a tool through which the reader’s soul was finely tuned to the true order of things. This approach to reading was not sustainable when technical innovations transformed the experience of the book into that of a scholastic text read for the sake of engaging with the recorded thoughts of an author.
Perhaps you’ve gotten this far and are wondering what exactly the point of all of this might be. To be honest, I delight in this kind of encounter with the past for its own sake. But I also find that these encounters illuminate the present by giving us a point of contrast. The meaning and significance of contemporary technologies become clearer, or so it seems to me, when I have some older form of human experience to hold them up against. This is not to say that one form is necessarily better than the other, of course. Only that the nature of each becomes more evident.
It’s striking, for instance, that in another age there existed the presumption of an order of things that could be apprehended through books—not as repositories of information and arguments but as windows onto the real—and that the role of reading was to help order the soul accordingly. That the meaning of what, on the surface, appears to be the very same activity could alter so dramatically is remarkable. And it prompts all sorts of questions for us today. What do we imagine we are doing when we are reading? How have our digital tools—the ubiquity of the search function, for example—changed the way we relate to the written word? Is there a relationship between our digital databases and the experience of the world as a hot mess? How has the digital environment transformed not only how we encounter the word, but our experience of the world itself?
I’d say at this juncture that we are reeling under the burdens of externalized memory. Hugh’s students labored to construct elaborate interior structures to order their memories and enjoy ready access to all the knowledge they accumulated. And these imagined structures were built so as to mirror the order of knowledge. We do not strive to interiorize knowledge. We build external rather than internal archives. And we certainly don’t believe that interiorizing knowledge is a way of fitting the soul to the order of things. In part, because the very idea of an order of things is implausible to those of us whose primary encounter with the world is mediated by massive externalized databases of variously coded information.
There comes a point when our capacity to store information outpaces our ability to actively organize it, no matter how prodigious our effort to do so. Consider our collections of digital images. No one pretends to order their collections. I’m not sure what the number might be, maybe 10,000, at which our efforts to organize images falter. Of course, our apps do this for us. They can self-sort by a number of parameters: date, file size, faces, etc. And Apple or Google Photos offer a host of other algorithmically curated collections to make our image databases meaningful. We outsource not only remembering but also the ordering function.
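For the programmatically inclined, here is a minimal sketch, in Python, of the sort of metadata bucketing a photo app performs silently on our behalf. The folder name and the reliance on file modification times are my own illustrative assumptions; real apps draw on far richer signals (EXIF dates, faces, locations) and far more sophisticated models.

```python
from collections import defaultdict
from datetime import datetime
from pathlib import Path

def bucket_by_month(folder: str) -> dict[str, list[Path]]:
    """Group image files by the year-month of their last-modified time.

    A crude stand-in for the richer signals (EXIF data, faces,
    locations) that photo apps sort on automatically.
    """
    buckets: dict[str, list[Path]] = defaultdict(list)
    for path in Path(folder).glob("*.jpg"):
        taken = datetime.fromtimestamp(path.stat().st_mtime)
        buckets[taken.strftime("%Y-%m")].append(path)
    return buckets

# Example: report how many images fall into each month.
# "photos" is a hypothetical folder of images.
for month, images in sorted(bucket_by_month("photos").items()):
    print(month, len(images))
```

The point of the sketch is how little of “ordering” in the older sense survives here: the grouping is automatic and indifferent, and it asks nothing of us.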
Bearing in mind the current length of this post, let me draw things to a close by briefly taking up the other salient feature of Illich’s discussion: the relationship between the emergence of the text and the emergence of the modern self.
“I am not suggesting that the ‘modern self’ is born in the twelfth century, nor that the self which here emerges does not have a long ancestry,” Illich remarks at one point. But he certainly believes something of significance happens then and that it bears some relationship to the emergence of the text. As he goes on to say,
We today think of each other as people with frontiers. Our personalities are as detached from each other as are our bodies. Existence at an inner distance from the community, which the pilgrim who set out to Santiago or the pupil who studied the Didascalicon had to discover on their own, is for us a social reality, something so obvious that we would not think of wishing it away. We were born into a world of exiles …
What Illich is picking up on here is the estrangement of the self from the community that was analogous in his view to the estrangement of the text from the book. “What I want to stress here,” Illich claims at one point, “is a special correspondence between the emergence of selfhood understood as a person and the emergence of ‘the’ text from the page.”
Illich goes on at length about how Hugh of St. Victor likened the work of the monk to a kind of intellectual or spiritual pilgrimage through the pages of the book. Notice the metaphor. One did not search a text, but rather walked deliberately through a book. At one point Illich writes, “Modern reading, especially of the academic and professional type, is an activity performed by commuters or tourists; it is no longer that of pedestrians and pilgrims.”
So, to summarize this point, as the text detaches from the book, or the image from the photograph, so the self detaches from the community. There is one point, though, at which I think I might build on Illich’s analysis. Illich believed that in the 12th century the self begins to detach from the community. I wonder whether there is not a case to be made that the self was also detaching from the body. I think, for example, of the mind/body or soul/body dualism that characterizes the tradition of Cartesian thought. It’s tempting to imagine that this dualism was just a standard feature of medieval thought as well. But I’m not sure this is true. Thomas Aquinas, born roughly 80 years after Hugh, could still write, “Since the soul is part of the body of a human being, the soul is not the whole human being and my soul is not I.” There’s a lot one could get into here, of course, but it’s worth considering not only the wilder transhumanist dreams of immortality achieved by uploading our minds to the machine but also how we’ve dispersed the self through digital media. The self is no longer rooted in the experience of the body. It lives in various digitally mediated manifestations and iterations. As such it is variously coded and indexed. We can search not only the text but the archives of the self. And perhaps, like other forms of information that have lost their body, it becomes unmanageable. Or, at least, it takes on that aspect when we understand it through the primary experience of a digitally dispersed self. While the early instantiations of social media were characterized by the well-managed performance, more recent iterations seem to scoff at the idea. Better to simply embrace the hot mess.
A week after the fact, here is the audio version of the last installment: “Notes from the Metaverse.” Ordinarily, the audio version accompanies the essay, but in this case you’re getting it a bit later. Nothing new in this version, so if you’ve already read the essay, feel free to disregard. But for those of you who do prefer listening, here you go, with my apologies for not getting this out sooner.
I will add in this short note that since I posted “Notes from the Metaverse” last week, Facebook and Ray-Ban announced the release of a set of smart glasses. They don’t do much; at least, they appear to have disappointed some. The camera might appear to be their main feature, but, in fact, I’d argue that their main feature is that they manage to look pretty normal and perhaps even stylish depending on your tastes. This, it seems to me, is the point at this juncture. Smart glasses, especially their camera, need to be normalized before they can become critical metaverse infrastructure. In that light, it’s worth noting, too, that the glasses bear absolutely no Facebook branding. We’ll see if they fare any better with the public than Google Glass did several years ago. Needless to say, much of the same criticism about the way that a camera enables surreptitious recording, thus more completely objectifying others as fodder for the digital spectacle, applies here as well.
Cheers,
Michael
Welcome to The Convivial Society. In this installment, you’ll find some thoughts on the cultural consequences of digital media. A big chunk of new readers means that I’m being a bit more careful not to presume familiarity with themes that I’ve written about before, so some of what follows may be old news for some of you. Either way, you can file this installment under my ongoing effort to think about the social and political fallout of digital media. And it really is an effort to think out loud. Your feedback and even pushback are welcome.
“I can’t express how useless these old school Sunday shows are,” Sean Illing recently tweeted, “and it blows my mind that a single human person still watches any of them.” Illing is here referring to the gamut of Sunday morning political talk shows—This Week, Meet the Press, etc.—and I can’t imagine that many people would step up to bat for the value of these shows. But their waning status may be worth a moment’s reflection as an entry point into a discussion about the ongoing digitization of our media ecosystem. And, to be clear, I’m not thinking here of “the media,” as in journalists, newspapers, and cable talk shows. What I have in mind is media as the plural of a medium of communication such as writing, print, telephony, radio, television, etc. Sometimes the advent of a new medium of communication can have striking social and psychic consequences. Here’s how Neil Postman once put it:
A new medium does not add something; it changes everything. In the year 1500, after the printing press was invented, you did not have old Europe plus the printing press. You had a different Europe. After television, America was not America plus television. Television gave a new coloration to every political campaign, to every home, to every school, to every church, to every industry, and so on.
So, with this approach in mind, here was my initial response to Illing’s tweet. Regarding the Sunday morning shows, I suggested that “their use, such as it is, is as content or fodder for the newer medium, rather than as an important medium of information in their own right. The assumptions and values of their form are irrelevant compared to the assumptions and values of the new form into which they are absorbed.”
This is basically just media ecology 101. For those of you still getting your bearings around here, I’ll mention that along with channelling the strain of old tech criticism that includes Lewis Mumford, Jacques Ellul, Ivan Illich, and the like, I also occasionally slip into media ecology mode. Marshall McLuhan, the aforementioned Neil Postman, Walter Ong, and Harold Innis are some of the leading lights, and you can read more about media ecology on the website of the Media Ecology Association. The few lines above from Postman give you a good sense of the approach. The general idea is that a medium of communication matters as much as, if not sometimes more than, the content that is being communicated through it. McLuhan, in another memorable formulation, put it this way: “Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the ‘content’ of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.”
This can be a counter-intuitive claim if we’re used to thinking about a medium of communication (and technology in general) as an essentially neutral form. Spoiler alert: It’s not. As McLuhan famously put it, “The medium is the message,” which, as he went on to explain, means, for example, that “the ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.” A change, I want to stress, that is largely irrespective of the nature of the content that the medium is used to communicate.
Media ecology, then, offers a helpful set of critical tools we can apply to questions about digital media and the public sphere. So, for example, in saying that the Sunday talk shows are fodder for the new medium, I’m recalling McLuhan’s observation that “the ‘content’ of any medium is always another medium.” In this case, digital forms of communication constitute the newer medium. And the point applies not only to the Sunday shows, but really to all pre-digital media. Television, film, radio, print — all are taken up and absorbed by digital media either because they themselves become digital artifacts (digital files rather than, say, rolls of film) or because they become the content of our digital media feeds.
This means, for example, that a movie is not only something to be taken whole and enjoyed on its own terms; it is also going to be dissected and turned into free-floating clips and memes and gifs. What’s more, the meaning and effect of these clips, memes, and gifs may very well depend more on their own formal structure and their use in our feeds than on their relation to the film that is their source. It means, too, both that the nature of the television show changes when it is made for digital contexts and that we watch Ted Lasso, in part, so that we can post about it. And, it means that every other Sunday morning or so, Chuck Todd is trending on Twitter not because he is an authoritative and influential media figure, but because he is not. It means the more savvy guests know the point of their appearance is not to inform or debate, but to generate a potentially viral clip, or, alternatively, to assiduously avoid doing so. It means, ultimately, that the habits, standards, and norms of the older medium are displaced by the habits, standards, and norms of the newer medium. However seriously the Sunday morning hosts take their job, it just doesn’t matter in the way they think it does. It all starts to feel a bit like a modern version of Don Quixote. The ostensible protagonists have not realized that the world has moved on, and they unwittingly strike a tragicomic note. Woe to those who fail to grasp this point, and woe to all of us when the point is best grasped by the most unscrupulous, as is too often the case.
In other words, as a good friend of mine is fond of asking, “what frames what?” It’s a good diagnostic question, and it applies nicely here. So we might ask, “What frames what, the televisual medium or the digital?” I’d argue that the answer is pretty straightforward: increasingly, digital media frames all older forms, and it is the habits, assumptions, and total effect of the digital medium that matters most. I won’t pretend to know all of what this will mean, of course. Digital media is a complex, multi-faceted, ever-shifting phenomenon, and we’re still sorting out the “symbolic fallout.” But I can tell you that many of the disorders of our moment, particularly with regard to what we sometimes quaintly call the public sphere, will make more sense if we see them, at least in part, as a function of this ongoing transition into a digital media ecosystem.
Let me take another angle on this theme by commenting on an observation Ari Schulman, the editor of The New Atlantis, recently made when he tweeted that he was “struck by the odd sense that America is post-pseudo-event. If the same disastrous scenes of Afghanistan pullout happen ten years ago, you can practically feel them preparing the reels for a Newseum exhibit. Now events are events, yet it doesn't feel like a relief.”
I think I understand what Schulman is getting at here. It’s the sense, especially pronounced the more online one happens to be, that events can’t sustain the kind of attention that was once lavished on them in an older televisual media environment. Or, to put it another way, I might say that it’s the sense that nothing seems to get any durable traction in our collective imagination. I’ll provisionally suggest that this is yet another example of the medium being the message. In this case, I would argue that the relevant medium is the social media timeline. The timeline now tends to be the principal point of mediation between the world out there and our perception of it. It refracts the world, rather than reflecting it. And its relentless temporality structures our perception of time and with it our sense of what is enduring and significant. And, at the risk of becoming pedantic, let me say again this happens regardless of the nature of the content we encounter or even how that content is moderated. But let’s come back to the idea of being post-pseudo-event.
“Pseudo-event” was a term coined by Daniel Boorstin in his 1962 book, The Image: A Guide to Pseudo-events in America. They were a function of a regime of the imagination ruled by the image, or, more specifically, the manufactured image. According to Boorstin, the pseudo-event “is not spontaneous, but comes about because someone has planned, planted, or incited it.” It was Boorstin, too, who in this same book gave us the category of someone who is famous for being famous, or, in his words, “a person who is known for his well-knownness,” a similarly artificial and vacuous dynamic. An event, by contrast, has a certain integrity to it; it lacks the artificial character of the pseudo-event that exists chiefly to be noted and commented upon.
So, are we, as Schulman suggested, post-pseudo-event? Yes and no, I’d say, but first a little bit of clarification may be in order. It’s important to note that on the ground, of course, the withdrawal from Afghanistan is not a pseudo-event; it is the site of very real danger, desperation, and suffering. It’s also worth distinguishing between the cultural power of the image and a pseudo-event. Images are the currency of the pseudo-event, but an event captured by an image does not thereby necessarily become a pseudo-event.
The questions we’ve been hovering around have to do with how different media of documentation and dissemination translate a concrete event into the realm of symbolic cultural exchange. The power of a medium lies in this work of mediation, which in turn shapes the public sphere and the experience of the self in relation to it. It is in this way that the advent of new media technology—from writing to print and then radio, television, and the internet—can have far-reaching consequences.
So let’s come back to Schulman’s observation. I’d say that we are not quite post-pseudo-event. In fact, from a certain perspective they have multiplied exponentially in the so-called attention economy. But something is different, and one way to see this is to recognize what has happened to the image, which, again, is the substrate of the pseudo-event. Simply put, images aren’t what they used to be, and, as a consequence, the character of the pseudo-event has changed as well.
As we approach the 20th anniversary of the September 11th attacks, I’m tempted to suggest that the image of the towers burning might be the last example of an iconic public image with longstanding cultural currency. There are undoubtedly some more recent examples of powerful and memorable images, but their power is subverted by the digital media environment that now frames them. I’m tempted to argue that 9/11 marked the beginning of the end for the age of the image in the sense that Boorstin meant it. Specifically, it is the end of the age of the manufactured image that speaks compellingly to a broad swath of society.
If this is generally near the mark, then one explanation is simply that not long after 9/11, the image economy began to collapse when the market was flooded with digital representations. As a simple thought experiment, consider how different the documentary evidence of 9/11 would be if the event had occurred ten years later after smartphones had saturated the market. There’s another observation by McLuhan that applies here. McLuhan, together with his son Eric, developed a tetrad of media effects as a useful tool for analyzing the consequences of new technologies. The four elements of the tetrad were enhancement, retrieval, obsolescence, and reversal. Or, to put these as questions:
What does a technology enhance?
What does it make obsolete?
What does it retrieve that had been obsolesced earlier?
What does it flip into when pushed to extremes?
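Since the tetrad functions as an analytic checklist, one can even render it as a simple data structure. What follows is a toy sketch along those lines; the example answers are my own glosses on the digital image, not McLuhan’s.

```python
from dataclasses import dataclass, fields

@dataclass
class Tetrad:
    """McLuhan's four questions, held as a structured checklist."""
    medium: str
    enhances: str
    obsolesces: str
    retrieves: str
    reverses_into: str

# Applying the tetrad to the digital image; the answers are
# illustrative glosses, not McLuhan's own.
digital_image = Tetrad(
    medium="the digital image",
    enhances="reproduction and circulation at near-zero cost",
    obsolesces="the scarce, authoritative press photograph",
    retrieves="folk image-making and remix",
    reverses_into="the meme, as proliferation erodes the image's power",
)

for f in fields(digital_image):
    print(f"{f.name}: {getattr(digital_image, f.name)}")
```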
In response to Ari’s observation, the fourth of those effects came to mind. “When pushed to the limits of its potential,” McLuhan claimed, “the new form will tend to reverse what had been its original characteristics.” In this case, the massive proliferation of images leads to a degradation of the cultural power of the image. Boorstin himself noted the roots of the pseudo-event in the “graphic revolution” of the late 19th century, and, indeed, it was a revolution powered by new tools and techniques. But the ability to generate and circulate digital images represents yet another revolution in scale, although scale is not the only factor.
One characteristic of the digital image is the ease with which it can be not only reproduced and disseminated but also manipulated. I don’t mean this only in the more nefarious sense; I mean simply that just about anyone with internet access and a computer can set about fiddling with an image in countless ways, yielding everything from artifacts of what used to be called re-mix culture to the more notorious case of deepfakes. (Beyond this, of course, there is the more recent development of synthetic images generated by a variety of high-powered computing processes.)
With the same “graphic revolution” that Boorstin referenced in mind, Walter Benjamin argued in a well-known early 20th-century essay that the work of art lost its aura, or a kind of authority grounded in its unique materiality, in the age of its mechanical reproducibility. The Mona Lisa, in other words, loses something of its cultural power when any of us can slap a passable reproduction over our toilet if we so desire. Perhaps we can extend this by saying that, in turn, the image loses its own cultural standing in the age of its digital manipulability. Mechanical reproduction, photography especially, collapsed the distance necessary for a work of art to generate its aura, its commanding presence. Digital manipulability has had an analogous effect on the image. It’s no longer received from a class of professional storytellers who have monopolized the power to produce and interpret the symbolic currency of cultural exchange. The image-making tools have been democratized. The image itself has been demystified. Every image we encounter now invites us to manipulate it to whatever end strikes our fancy.
I took to Twitter while I was writing this to do a little bit of unscientific research about the power of images. I asked what the more recent examples of iconic images might be, suggesting that I wondered whether there were such images. I don’t remember the exact wording because I deleted the tweet after it was quote tweeted into the white supremacist corners of the platform. But, while it was up, the responses were instructive. While some genuine examples came up, most notably, I think, the image of the Syrian child who tragically drowned fleeing his homeland, most were, in fact, memes or decidedly ironic images.
It’s probably too simplistic to put it this way, but perhaps we might say that the age of the image has yielded to the age of the meme. Again, this is not to say that powerful, moving images no longer appear, but they are framed by the ethos of digital media, which has, presently at least, given us the meme as its ideal form. The image could sustain a degree of earnestness; the meme is much too self-aware for that. The image could inspire; the meme, powerful in its own right, cannot. The image could be managed; the meme resists all such efforts. And as the image has yielded to the meme, the pseudo-event now manifests chiefly as the Discourse—ceaseless, self-referential, demoralizing, and ultimately untethered from the events themselves.
Ultimately, the image as it fed the pseudo-event became a tool to manage public opinion, and now it is broken. Some people get this, others obviously do not.
“What is fundamental to a convivial society is not the total absence of manipulative institutions and addictive goods and services, but the balance between those tools which create the specific demands they are specialized to satisfy and those complementary, enabling tools which foster self-realization. The first set of tools produces according to abstract plans for men in general; the other set enhances the ability of people to pursue their own goals in their unique way.”
— Ivan Illich, Tools for Conviviality
Welcome to the Convivial Society, especially the many of you for whom this is the first actual installment to hit your inbox. If you signed up in the last week or so, you may want to check out the brief orientation to the newsletter I sent out recently to new readers. Below you’ll find a full installment of the newsletter, which contains an essay followed by links to a variety of items, some of them with a bit of additional commentary from me, and a closing note. Read on. Share promiscuously.
In lines he composed for a play in the mid-1930s, T. S. Eliot wrote of those who
“constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.”
That last line has always struck me as a rather apt characterization of a certain technocratic impulse, which presumes that techno-bureaucratic structures and processes can eliminate the necessity of virtue, or maybe even human involvement altogether. We might just as easily speak of systems so perfect that no one will need to be wise or temperate or just. Just adhere to the code or the technique with unbending consistency and all will be well.
This dream, as Eliot put it, remains explicitly compelling in many quarters. It is also tacitly embedded in the practices fostered by many of our devices, tools, and institutions. So it’s worth thinking about how this dream manifests itself today and why it can so easily take on a nightmarish quality.
In Eliot’s age, increasingly elaborate and Byzantine bureaucracies automated human decision making in the pursuit of efficiency, speed, and scale, thus outsourcing human judgment and, consequently, responsibility. One did not require virtue or good judgment, only a sufficiently well-articulated system of rules. Of course, under these circumstances, bureaucratic functionaries might become “papier-mâché Mephistopheles,” in Conrad’s memorable phrase, and they may abet forms of what Arendt later called banal evil. But the scale and scope of modern societies also seem to require such structures in order to operate reasonably well, although this is certainly debatable. Whether strictly necessary or not, these systems introduce a paradox: in order to ostensibly serve human society, they must eliminate or displace elements of human experience. Of course, what becomes evident eventually is that the systems are not, in fact, serving human ends, at least not necessarily so.
To take a different class of example, we might also think of the modern fixation with technological fixes to what may often be irreducibly social and political problems. In a prescient 2020 essay about the pandemic, Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” No need for good judgment, responsible governance, self-sacrifice, or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.
Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Most recently, I encountered the case of NarxCare reported in Wired. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis, whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. You can read the article for further details, which are both fascinating and disturbing, but here’s the pertinent part for my purposes:
“Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.”
This is an obviously complex and sensitive issue, but it’s hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility, and CYA dynamics that have characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim that “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it.
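NarxCare’s actual model is proprietary, so any reconstruction here would be guesswork. But the structural worry is easy to make concrete with a toy score, sketched below: heterogeneous signals collapse into one number that explains nothing about itself. Every signal and weight in this sketch is invented for illustration.

```python
# A toy composite "risk score", emphatically NOT Appriss's actual
# model, which is proprietary. The point is structural: many signals
# collapse into a single number with no accompanying explanation.
WEIGHTS = {
    "num_prescribers": 0.4,            # hypothetical signals and weights,
    "num_pharmacies": 0.3,             # chosen arbitrarily for illustration
    "overlapping_prescriptions": 0.2,
    "out_of_state_fills": 0.1,
}

def risk_score(patient: dict[str, float]) -> float:
    """Collapse heterogeneous signals into one opaque figure."""
    return sum(weight * patient.get(signal, 0.0)
               for signal, weight in WEIGHTS.items())

# The clinician sees only the final number, not why it is high.
print(risk_score({
    "num_prescribers": 4,
    "num_pharmacies": 3,
    "overlapping_prescriptions": 1,
    "out_of_state_fills": 2,
}))
```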
This technocratic impulse is alive and well, and we’ll come back to it in a moment, but it occurs to me that we might also profitably invert Eliot’s claim and apply it to our digital media environment in which we experience systems so imperfect that it turns out everyone will need to be really, really good. Let me explain what I mean by this. The thought occurred to me when I read yet another tweet advocating for the cultivation of digital media literacy. You should know that, at one level, I think this is fine and possibly helpful under certain circumstances. However, I also think it underestimates or altogether ignores the non-intellectual elements of the problem. It seems unrealistic, for example, to expect that someone who is likely already swamped by the demands of living in a complex, fast-paced, and precarious social milieu will have the leisure and resources to thoroughly “do their own research” about every dubious or contested claim they encounter online, or to adjudicate the competing claims made by those who are supposed to know what they are talking about. There’s a lot more to be said about this dynamic, of course. It raises questions about truth, certainty, trust, authority, expertise, and more, but here I simply want to highlight the moral demands, because searching for the truth, or a sufficient approximation, is more than a merely intellectual activity. It involves, for example, humility, courage, and patience. It presumes a willingness to break with one’s tribe or social network with all the risks that may entail. In short, you need to be not just clever but virtuous, and, depending on the degree to which you live online, you would need to do this persistently over time, and, recently, of course, during a health crisis that has generated an exhausting amount of uncertainty and a host of contentious debates about private and public actions.
This is but one case, the one which initially led me to invert Eliot’s line. It doesn’t take a great deal of imagination to conjure up other similar examples of the kind of virtue our digital devices and networks tacitly demand of us. Consider the discipline required to responsibly direct one’s attention from moment to moment rather than responding with Pavlovian alacrity when our devices beckon us. Or the degree of restraint necessary to avoid the casual voyeurism that powers so much of our social media feeds. Or, how those same platforms can be justly described as machines for the inducement of petty vindictiveness and less-than-righteous indignation. Or, alternatively, as carefully calibrated engines of sloth, greed, envy, despair, and self-loathing. The point is not that our digital media environment necessarily generates vice; rather, it’s that it constitutes an ever-present field of temptation, which can require, in turn, monastic degrees of self-discipline to manage. I’m reminded, for example, of how years ago Evgeny Morozov described buying a timed safe in which to lock his smartphone, and how, when he discovered he could unscrew the timing mechanism, he locked the screwdriver in there, too. Under certain circumstances and for certain people, maintaining a level of basic human decency, or even psychic well-being, may feel like an exercise in moral sainthood. Perhaps this explains the recent interest in stoicism, although we do well to remember Pascal’s pointed criticism of the stoics: “They conclude that we can always do what we can sometimes do.”
We alternate, then, between environments that seek to render virtue superfluous and environments that tacitly demand a high degree of virtue in order to operate benignly. Both engender their own set of problems, and, not surprisingly, there’s a reciprocal relationship between these two dynamics. Failure to exhibit the requisite virtue creates a demand for the enhancement of rule-based systems to regulate human behavior. Speech on social media platforms is a case in point that comes readily to mind. The scale and speed of communication on social media platforms generate infamously vexing issues related to speech and expression, which are especially evident during a volatile election season or a global pandemic. These issues do not, in my view, admit of obvious solutions beyond shutting down the platforms altogether. That not being a presently viable option, companies and law makers are increasingly pressured to apply ever more vigilant and stringent forms of moderation, often with counterproductive results. This is yet another complex problem, of course, but it also illustrates the challenge of governing by codes that seek to manage human behavior by generating rules of conduct with attendant consequences for their violation, which, again, may be the only viable way of governing human behavior at the numeric, spatial, and temporal scale of digital information environments. In any case, the impulse is to conceive of moral and political challenges as technical problems admitting of engineered solutions.
To be clear, it’s not that codes and systems are useless. They can have their place, but they require sound judgment in their application, precisely to the degree that they fail to account for the multiplicity of meaningful variables and goods at play in human relations. Trouble arises when we are tempted to make the code and its application coterminous, which would require a rule to cover every possible situation and extenuating circumstance, ad infinitum. This is the temptation that animates the impulse to apply a code with blind consistency as if this would be equivalent to justice itself. The philosopher Charles Taylor has called this tendency in modern liberal societies “code fetishism,” and it ought to be judiciously resisted. According to Taylor, code fetishism “tends to forget the background which makes sense of any code—the variety of goods which rules and norms are meant to realize—as well as the vertical dimension which arises above all of these.” Code fetishism in this sense is not unlike what Jacques Ellul called technique, a relentless drive toward efficiency that eventually became an end in itself having lost sight of the goods for the sake of which efficiency was pursued in the first place.
As an aside, I’ll note that code fetishism may be something like a default setting for modern democratic societies, which have a tendency to tilt toward technocracy (while, of course, also harboring potent counter-tendencies). The tilting follows from a preference for proceduralism, or the conviction that an ostensibly neutral set of rules and procedures are an adequate foundation for a just society, particularly in the absence of substantive agreement about the nature of a good society. In this way, there is a longstanding symbiosis between modern politics and modern technology. They both traffic in the ideal of neutrality—neutral tools, neutral processes, and neutral institutions. It should not be surprising, then, that contemporary institutions turn toward technological tools to shore up the ideal of neutrality. The presumably neutral algorithm will solve the problem of bias in criminal sentencing or loan applications or hiring, for example. And neither should it be surprising to discover that what we think of as modern society, built upon this tacit pact between ostensibly neutral political and technological structures, begins to fray and lose its legitimacy as the supposed neutrality of both becomes increasingly implausible. (Okay, I realize this paragraph calls for a book of footnotes, but it will have to do for now.)
As it turns out, Charles Taylor also wrote the Foreword to Ivan Illich’s Rivers North of the Future. (And—caveat lector, new readers—at the Convivial Society, we eventually come around to Illich at some point.) In his Foreword, Taylor explored Illich’s seemingly eccentric arguments about the origins of modernity in the corruption of the Christian church. It’s an eccentric but compelling argument; however, I’ll leave its merits to one side here in order to home in on Taylor’s comments about code fetishism, or, to recall where we began, the impulse to build systems so perfect no one will need to be good.
[There’s an excellent discussion of Taylor, code fetishism, and Illich in Jeffrey Bilbro’s wonderful guide to the work of Wendell Berry, Virtues of Renewal: Wendell Berry’s Sustainable Forms.]
“We think we have to find the right system of rules, of norms, and then follow them through unfailingly,” Taylor wrote. “We cannot see any more,” he continued, “the awkward way these rules fit enfleshed human beings, we fail to notice the dilemmas they have to sweep under the carpet [….]”
These codes often spring from decent motives and good intentions, but they may be all the worse for it. “Ours is a civilization concerned to relieve suffering and enhance human well-being, on a universal scale unprecedented in history,” Taylor argued, “and which at the same time threatens to imprison us in forms that can turn alien and dehumanizing.” “Codes, even the best codes,” Taylor concludes, “can become idolatrous traps that tempt us to complicity in violence.” Or, as Illich argued, if you forget the particular, bodily, situated context of the other, then the freedom to do good by them exemplified in the story of the good Samaritan can become the imperative to impose the good as you imagine it on them. “You have,” as Illich bluntly put it, “the basis on which one might feel responsible for bombing the neighbour for his own good.”
In Taylor’s reading, Illich “reminds us not to become totally invested in the code … We should find the centre of our spiritual lives beyond the code, deeper than the code, in networks of living concern, which are not to be sacrificed to the code, which must even from time to time subvert it.” “This message,” Taylor acknowledges, “comes out of a certain theology, but it should be heard by everybody.” And, for what it’s worth, I second Taylor on that note. My chief aim in this post has been to suggest that the code fetishism Taylor described manifests itself both intellectually and materially. Which is to say that it can be analyzed as a principle animating formal legal codes, and it can be implicit in our material culture, informing the technologies that shape our habits and assumptions. To put it another way, dealing with humanity’s imperfections through systems, tools, and techniques is a longstanding strategy. It has its benefits, but we need to be mindful of its limitations, especially when ignoring those limitations can lead to demoralizing and destructive consequences.
As I was wrapping up this post, I caught a tweet from Timothy Burke that rather nicely sums this up, and I’ll give him the last word. Commenting on an article arguing that “student engagement data” should replace student recommendations, Burke observed, “This is one of those pieces that identifies a problem that's rooted in the messy and flawed humanity of the systems we make and then imagines that there is some metric we could make that would flush that humanity out--in order to better judge some kind of humanity.”
It will be worth pondering this impulse to alleviate the human condition by eliminating elements of human experience.
News and Resources
* Clive Thompson (I almost typed Owen!) on “Why CAPTCHA Pictures Are So Unbearably Depressing”: “They weren’t taken by humans, and they weren’t taken for humans. They are by AI, for AI. They thus lack any sense of human composition or human audience. They are creations of utterly bloodless industrial logic. Google’s CAPTCHA images demand you to look at the world the way an AI does.”
* And here is Thompson again on productivity apps in an article titled “Hundreds of Ways to Get S#!+ Done—and We Still Don’t”: “To-do lists are, in the American imagination, a curiously moral type of software. Nobody opens Google Docs or PowerPoint thinking ‘This will make me a better person.’ But with to-do apps, that ambition is front and center. ‘Everyone thinks that, with this system, I’m going to be like the best parent, the best child, the best worker, the most organized, punctual friend,’ says Monique Mongeon, a product manager at the book-sales-tracking firm BookNet and a self-admitted serial organizational-app devotee. ‘When you start using something to organize your life, it’s because you’re hoping to improve it in some way. You’re trying to solve something.’”

There’s a lot I’m tempted to say in response to the subject of this piece. I’m reminded, for example, of a quip from Umberto Eco, “We make lists because we don’t want to die.” I think, too, of Hartmut Rosa describing how modernity turns the human experience of the world into “a series of points of aggression.” And then all sorts of Illichian responses come to mind. At one point Thompson mentions how “quite apart from one’s paid toil, there’s been an increase in social work—all the messaging and posts and social media garden-tending that the philosopher and technologist Ian Bogost calls ‘hyperemployment,’” and I’m immediately reminded of what Illich called shadow work, a “form of unpaid work which an industrial society demands as a necessary complement to the production of goods and services.” So here we are dealing with digitized shadow work, except that we’re now serving an economy based on the accumulation of data. And, finally, I’m tempted to ask, quite seriously, why anyone should think that they need to be productive at all. Of course, I know some of the answers that are likely to be given, that I would give. But, honestly, that’s just the sort of question that I think is worth taking seriously and contemplating. What counts as productivity anyway? Who defines it? Who imposes the standard? Why have I internalized it? What is the relationship among productivity and purpose and happiness? The problem with productivity apps, as Thompson suggests at one point, is the underlying set of assumptions about human well-being and purpose that are themselves built into the institutions and tools of contemporary society.
* Speaking of shadow work, here is a terrific piece on some of the lesser known, but actually critical themes in Ivan Illich’s later work written by Jackie Brown and Philippe Mesly for Real Life: “Meanwhile, the economy’s incessant claims on our time and energy diminishes our engagement in non-commodified activities. According to Illich, it is only the willing acceptance of limits — a sense of enoughness — that can stop monopolistic institutions from appropriating the totality of the Earth’s available resources, including our identities, in their constant quest for growth.”
* From an essay by Shannon Vallor on technology and the virtues (about which she quite literally wrote the book): “Humanity’s greatest challenge today is the continued rise of a technocratic regime that compulsively seeks to optimise every possible human operation without knowing how to ask what is optimal, or even why optimising is good.”
* Thoughtful piece by Deb Chachra on infrastructure as “Care at Scale”: “Our social relationships with each other—our culture, our learning, our art, our shared jokes and shared sorrow, raising our children, attending to our elderly, and together dreaming of our future—these are the essence of what it means to be human. We thrive as individuals and communities by caring for others, and being taken care of in turn. Collective infrastructural systems that are resilient, sustainable, and globally equitable provide the means for us to care for each other at scale. They are a commitment to our shared humanity.”

I confess, however, that I did quibble with this line: “Artificial light compensates for our species’ poor night vision and gives us control over how we spend our time, releasing us from the constraints of sunrise and sunset.” Chiefly, perhaps, with the implications of “control” and “constraints.” Nonetheless, this was in many ways a model for how to make a public case for moral considerations with regard to technical systems.
* Podcast interview with Zachary Loeb on “Tech criticism before the Techlash” (which is the best tech criticism), focusing on Lewis Mumford and Joseph Weizenbaum. Loeb knows the tradition well, and I commend his work.
* A 2015 piece from Adam Elkus exploring the relationship between algorithms and bureaucracies: “If computers implementing some larger social value, preference, or structure we take for granted offends us, perhaps we should do something about the value, preference, or structure that motivates the algorithm.”
* An excerpt in Logic from Predict and Surveil: Data, Discretion, and the Future of Policing by Sarah Brayne looking at the use of Palantir in policing: “Because one of Palantir’s biggest selling points is the ease with which new, external data sources can be incorporated into the platform, its coverage grows every day. LAPD data, data collected by other government agencies, and external data, including privately collected data accessed through licensing agreements with data brokers, are among at least nineteen databases feeding Palantir at JRIC. The data come from a broad range of sources, including field interview cards, automatic license plate readings, a sex offender registry, county jail records (including phone calls, visitor logs, and cellblock movements), and foreclosure data.”
* With data collection, facial-recognition technology, and questions of bias in mind, consider this artifact discussed in a handout produced by Jim Strickland of the Computer History Museum. It is a rail ticket with a primitive facial recognition feature: “punched photographs,” generic faces to be punched by the conductor according to their similarity to the ticket holder. These inspired the Hollerith machine, which was used to tabulate census data from 1890 to the mid-20th century.
* “Interior view of the Central Social Insurance Institution showing men working in mobile work stations used to access the card catalog drawers, Prague, Czechoslovakia.” Part of a 2009 exhibition, “Speed Limits.”
* A review of Shannon Mattern’s new collection of essays, A City Is Not a Computer: Other Urban Intelligences. Mattern’s work is always worth reading. If you recognize the name but are not sure why, it might be because I’ve shared her work in the newsletter on a number of occasions.
* “In Ocado's grocery warehouses, thousands of mechanical boxes move on the Hive.”
* For your amusement (I was amused, anyway), an historian of naval warfare rates nine Hollywood battle scenes for accuracy. The professor’s deadpan style makes the video.
Re-framings
— “Another Time,” by W. H. Auden (1940):
For us like any other fugitive,
Like the numberless flowers that cannot number
And all the beasts that need not remember,
It is today in which we live.

So many try to say Not Now,
So many have forgotten how
To say I Am, and would be
Lost, if they could, in history.

Bowing, for instance, with such old-world grace
To a proper flag in a proper place,
Muttering like ancients as they stump upstairs
Of Mine and His or Ours and Theirs.

Just as if time were what they used to will
When it was gifted with possession still,
Just as if they were wrong
In no more wishing to belong.

No wonder then so many die of grief,
So many are so lonely as they die;
No one has yet believed or liked a lie,
Another time has other lives to live.
— I stumbled upon an essay by Wendell Berry written circa 2002 titled “A Citizen’s Response to the National Security Strategy.” It struck me as a piece worth revisiting:
And so it is not without reason or precedent that a citizen should point out that, in addition to evils originating abroad and supposedly correctable by catastrophic technologies in “legitimate” hands, we have an agenda of domestic evils, not only those that properly self-aware humans can find in their own hearts, but also several that are indigenous to our history as a nation: issues of economic and social justice, and issues related to the continuing and worsening maladjustment between our economy and our land.
Thanks for reading this latest installment of the newsletter, which was, I confess, a bit tardy. As always, my hope is that you found something useful, encouraging, or otherwise helpful in the foregoing.
In case you’ve not seen it yet, my first essay with Comment Magazine is now online: “The Materiality of Digital Culture.” The key point more or less is this: “The problem with digital culture, however, is not that it is, in fact, immaterial and disembodied, but that we have come to think of it as such.”
By way of reminder, comments are open to paid subscribers, but all are always welcome to reach out via email. Depending on what happens to be going on when you do, I’ll try to get back to you in relatively short order.
Finally, I read a comment recently about the guilt someone felt unsubscribing from newsletters, especially if they thought the author would be notified. Life’s too short, folks. I’d be glad for you to give the newsletter time to prove itself, but I absolve you of any guilt should you conclude that this just isn’t for you. Besides, I’ve turned that notification off, as any sane person would. I’ll never know!
Trust you all are well.
Cheers,
Michael
I sent out an installment titled “Ill With Want” a couple of days ago, but was unable at the time to include the audio version. I know that many of you find the audio useful, so, now that I’ve been able to record it, I wanted to get that to you.
Nothing new here if you have already read the text version.
I will, however, take the opportunity to pass along a link to a podcast I recorded (not the one referenced in the essay) with Justin Murphy and Nina Power on the life and work of Ivan Illich. You can listen to it here. The occasion for the conversation was an upcoming 8-week course on Illich, which Power will be teaching in a couple of weeks. You can learn more about that course here. My thanks to both Justin and Nina for delightful conversation.
As per usual, my thanks for reading. If you find the newsletter valuable, consider becoming a subscriber or sharing the Convivial Society with others who may also find it helpful.
Welcome to the Convivial Society, a newsletter about technology and culture, broadly speaking. This post began as part of a recent feature I’ve titled “Is this anything?”: one idea for your consideration in less than 500 words. It spilled over 500 words, however, so just consider it a relatively brief dispatch. My writing is an exercise in thinking out loud, so I’m never quite sure where it will lead. Of course, I do hope my thinking out loud is helpful to more than just myself. Finally, the newsletter is public by design, but the support of those who are able to give it is encouraged and appreciated.
In ordinary conversation, I’d say that the word artificial tends to be used pejoratively. To call something artificial is usually to suggest its inferiority to some ostensibly natural alternative. For example, the boast “No artificial sweeteners!” comes to mind. And when applied to people, the word suggests a lack of authenticity or sincerity. But if we recall the word’s semantic relation to artifice or art, then we might come to see artificiality in a different light. In one sense, artificiality is just another way of speaking about what historian Thomas Hughes simply called the “human-built world.”
So, for example, in Orality and Literacy Walter Ong wrote, “To say writing is artificial is not to condemn it but to praise it.” A bit further on he added, “Technologies are artificial, but – paradox again – artificiality is natural to human beings. Technology, properly interiorized, does not degrade human life but on the contrary enhances it.”
I’d phrase that last line a bit differently. It would be better to say, “Certain technologies, properly interiorized, do not degrade human life but on the contrary enhance it.” In other words, technology is not one thing, and we should take care to discriminate. There are various forms of artificiality, and they are not all equal. Listening to a trained human voice, a musical instrument, a recording of a musical instrument, a recording of AI-generated sounds—these are all distinct activities. Alternatively, human artifice can work with humble regard for the non-human world or it can operate with what Albert Borgmann has called “regardless power,” that is, power that takes no thought of how it disrupts the non-human or, for that matter, the human world. Historically, there have been (and will be) a variety of techno-social configurations.
I often cite writers such as Jacques Ellul and Ivan Illich, who are often (poorly) read as reactionary romantics pining for some lost pre-technological idyll. Ellul, it is true, was rather explicit about the eclipse of nature in modern technological societies. But neither of them was opposed to human artifice or technology per se. Indeed, Illich, especially, sought to encourage the development of what he called convivial tools. Illich also supplied us with the eminently useful concept of thresholds or limits beyond which practices, technologies, or institutions become counterproductive and even destructive. This seems like a useful concept to apply to the question of artificiality.
So I find myself wondering if there is a threshold of artificiality beyond which human artifice becomes counterproductive and destructive. I’m not thinking principally of particular technologies, which might be turned toward destructive ends. I’m thinking, rather, of an aggregate degree of artificiality distancing us from the non-human world to such an extent that — paradox again (!) — our capacity to flourish as human beings is diminished. What are the consequences of so structuring our necessarily artificial environment that we find ourselves largely indifferent to the rhythms, patterns, and textures of the non-human world? What are the physical consequences? What are the emotional or psychological consequences? At what cost to the earth is our artificial world purchased?
I found myself also reflecting on the Prologue to Hannah Arendt’s The Human Condition. “The earth is the very quintessence of the human condition,” Arendt observed in the mid-20th century,
“and earthly nature, for all we know, may be unique in the universe in providing human beings with a habitat in which they can move and breathe without effort and without artifice. The human artifice of the world separates human existence from all mere animal environment, but life itself is outside this artificial world, and through life man remains related to all other living organisms.”
“For some time now,” Arendt went on to say, “a great many scientific endeavors have been directed toward making life also ‘artificial,’ toward cutting the last tie through which even man belongs among the children of nature.”
She was not sanguine about the prospects. “This future man,” she observed, “[…] seems to be possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something he has made himself.”
Arendt’s reflections were spurred by the launch of Sputnik, the first artificial satellite to orbit the earth. She noted that scientists on both sides of the Cold War had already speculated about humanity’s destiny being extra-terrestrial (anticipating more recent pronouncements by notable tech moguls). She also speculated about the impact of automation on human labor and the consequences of biological engineering. In other words, her concerns have aged well.
I think, for example, of the recent announcement about the first human-monkey chimeras, a rather striking example of what Bruno Latour described as the modern constitution. Latour argued that modernity consisted of a double movement of purification and hybridization. On the surface, the modern world is constructed through a series of differentiations, which Latour calls purifications. Science is purified of faith, politics of religion. We might also add the separations of body and mind, nature and the human. Of course, Latour’s point was that we have never been modern in this sense. All the while, under the cover of this project of purification, all manner of hybridizations were underway. Human beings must first be distinguished from nature in order to then have their way with nature.
Now, while these hybridizations continue apace and the artificiality Arendt feared is alive and well, digital culture presents us with novel forms of artificiality that pose a different set of challenges. Consider, for example, Marc Andreessen’s recent response to a question about the possibly detrimental consequences of “constant, instantaneous contact” enabled by digital technology.
“Your question is a great example of what I call Reality Privilege,” Andreessen claimed. He went on to elaborate as follows:
This is a paraphrase of a concept articulated by Beau Cronin: "Consider the possibility that a visceral defense of the physical, and an accompanying dismissal of the virtual as inferior or escapist, is a result of superuser privileges." A small percent of people live in a real-world environment that is rich, even overflowing, with glorious substance, beautiful settings, plentiful stimulation, and many fascinating people to talk to, and to work with, and to date. These are also *all* of the people who get to ask probing questions like yours. Everyone else, the vast majority of humanity, lacks Reality Privilege — their online world is, or will be, immeasurably richer and more fulfilling than most of the physical and social environment around them in the quote-unquote real world.
The Reality Privileged, of course, call this conclusion dystopian, and demand that we prioritize improvements in reality over improvements in virtuality. To which I say: reality has had 5,000 years to get good, and is clearly still woefully lacking for most people; I don't think we should wait another 5,000 years to see if it eventually closes the gap. We should build -- and we are building -- online worlds that make life and work and love wonderful for everyone, no matter what level of reality deprivation they find themselves in.
There’s obviously a great deal worth contesting in these two paragraphs, but, setting most of that aside, consider it in light of Arendt’s observations. Much of this seems to be quite different from the concerns that animated Arendt’s thinking nearly 70 years ago, but, in fact, I’d say the pattern is similar, except that Andreessen is defending a degree of digital artificiality that Arendt would almost certainly find questionable. In both framings, human artifice risks attenuating the relationship between the earth and the human condition. What is striking in both cases, however, may be how they reveal a double movement structurally similar to the one Latour described: one story veils another.
The story of a human retreat from this world, either to the stars above or the virtual realm within, can mask a disregard for or resignation about what is done with the world we do have, both in terms of the structures of human societies and the non-human world within which they are rooted. Put another way, we might say that imagining the digital sphere as a realm hermetically sealed off from the so-called “real world” gave cover to momentous analog-digital hybridizations that were already well underway throughout human society. The digital world is not the analog world; neither is it separate from it.
That seems like a good way to frame the broader question of artificiality. The trick is not to collapse the apparent paradox or tension. The human-built world is not equivalent to the non-human world, but neither is it separate from it. It is critical that we recognize the distinctive features of each realm while also reckoning with their myriad points of interrelationship and interdependence.
I would argue that there are, in fact, thresholds of artificiality beyond which human artifice becomes counterproductive, but also that we ought to think about this in more than merely human terms. It often seems that a critique of artificiality generates a desire for the “natural.” Most of the time in these discussions, “nature” remains in the realm of standing reserve, raw material for the sake of human use. More to the point, it is commodified. When human artifice, in the modern techno-capitalist mode, has enclosed the non-human world, nature is always returned to us at a price, one that increasingly few are able to afford.
A few days ago, a handful of similar stories or anecdotes about technology came to my attention. While they came from different sectors and were of varying degrees of seriousness, they shared a common characteristic. In each case, there was either an expressed bewilderment or admission of obliviousness about the possibility that a given technology would be put to destructive or nefarious purposes. Naturally, I tweeted about it … like one does.
I subsequently clarified that I was not subtweeting anyone in particular, just everything in general. Of course, naiveté, hubris, and recklessness don’t quite cover all the possibilities—nor are they mutually exclusive.
In response, someone noted that “people find it hard to ‘think like an *-hole’, in @mathbabedotorg's phrase, because most aren’t.” That handle belongs to Cathy O’Neil, best known for her 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.
There’s something to this, of course, and, as I mentioned in my reply, I truly do appreciate the generosity of this sentiment. I suggested that the witness of history is helpful on this score, correcting and informing our own limited perspectives. But I was also reminded of a set of questions that I had put together back in 2016 in a moment of similar frustration.
The occasion then was the following observation from Om Malik:
“I can safely say that we in tech don’t understand the emotional aspect of our work, just as we don’t understand the moral imperative of what we do. It is not that all players are bad; it is just not part of the thinking process the way, say, ‘minimum viable product’ or ‘growth hacking’ are.”
Malik went on to write that “it is time to add an emotional and moral dimension to products,” by which he seems to have meant that tech companies should use data responsibly and make their terms of service more transparent. In my response at the time, I took the opportunity to suggest that we needn’t add an emotional and moral dimension to tech; it was already there. The only question concerned its nature. As Langdon Winner had famously inquired, “Do artifacts have politics?” and answered in the affirmative, I likewise argued that artifacts have ethics. I then went on to produce a set of 41 questions that I drafted with a view to helping us draw out the moral or ethical implications of our tools. The post proved popular at the time, and I received a few notes from developers and programmers who had found the questions useful enough to print out and post in their workspaces.
This was all before the subsequent boom in “tech ethics,” and, frankly, while my concerns obviously overlap to some degree with those of the most vocal and popular representatives of that movement, I’ve generally come at the matter from a different place and have expressed my own reservations about the shape more recent tech ethics advocacy has taken. Nonetheless, I have defended the need to think about the moral dimensions of technology against the notion that all that matters are the underlying dynamics of political economy (e.g., here and here).
I won’t cover that ground again, but I did think it might be worthwhile to repost the questions I drafted then. It’s been more than six years since I first posted them, and, while some of you reading this have been following along since then, most of you picked up on my work in just the last couple of years. And, recalling where we began, trying to think like a malevolent actor might yield some useful insights, but I’d say that we probably need a better way to prompt our thinking about technology’s moral dimensions. Besides, worst-case malevolent uses are not the only morally significant aspects of our technology worth our consideration, as I hope some of these questions will make clear.
This is not, of course, an exhaustive set of questions, nor do I claim any unique profundity for them. I do hope, however, that they are useful, wherever we happen to find ourselves in relation to technological artifacts and systems. At one point, I had considered doing something a bit more with these, possibly expanding on each briefly to explain the underlying logic and providing some concrete illustrative examples or cases. Who knows, maybe that would be a good occasional series for the newsletter. Feel free to let me know what you think about that.
Anyway, without further ado, here they are:
* What sort of person will the use of this technology make of me?
* What habits will the use of this technology instill?
* How will the use of this technology affect my experience of time?
* How will the use of this technology affect my experience of place?
* How will the use of this technology affect how I relate to other people?
* How will the use of this technology affect how I relate to the world around me?
* What practices will the use of this technology cultivate?
* What practices will the use of this technology displace?
* What will the use of this technology encourage me to notice?
* What will the use of this technology encourage me to ignore?
* What was required of other human beings so that I might be able to use this technology?
* What was required of other creatures so that I might be able to use this technology?
* What was required of the earth so that I might be able to use this technology?
* Does the use of this technology bring me joy? [N.B. This was years before I even heard of Marie Kondo!]
* Does the use of this technology arouse anxiety?
* How does this technology empower me? At whose expense?
* What feelings does the use of this technology generate in me toward others?
* Can I imagine living without this technology? Why, or why not?
* How does this technology encourage me to allocate my time?
* Could the resources used to acquire and use this technology be better deployed?
* Does this technology automate or outsource labor or responsibilities that are morally essential?
* What desires does the use of this technology generate?
* What desires does the use of this technology dissipate?
* What possibilities for action does this technology present? Is it good that these actions are now possible?
* What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
* How does the use of this technology shape my vision of a good life?
* What limits does the use of this technology impose upon me?
* What limits does my use of this technology impose upon others?
* What does my use of this technology require of others who would (or must) interact with me?
* What assumptions about the world does the use of this technology tacitly encourage?
* What knowledge has the use of this technology disclosed to me about myself?
* What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
* What are the potential harms to myself, others, or the world that might result from my use of this technology?
* Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
* Does my use of this technology encourage me to view others as a means to an end?
* Does using this technology require me to think more or less?
* What would the world be like if everyone used this technology exactly as I use it?
* What risks will my use of this technology entail for others? Have they consented?
* Can the consequences of my use of this technology be undone? Can I live with those consequences?
* Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
* Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?
“It appears to me that we cannot neglect the disciplined recovery, an asceticism, of a sensual praxis in a society of technogenic mirages. This reclaiming of the senses, this promptitude to obey experience, the chaste look that the Rule of St. Benedict opposes to the cupiditas oculorum (lust of the eyes), seems to me to be the fundamental condition for renouncing that technique which sets up a definitive obstacle to friendship.” — Ivan Illich, “To Honor Jacques Ellul” (1993)
I don’t usually write multi-part posts, but I did conclude an earlier installment with the promise of addressing one more development in the way I’ve come to think about attention. The essay here will (finally) pick up that thread. This post is a long one, so here’s the executive summary. Attending to the world is an embodied practice involving our senses, and how we experience our senses has a history. The upshot is that we might be able to meet some of the challenges of the age by cultivating an askesis of perception.
As I explained last time around, I’ve been rethinking some aspects of how I talk about attention, a topic that has generated a great deal of interest in the age of digital media. I argued that we ought to reconsider the way the problem of attention tends to be framed by the logic of scarcity, which naturally lends itself to economic categories, and I suggested, too, that we proceed on the assumption that we have all the attention we need so long as that attention is properly ordered. What I have to say in this installment doesn’t exactly depend on those earlier reflections, but it’s probably worth mentioning that what follows picks up where I had left off in that essay.
The additional line of thought, which I want to pursue here, involves the relationship between attention and the body. The reflections that led me down this path began with the realization that attention discourse tends to abstract attention away from the body. When we talk about attention, in other words, we tend to talk as if this faculty had no particular relationship to the activities of the body.
This is not altogether surprising if we think about attention as the capacity to focus our thinking on a particular object. In this mode, attention is a strictly mental activity. We might even close our eyes in order to do it well. In this sense, attention becomes nearly synonymous with the activity of thinking itself. Or, alternatively, with prayer. It was Simone Weil, for example, who claimed that “absolutely unmixed attention is prayer.”
But this is not the only mode of attention, of course. More often than not, when we talk about attention in relation to digital media we are talking about our capacity to attend to something or someone out there in the world. What we are doing in such cases is somewhat different from what we do when we attempt to think deliberately about a problem, say, or when we are concentrating on a memory of the past. When we attend to the world beyond our head, to borrow the title of Matthew Crawford’s book from a few years back, we are doing so through the mediation of our perceptual apparatus: we are looking with our eyes, smelling with our noses, listening with our ears, feeling with our fingertips, or tasting with our mouths. In other words, attention discourse tends to make a mental abstraction out of an ordinarily embodied practice. (And, I’ll mention in passing that it’s probably worth reflecting on the fact that attention, if we link it to the senses at all, is almost always linked to either seeing or hearing.)
Consider, for example, this rather well-known paragraph from the American philosopher William James’s Principles of Psychology, published in 1890:
“Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies a withdrawal from some things in order to deal effectively with others.”
This is all fine and good, of course. It accords with how most of us think about attention. I’m not suggesting that attention is anything less than this, only that we might improve our understanding of what is happening when we attend to the world if we also attend to what our senses are up to when we do so. A determination to see may only get us so far if we do not also think about how we see or, and this will be a critical point, recognize that our seeing can be trained. Moreover, James’s definition of attention leaves little room for the role that beauty or desire or ethics might play in the dance between our consciousness and the world. Attention is reduced to the exercise of raw mental will-power.
As with my earlier discussion of the rhetoric of scarcity that treats attention as a resource, I don’t want to overstate my point here. I’m not suggesting that one should never speak about “attention” per se or that connecting attention more closely to our bodily senses will resolve our issues with attention. But, I do think there may be something to be gained in both cases. And here again I’m going to proceed with a little help from Ivan Illich, who, in the last phase of his intellectual journey, devoted a great deal of attention to the cultural history of the body and, specifically, sense experience.
I’ll start with a proposal Illich wrote for David Ramage, who was then president of McCormick Theological Seminary in Chicago. What I’m calling a proposal, Illich titled “ASCESIS. Introduction, etymology and bibliography.” The short seven-page document details a plan for a five-year sequence of lectures that Illich wanted to give on the role ascetic disciplines might play in higher education. The proposed courses would each take up a focal point of bodily sense experience. To my knowledge, these lectures were never delivered. Nonetheless, the proposal and some of Illich’s other work from around this time remain instructive.
To be clear at the outset, Illich was not calling for a return to the old ascetic disciplines we might associate with earlier monastic traditions, for example. “The asceticism which can be practiced at the end of the 20th century,” Illich explained, “is something profoundly different from any previously known.” Nor is it a specifically religious mode of asceticism that Illich has in mind. In his view the tradition he is reviving and re-working includes pagan philosophers as well as monastic scholars.
Illich thought that a rupture in this tradition had occurred sometime around the 12th century. This rupture had regrettably obscured the importance to learning of the body, including the affections.
“Learning presupposes both critical and ascetical habits; habits of the right and habits of the left,” Illich claimed. He added: “I consider the cultivation of learning as a dissymmetrical but complementary growth of both these sets of habits.” It wasn’t immediately obvious to me what Illich meant by habits of the right and the left, but in the same paragraph he goes on to mention “habits of the mind,” or the critical habits celebrated and cultivated in the humanist tradition of learning, and the consequent neglect of the “heart’s formation.” Interestingly, the latter task, in his estimation, had been lately relegated to “the media.”
“I want to explore the demise of intellectual asceticism as a characteristic of western learning since the time it became academic,” Illich went on to explain. “In this historical perspective,” he continued,
I want to argue for the possibility of a new complementarity between critical and ascetical learning. I want to reclaim for ascetical theory, method and discipline a status equal to that the University now assigns to critical and technical disciplines.
Reading Illich’s proposal from 1989, I find it all the more relevant thirty years later. Confronted with the challenges of information superabundance, a plague of misinformation and digital sophistry, the collapse of public trust in traditional institutions, and algorithmically manipulated feeds, the “solutions” proffered, such as increased fact checking, warning labels, or media literacy training, seem altogether inadequate.
From Illich’s perspective, we might say that they remain exclusively committed to the habits of the mind or the critical habits. What difference might it make for us to take Illich’s suggestion and consider the ascetic habits or habits of the body, holistically conceived? Might we do better to think about attention not as a resource that we pay or squander at the behest of the attention economy and its weaponized digital tools but rather as a bodily skill that we can cultivate, train, and hone?
To explore these questions, let’s walk through parts of two other pieces by Illich, “Guarding the Eye in the Age of Show” and “To Honor Jacques Ellul.” The former is one of the last things Illich wrote, just two years before his death in 2002. It is a 50-page distillation of his research in the cultural history of visual perception or the ethics of the gaze. The latter was, as the title suggests, a brief 1993 talk given in honor of the great critic of modern technology whose work had deeply informed Illich’s perspective.
“Guarding the Eye in the Age of Show,” were it written today, might be classified as a contribution to attention discourse, except that the word attention is used in this sense only once, and the same is true of its nemesis, distraction. Instead, Illich speaks of the ethical gaze or of what we do with our eyes and also how we conceive of vision itself, which, as it turns out, has a very interesting history (for more about that history, see another paper by Illich, “The Scopic Past and the Ethics of the Gaze”).
So, here is Illich’s mention of attention in a way that echoes contemporary attention discourse: “Even today, I feel guilty if I find my attention distracted from a medieval Latin text by the afterglow of the MTV to which I exposed my eyes.” (Who amongst us has not …) Setting aside the question of why Illich was watching MTV, let’s consider a bit more carefully what Illich has to say here.
He goes on to explain how “until quite recently, the guard of the eyes was not looked upon as a fad, nor written off as internalized repression. Our taste was trained to judge all forms of gazing on the other. Today, things have changed. The shameless gaze is in.” Illich was quick to add that he was not speaking of gazing at pornographic images. He was interested in recovering the idea that seeing was an action and not merely a passive process on the model of a lens receiving visual data. And, as an action, it had an ethical dimension (Illich was very much indebted to the philosopher Emmanuel Levinas on these matters).
He was concerned, too, with the way the gaze was captured or trapped by what he termed “the show.” Illich used show to distinguish the object of perception from the image, which had played such a critical if evolving role in traditional western philosophy and religion. At one point, he puts the question he wants to address this way: “What can I do to survive in the midst of the show?” A question that, I suspect, still resonates today.
Surviving the Show
So what exactly was the show? I’m tempted to say that we can think of it simply as what we take in when we glance at any of our screens. I don’t think Illich would disagree with that assessment, but that’s obviously not a very helpful definition and it’s clear that Illich thought the show was a broader phenomenon. The truth is that I find it difficult to precisely pin down what Illich had in mind, but let me at least try to fill out the concept a bit more.
Illich says at one point that “the distinction between image and show in the act of vision, though subtle, is fundamental for any critical examination of the sensual "I-thou" relationship. To ask how I, in this age and time, still can see you face-to-face without a medium, the image, is something different from asking how I can deal with the disembodying experience of ‘your’ photographs and telephone calls, once I have accepted reality sandwiched between shows.”
Here, two things are clear. Illich is striving to preserve the possibility of “seeing” the person before our eyes, and the show as he understands it, the pervasive field of technological mediations that shapes our perception of the other, threatens to obscure our ethical vision. I think of the way this very language has emerged to describe the act of beholding the person in a morally substantive way. We hear, for example, of the desire to “be seen,” by which, of course, something much deeper is in view than merely appearing within someone’s visual field. Or, somewhat less seriously, we joke about “feeling seen,” which is to say that some meme has come uncomfortably close to capturing some aspect of our personality.
Further on in the paper Illich writes, “I argue that ‘show’ stands for the transducer or program that enables the interface between systems, while ‘image’ has been used for an entity brought forth by the imagination. Show stands for the momentary state of a cybernetic program, while image always implies poiesis. Used in this way, image and show are the labels for two heterogeneous categories of mediation.”
My sense, deriving from the passing reference to cybernetics, is that the distinction between the image and the show tracks with the distinction, critical to Illich’s later work, which he drew between the age of instruments and the age of systems. While I don’t think that Illich rigorously developed this distinction anywhere in writing, one key element involved the manner in which the system, as opposed to the mere instrument, enveloped the user. It was possible to stand apart from the instrument and thus to attain a level of mastery over it. It was not possible to likewise stand apart from the system. Which may explain why Illich, as we’ll see shortly, concluded, “There can be rules for exposure to visually appropriating pictures; exposure to show may demand a reasoned stance of resistance.”
Elsewhere he says that in our present media ecosystem our gaze is sometimes “solicited by images, but at other times it is mesmerized by show.” The difference between solicitation and mesmerization seems critical. It is in this context that he also writes, “An ethics of vision would suggest that the user of TV, VCR, McIntosh and graphs protect his imagination from overwhelming distraction, possibly leading to addiction.”
Extrapolating a bit, then, and even taking the word show at face value, we might say that there was something dynamic and absorbing about the show that distinguished it from the image. (Let me say at this point that I’m not getting into Illich’s discussion of the image, which takes up classical philosophy and medieval theology. Click through to read the whole paper for that discussion.)
Things then get a bit more interesting toward the tail end of the article as Illich brings his historical survey of the gaze into the modern era. If we thought that Illich was connecting the show exclusively to the age of electronic media or even the proliferation of images in the late nineteenth century, we might be taken aback when he claims that “the replacement of the picture by the show can be traced back into the anatomical atlases of the late eighteenth century.” It’s a good reminder that, as eclectic as Illich’s talents and interests were, he always remained, in some fundamental sense, a historian.
“With the transition from the age of pictures to the age of show,” Illich had just written, “step by step, the viewer was taken off his feet. We were trained to do without a common ground between the registering device, the registered object and the viewer.” I hear echoes of Baudrillard in these lines, but not quite. It is not an image that has no referent in the world but rather a way of relating to the world that has no referent in our experience, a mediation of the world that displaces the ordinary or even carefully trained mediation of the human sensorium.
Citing the work of his frequent collaborator, Barbara Duden, Illich writes, “the anatomists looked for new drawing methods to eliminate from their tables the perspectival ‘distortion’ that the natural gaze inevitably introduces into the view of reality.” “They want a blueprint of the object,” he adds, and “They want measures, not views. They look at the world, more architectonico, according to the layout of architectural drawings.” The new scopic regime, as he calls it, spills out from anatomy to geology and zoology. “Thanks to the new printing techniques,” Illich concludes, “the study of nature increasingly becomes the study of scientific illustrations.”
This incipient form of the show appears to involve a means of representation that abstracts perception from the bodily frame of reference. It presents us with a view of the world that, while highly generative in many respects, may nonetheless leave us ill prepared to see the world as it is available to us through sense experience.
One of the more compelling bits of evidence Illich marshals in his historical investigations involves the shrinking diversity of words to designate varieties of sense experience. Summarizing the work of a variety of scholars, Illich noted,
Dozens of words for shades of perception have disappeared from usage. For what the nose does, someone has counted the victims: Of 158 German words that indicate variations of smell, which Dürer's contemporaries used, only thirty-two are still in use. Equally, the linguistic register for touch has shriveled. The see-words fare no better.
Sadly, this accords with my own experience, and I wonder whether it rings true for you. Upon reflection, I have a paucity of words at my disposal with which to name the remarkable variety of experiences and sensations that the world offers.
Picking up the storyline again, the incipient form of the show was then reinforced or, perhaps Illich would say, popularized by the advent of the stereoscope. The stereoscope is one of several widely used nineteenth century devices for manipulating visual experience. The Claude glass is another example that comes to mind. Illich described the stereoscope as follows:
“Two simultaneous exposures are made next to each other on the same photographic plate through two lenses distanced from each other by a few inches. The developed picture postcard is placed into a box and viewed through a pair of special spectacles. The result is a ‘surrealist’ dimensionality. The foreground and background that lie outside the focus are fuzzy, while the focused object floats in unreal plasticity.”
He noted that his grandmother, in the early twentieth century, was still bringing back these stereo cards from her travels.
Speaking of the emergence of the scopic regime of the show in the early nineteenth century, Illich concluded, “New optical techniques were used to remove the picture of reality from the space within which the fingers can handle, the nose can smell and the tongue can taste it, and show it in a new ‘objective’ isometric space into which no sentient being can enter.”
It may be helpful to draw in Illich’s evolving critique of medicine for an example of the show that is not directly related to what we typically think of as media technology. During the 1980s, as the consequences of the shift from instruments to systems were dawning on Illich, he came to see that one of the harms of modern institutionalized medicine was the implicit displacement of the lived body by the body that is a system apprehended by diagnostic tools. It is the body reduced to one’s chart, health as conformity to statistical averages and patterns. The individual and the particularities of their body are lost. I don’t know that Illich ever puts it this way, but it seems clear to me that this can be understood as medicine in thrall to the show. It is not, of course, that such information is useless; rather, it is that something is lost when our vision of the human is thus reduced to data flows, and that loss, difficult or perhaps impossible to quantify, can have profound consequences. In the case of medicine, for example, it can paradoxically generate greater forms of suffering.
Ocular Askesis
As he made clear at the outset of this paper, Illich undertook this examination of the history of visual perception in order to explore the ethics of the gaze and how “seeing and looking is shaped by personal training (the Greek word would be askesis), and not just by contemporary culture.” Or, as he also put it, “My motive for studying the gaze of the past is a wish to rediscover the skills of an ocular askesis.”
In other words, Illich invites us to consider what it might mean to discipline our vision, and I’m inviting us to consider whether this is not a better way of framing our relationship to the digital media ecosystem. The upshot is recognizing the additional dimensions of what is often framed as a merely intellectual problem and thus met with laughably inadequate techniques. Perceptual askesis would involve our body, our affections, our desires, and our moral character as well as our intellect.
The first step would be to recognize that vision is, in fact, subject to training, that it is more than the passive reception of sensory data. In fact, of course, our vision is always being disciplined. Either it happens unwittingly as a function of our involvement with the existing cultural structures and patterns, or we assume a measure of agency over the process. Illich’s historical work, then, denaturalizes vision in order to awaken us to the possibility of training our eyes. Our task, then, would be to cultivate an ethos of seeing or new habits of vision ordered toward the good. And, while the focus here has fallen on sight, Illich knew and we should remember that all the senses can be likewise trained.
“Guarding the Eye in the Age of Show” is a long and scholarly article. Illich’s comments in honor of Jacques Ellul, delivered a few years earlier, cover much of the same ground in a more direct, urgent, and prophetic style. It will be worth our time, I think, to close by considering some of these comments, because they might give us a better idea of the nature of the good, in Illich’s view, toward which a perceptual askesis should be ordered.
In one of the clearest statements of the concerns that were animating his work during this time, Illich declared that “existence in a society that has become a system finds the senses useless precisely because of the very instruments designed for their extension. One is prevented from touching and embracing reality.” And, what’s more, “it is this radical subversion of sensation that humiliates and then replaces perception.”
In a similarly dire vein, Illich continued: “We submit ourselves to fantastic degradations of image and sound consumption in order to anesthetize the pain resulting from having lost reality.” He then added: “To grasp this humiliation of sight, smell, touch, and not just hearing, it was necessary for me to study the history of bodily acts of perception.”
What is evident here is that Illich wanted to defend a way of being in the world that took the body as its focal point. He spoke of this as a matter of “touching and embracing reality.” While I’m deeply sympathetic to Illich’s point of view here, I think I might put things a bit differently. Reality is a bit too elastic and protean a term to be helpful in this case. Better, in my view, to make the case that we risk missing out on a fuller experience of the depth and diversity of things, along with the pleasures and satisfactions such an experience might yield.
It seems to me relatively uncontroversial to observe that, for example, looking can be distinguished from seeing. I might look at a painting, for instance, and fail to see it for what it is. The same is true of a landscape, a single tree, a bird, an elegant building, or, most significantly, a person. If my vision is trained by the show, will I be able to see the person before me who cannot match the show’s dynamic, mesmerizing quality? And, from Illich’s perspective, it is not only that I would fail to accord my neighbor the honor they are owed but that I would lose myself in the process, too. Eyes trained by the show would be unable “to find joy in the only mirror in which I can discover myself, the pupil of the other.”
News and Resources
* A review of Frank Pasquale’s New Laws of Robotics, “A World Ruled by Persons, Not Machines”: “Frank Pasquale’s thought-provoking and deeply humanist New Laws of Robotics: Defending Human Expertise in the Age of AI pledges that another story is possible. We can achieve inclusive economic prosperity and avoid both traps of mass technological unemployment and low labor productivity. His central premise is that technology need not dictate our values, but instead can help bring to life the kind of future we want. But to get there we must carefully plan ahead while we still have time; we cannot afford a ‘wait-and-see’ approach.”
* Douglas Rushkoff on “why people distrust ‘the Science’”: “By disconnecting science from the broader, systemwide realities of nature, human experience, and emotion, we rob it of its moral power. The problem is not that we aren’t investing enough in scientific research or technological answers to our problems, but that we’re looking to science for answers that ultimately require human moral intervention.”
* Chad Wellmon is always insightful on the history of higher education. Here he is answering the question, “What must one believe in to be willing to borrow tens of thousands of dollars in order to pursue a certification of completion — a B.A.?”: “Atop this pyramid scheme sit institutions like my own, the University of Virginia, which masks its constant competition for more — more money, more status, more prestige — as a belief in higher learning. Given the goals they set for themselves, UVA and other wealthy institutions need the system of higher education to continue just as it is. They profess to do so out of a faith that meritocracy’s hidden hand will watch over their graduates, ensuring the liberal, progressive order. And they hire professionals to manage that faith, such as UVA’s recently appointed vice provost for enrollment, who will ensure the most efficient use of students’ hopes in higher education to maximize revenues.”
* An essay adapted from The Filing Cabinet: A Vertical History of Information by Craig Robertson: “The filing cabinet does not just store paper; it stores information; and because the modern world depends upon and is indeed defined by information, the filing cabinet must be recognized as critical to the expansion of modernity. In recent years scholars and critics have paid increasing attention to the filing systems used to store and retrieve information critical to government and capitalism, particularly information about people — case dossiers, identification photographs, credit reports, et al. But the focus on filing systems ignores the places where files are stored. Could capitalism, surveillance, and governance have developed in the 20th century without filing cabinets? Of course, but only if there had been another way to store and circulate paper efficiently. The filing cabinet was critical to the infrastructure of 20th-century nation states and financial systems; and, like most infrastructure, it is often overlooked or forgotten, and the labor associated with it minimized or ignored.”
* Opening of a review of Entangled Life: How Fungi Make Our Worlds, Change Our Minds and Shape Our Futures by Merlin Sheldrake: “Try to imagine what it is like to be a fungus. Not a mushroom, pushing up through damp soil overnight or delicately forcing itself out through the bark of a rotting log: that would be like imagining the grape rather than the vine. Instead try to think your way into the main part of a fungus, the mycelium, a proliferating network of tiny white threads known as hyphae. Decentralised, inquisitive, exploratory and voracious, a mycelial network ranges through soil in search of food.”
* On the story of Robert McDaniel, who in 2013 was identified as a potential victim or perpetrator of a violent crime by a predictive policing algorithm: “He invited them into this home. And when he did, they told McDaniel something he could hardly believe: an algorithm built by the Chicago Police Department predicted — based on his proximity to and relationships with known shooters and shooting casualties — that McDaniel would be involved in a shooting. That he would be a ‘party to violence,’ but it wasn’t clear what side of the barrel he might be on. He could be the shooter, he might get shot. They didn’t know. But the data said he was at risk either way.”

Increasingly, it seems to me that we are presented with two paths along which we might make our way in the world. These two paths can be characterized in any number of ways. One path is marked by the desire to control experience, even the experience of others, and predictive technologies serve this purpose. The other path is marked by a greater degree of openness to experience in the interest of freedom, with the risks this entails, and, as I’ve suggested elsewhere, relies on promise rather than prediction.
* In light of the turn toward privacy in the tech industry, Evgeny Morozov wonders whether it was a mistake to put so much emphasis on matters of privacy in an effort to meet the challenges posed by big tech corporations: “Yet I wonder if these surprising victories for the privacy movement may, in the end, turn out to be pyrrhic ones – at least for the broader democratic agenda. Instead of reckoning with the broader political power of the tech industry, the most outspoken tech critics have traditionally focused on holding the tech industry to account for numerous violations of existing privacy and data protection laws.”

While not specifically focusing on the privacy critique, three years ago I wrote in my first piece for The New Atlantis that, whatever we make of the so-called tech backlash, it was not a serious critique of contemporary technology. I tend to think that piece holds up pretty well.
* Interesting essay in Real Life by Lauren Colleen: “The technologies that evoke synaesthetic fallacy expedite the easy translation of all experience into data, and all data into capital. At the same time they mystify this process, cloaking it in the language of scientized magic. Synaesthetic fallacy is wielded as a tool for re-branding the interfaces that serve the data economy as essential mediators of a broken relationship between screen-obsessed humans and the external world. It seduces us into the illusion that tech can ever function as simply a neutral translator.”
Re-framings
— From an interview with Suzanne Simard in Emergence Magazine. By the way, Emergence is a really interesting and beautifully put together publication. You should check it out.
EM Throughout your work, you’ve kind of departed from conventional naming in many ways—“mother,” “children,” “her.” You use very unscientific terms—and, it seems, quite deliberately, as you just described—to create connection, relationship. But it’s not the normal scientific practice.
SS No, it’s not, and I can hear all the criticisms going on, because in the scientific world there are certain things that could kill your career, and anthropomorphizing is one of those things. But I’m at the point where it’s okay; that’s okay. There’s a bigger purpose here. One is to communicate with people, but also—you know, we’ve separated ourselves from nature so much that it’s to our own demise, right? We feel that we’re separate and superior to nature and we can use it, that we have dominion over nature. It’s throughout our religion, our education systems, our economic systems. It is pervasive. And the result is that we have loss of old-growth forests. Our fisheries are collapsing. We have global change. We’re in a mass extinction.
I think a lot of this comes from feeling like we’re not part of nature, that we can command and control it. But we can’t. If you look at aboriginal cultures—and I’ve started to study our own Native cultures in North America more and more, because they understood this, and they lived this. Where I’m from, we call our aboriginal people First Nations. They have lived in this area for thousands and thousands of years; on the west coast, seventeen thousand years—for much, much longer than colonists have been here: only about 150 years. And look at the changes we’ve made—not positive in all ways.
Our aboriginal people view themselves as one with nature. They don’t even have a word for “the environment,” because they are one. And they view trees and plants and animals, the natural world, as people equal to themselves. So there are the Tree People, the Plant People; and they had Mother Trees and Grandfather Trees, and the Strawberry Sister and the Cedar Sister. And they treated them—their environment—with respect, with reverence. They worked with the environment to increase their own livability and wealth, cultivating the salmon so that the populations were strong, the clam beds so that clams were abundant; using fire to make sure that there were lots of berries and game, and so on. That’s how they thrived, and they did thrive. They were wealthy, wealthy societies.
I feel like we’re at a crisis. We’re at a tipping point now because we have removed ourselves from nature, and we’re seeing the decline of so much, and we have to do something. I think the crux of it is that we have to re-envelop ourselves in our natural world; that we are just part of this world. We’re all one, together, in this biosphere, and we need to work with our sisters and our brothers, the trees and the plants and the wolves and the bears and the fish. One way to do it is just start viewing it in a different way: that, yes, Sister Birch is important, and Brother Fir is just as important as your family.
Anthropomorphism—it’s a taboo word and it’s like the death knell of your career; but it’s also absolutely essential that we get past this, because it’s an invented word. It was invented by Western science. It’s a way of saying, “Yeah, we’re superior, we’re objective, we’re different. We can overlook—we can oversee this stuff in an objective way. We can’t put ourselves in this, because we’re separate; we’re different.” Well, you know what? That is the absolute crux of our problem. And so I unashamedly use these terms. People can criticize it, but to me, it is the answer to getting back to nature, getting back to our roots, working with nature to create a wealthier, healthier world.
The Conversation
Reminder: the previous post announced the results of my experiment with much shorter, “is this anything?” posts. The feedback was overwhelmingly positive, so you can expect one or two of those in the coming days.
Also in the last post, I suggested that if you were not keen on supporting the Convivial Society directly through this platform, you could name a price on my ebook. The response indicated that it might be worthwhile to offer a standing alternative, so I created some subscription options through Gumroad. And, finally, if you value the newsletter, please consider yourself duly encouraged to share it with others.
Cheers,
Michael
Some have argued that one benefit of the new newsletter ecosystem is a return to the older conventions of blogging in its halcyon days. I don’t know about that. I doubt you or I really want two or three dispatches from every newsletter we’ve subscribed to arriving in our inbox every day. That said, I do occasionally feel the attraction of that older form. I won’t say that this installment is blog-ish in that way, but I sat down to compose it in that old, familiar frame of mind: aiming at something relatively brief and discursive, suggestive rather than fully developed. As always, I hope you find it useful.
There is a genre of tweet that begins thus: “I don’t know who needs to hear this, but …” Well, it’s in that spirit, and un-ironically, that I write what follows.
Yesterday, I listened to a radio program with two thoughtful scholars and mental health practitioners on the subject of doom scrolling, the habit of thoughtlessly and compulsively scrolling through our infinite feeds with no clear sense of direction or purpose, often late at night when we ought to be sleeping, and with the result of inducing greater degrees of anxiety and unease.
One of the guests explained doom scrolling as an effect of having to navigate unfamiliar and potentially threatening situations with inadequate information and little clarity about what one ought to do. Understood this way, doom scrolling is just a function of our need for decent maps of the world.
This is an entirely plausible account of doom scrolling, even if it does not account for every dimension of the practice. For example, I’d say that what we’re often after is not information per se but an affective fix. Nonetheless, the term did explode onto the lexical scene last year as the pandemic wildly reconfigured the way most of us live, transforming everyday decisions we used to make rather carelessly into matters of complex actuarial decision making: when to go out, whom to see, at what distance, for how long, with what precautions. Etc., etc. Some handled these conditions better than others, but it’s easy to see how under such circumstances one would cast about for any bit of news or information that would help clarify matters and relieve the acute uncertainty, anxiety, and fear we might have been experiencing.
Of course, in this case, as in so many others, the pandemic merely revealed and heightened an already existing pattern. While the term was popularized under pandemic conditions, it pre-dates the public health crisis by at least two years and certainly describes a phenomenon that was common long before then. And, I would add, whatever its precise relationship to the pandemic, the practice of doom scrolling will persist independently of the uncertainties and anxieties generated by the pandemic. Previously, I’ve characterized the activity we call doom scrolling as structurally induced acedia, and the conditions that thus tempt us aren’t going anywhere.
What struck me about the characterization of doom scrolling in the interview, however, was the implicit assumption embedded in the terms of the analysis, an assumption that acts as the mechanism linking the experience of uncertainty to the practice of doom scrolling.
That assumption, simply put, is that what we need in order to better navigate uncertainty is more information.
I grant that this is obviously true to some extent. Good information can helpfully inform our choices. And in the total absence of good information, we would rightly feel altogether adrift and at the mercy of forces beyond our ken.
Yet, it is also the case that our problem with information is not that we have too little of it but rather that we have too much. Granted, in saying this I am assuming a distinction between information and knowledge, which is to say that an abundance of information does not necessarily imply an abundance of knowledge.
My point turns out to be relatively straightforward: maybe you and I don’t need more information. And, if we think that the key to navigating uncertainty and mitigating anxiety is simply more information, then we may very well make matters worse for ourselves.
Believing that everything will be better if only we gather more information commits us to endless searching and casting about, to one more swipe of the screen in the hope that the elusive bit of data, which will make everything clear, will suddenly present itself. From one angle, this is just another symptom of reducing our experience of the world to the mode of consumption. In this mode, all that can be done is to consume more, in this case more information, and what we need seems always to lie just beyond the realm of the actual, hidden beyond the horizon of the possible.
And, once again, this mode of being, with regard to navigating uncertainty, has the paradoxical effect of sinking us ever deeper into indecision and anxiety because the abundance of information, especially if it is encountered as discrete bits of under-interpreted data, will only generate more uncertainty and frustration.
One alternative to this state of affairs is to ditch the idea, should we be under its sway, that what we need to make our way in the world is simply more information. For one thing, there are practical difficulties: even in cases where more information might be genuinely helpful, it may not be forthcoming when we need it. But, more importantly, some matters cannot be adequately decided simply by gathering more information and plugging it into some sort of value-neutral formula. Indeed, we might even say that what we need to make is not merely a decision but more like a commitment, with all the risk, responsibility, and promise that this entails.
What we might truly need, then, is not information but something else altogether: courage, patience, practical wisdom, and, perhaps most importantly, friendship. Of course, these can be harder to come by than mere information, however valuable it may be.
I trust that there is no need to further clarify why what we might really need in the face of uncertainty might be courage, patience, and wisdom. Lacking these, I might add, it is easy to see how we might take refuge in the idea that we lack sufficient information. The claim that I’m holding out for more information can neatly mask my lack of courage to do what I know needs to be done.
But it’s worth reflecting for just a moment on the last of these: friendship. I was thinking here of how isolation and loneliness, which I would sharply distinguish from solitude, can warp and disfigure our cognitive faculties. The more isolated we find ourselves, the more harrowing and disorienting the experience of uncertainty.
Moreover, if we do proceed, as we often must, without the benefit of certainty, venturing forth and assuming the real risks that must accompany our action in this world—especially once we renounce the imperative to control, manage, and master—then it would be a far better thing to do so in the affectionate and heartening company of friends who will sustain us in our failures and celebrate our triumphs. After all, it is easier by far to take a step into the unknown with another walking alongside of us than it is to do so alone. If I must bear the consequences of my choices alone, if there is no one whose counsel I trust, then it becomes especially tempting to seek both perfect knowledge and certainty before acting, and find myself paralyzed in their absence.
Unfortunately, the patterns of our techno-social order tend toward the fracturing of community and the isolation of the person. We are offered an array of tools that promise to assuage the resulting economic and psychic precarity, but, more often than not, the real aim implicit in their design is to perpetuate and accelerate social fragmentation and to cultivate ever deeper dependency in their users. They tend to inhibit the enduring satisfaction of our genuine needs in order to perpetuate our dependence on their services. They distract us from attending to the roots of our disorders in order to keep trading in superficial and counterproductive “solutions.”
In a talk Ivan Illich gave late in his life, he made the following observation: “Learned and leisurely hospitality is the only antidote to the stance of deadly cleverness that is acquired in the professional pursuit of objectively secured knowledge.” Then he added, “I remain certain that the quest for truth cannot thrive outside the nourishment of mutual trust flowering into a commitment to friendship.”
I come back to these lines often, especially in light of the debilitating epistemic consequences of becoming too dependent on digital media to mediate our perception of the world. It may seem counter-intuitive to say that, in the face of the profound challenges our society faces, what we most need is the deliberate cultivation of friendship. But I also find myself thinking that this conclusion is, from one angle, inescapable. At the very least, it seems to me that we need such friendships as an anchor and a refuge from the disorienting tumult of the digitized public sphere and precarity of our social world.
“Attention discourse” is how I usually refer to the proliferation of essays, articles, talks, and books around the problem of attention (or, alternatively, distraction) in the age of digital media. While there have been important precursors to digital age attention discourse dating back to the 19th century, I’d say the present iteration probably kicked off around 2008 with Nick Carr’s essay in the Atlantic, “Is Google Making Us Stupid?” And while disinformation discourse has supplanted it in the public imagination over the past few years, attention discourse is alive and well.
I don’t intend for the label “attention discourse” to come off pejoratively or dismissively. In fact, I’ve made my own minor contributions to the genre. It was a recurring theme on the old blog (e.g.), and it was the subject of an installment of this newsletter just last summer. And I still think that the fate of attention in digital culture is a topic worthy of our considered reflection. More recently, however, I’ve been reconsidering the approach I’ve taken in the past to the question of attention. You can think of what follows as a report on the fruits of this reconsideration as it now stands.
As a point of departure, let’s begin with a recent column in the Times by Charlie Warzel, who, I should note, comments frequently on the dynamics of the attention economy. In this particular piece from February, Warzel profiles Michael Goldhaber, whom he calls “the internet prophet you’ve never heard of,” which, I confess, was certainly true for me. As Warzel recounts the story, Goldhaber was a physicist who sometime in the 1980s had an epiphany about the nature of attention in an age of information glut (and do note that this epiphany predates the rise of the commercial internet).
The epiphany, simply put, was that we live in an attention economy, a term Goldhaber did not coin but which he seems to have done a great deal to popularize, chiefly with a 1997 essay that appeared in Wired. Here is a key paragraph from that essay:
Yet, ours is not truly an information economy. By definition, economics is the study of how a society uses its scarce resources. And information is not scarce - especially on the Net, where it is not only abundant, but overflowing. We are drowning in information, yet constantly increasing our generation of it. So a key question arises: Is there something else that flows through cyberspace, something that is scarce and desirable? There is. No one would put anything on the Internet without the hope of obtaining some. It's called attention. And the economy of attention - not information - is the natural economy of cyberspace.
A bit further on, Goldhaber notes that “the attention economy is a star system, where Elvis has an advantage. The relationship between stars and fans is central.” But the average person is not altogether cut out of the attention economy, far from it. “Cyberspace,” Goldhaber explains, “offers a much more effective means of continuing and completing attention transactions, as well as opening up more possibilities to almost everyone. Whoever you are, however you express yourself, you can now have a crack at the global audience.”
Goldhaber goes on to argue that attention will, quite literally as I read him, become the real currency of the internet age, by which he means that it will eventually displace money. This portion of his argument seems to have missed the mark, although the entanglement of money and attention has certainly been borne out. Others will be better qualified to judge the financial aspects of Goldhaber’s vision of attention as currency.
Needless to say, developments in the NFT markets suggest some interesting lines of inquiry, something Warzel explored in his most recent column. There, Warzel walks us through some of the most recent trends among those who are, in fact, earning their livelihood by transforming attention into money. At the frontiers of attention monetization we find, for example, “a platform called NewNew, which wants to build a ‘human stock market,’ where fans can vote to control mundane decisions in a creator’s day-to-day life.” As I was composing this piece, a story about an artist who sold a portion of their skin as an NFT also crossed my feed [correction: it was a tennis player].
Clearly, there is a profoundly dark side to this vision of unfettered monetization. Frankly, it’s hard to read this as anything more than a form of voluntary indentured servitude. Beyond that, as Anil Dash explained to Warzel, “The gig economy is coming for absolutely everyone and everything […] The end game of that is the GoFundMe link posted beneath a viral tweet so they can pay for their health care. Being an influencer sounds fun until it’s ‘keep producing viral content to literally stay alive.’ That’s the machine we’re headed toward.”
In his 1997 essay, Goldhaber had anticipated some of these disturbing dynamics. “Already today,” he observed at the time, “no matter what you do, the money you receive is more and more likely to track the recognition that comes to you for doing what you do. If there is nothing very special about your work, no matter how hard you apply yourself you won't get noticed, and that increasingly means you won't get paid much either.”
Coming back to the piece with which we started, Warzel summarized Goldhaber’s thinking about the attention economy this way:
Every single action we take — calling our grandparents, cleaning up the kitchen or, today, scrolling through our phones — is a transaction. We are taking what precious little attention we have and diverting it toward something. This is a zero-sum proposition, he realized. When you pay attention to one thing, you ignore something else.
And, as we noted at the outset, Goldhaber was especially keen to point out that the value of attention is defined by its scarcity.
Not that long ago, I would’ve pretty much assented to this whole line of thought. Indeed, I know that I have also spoken and written about attention as a scarce resource that we ought to take great care in allocating. I would have had no problem at all with Howard Rheingold’s principle, cited by Goldhaber, that attention is a limited resource, so we should pay attention to where we pay attention.
While I remain quite sympathetic to the spirit of this line of thought, it now seems to me that the framing of the problem is itself part of the problem. To begin with, we might do well to stop thinking about attention as a scarce resource.
After he published a burst of spirited and prophetic works of social criticism in the early and mid-70s, Ivan Illich decided that it was time to re-evaluate his own critical approach. Despite their obvious faults, the industrial age institutions Illich targeted in his scathing critiques proved to be more resilient than he anticipated, and not necessarily because they were, in fact, useful, just, and sustainable enterprises. Rather, Illich came to the conclusion that we remained locked into these inevitably self-destructive institutional structures because they were, as David Cayley explained, “anchored at a depth that ‘rabble-rousing’ could not reach, even if it were as lucid and rhetorically refined as Illich’s critiques had been.” Illich began referring to the “certainties” upon which modern institutions rested. These certainties were assumptions of which we are barely aware, assumptions which lend current institutional structures a patina of inevitability. These certainties generated the sense that people couldn’t possibly do without such tools or institutions, even if they were, in fact, relatively modern innovations. And in this next phase of his career, Illich set out to trace the origins of these certainties, which, in his view, were anything but.
Chief among these certainties was the presumption of scarcity. In fact, in 1980, Illich announced his intention to write a history of scarcity. That history never materialized, but a number of pieces of that larger project were published in a variety of contexts.
The presumption of scarcity undermined the more hopeful possibility of a convivial society, which Illich had outlined in Tools for Conviviality. However, by the time he published Shadow Work in the early 1980s, Illich had been encouraged by the growth of small pockets of conviviality in a variety of communities across the globe. But he also worried that these outposts of more convivial social arrangements were threatened by the encroachment of formal economic structures, which were necessarily premised on the idea of scarcity. As Illich put it, he wanted to defend “alternatives to economics” not simply “economic alternatives.”
What Illich sought to defend was what he called the “vernacular domain.” Of his use of the word vernacular, Illich explained, “I would like to resuscitate some of its old breath to designate the activities of people when they are not motivated by thoughts of exchange.” The term, as he meant to use it, “denotes autonomous, non-market related actions through which people satisfy everyday needs—the actions that by their own true nature escape bureaucratic control, satisfying needs to which, in the very process, they give specific shape.”
As Cayley explains in his invaluable guide to Illich’s thought,
By naming a vernacular domain, Illich hoped to do two things: to endow activities undertaken for their own sake with a specific dignity and presence and to distinguish these activities from things done in the shadow of economics. He wanted to highlight that portion of social life that had been, remained, or might become immune to the logic of economization. By offering a name, he hoped to secure for those pursuing alternatives a place to stand—a respite from management, economization, and professionalization where new commons could take shape.
At this point, I suspect you have a good sense of where this is going. Attention discourse proceeds under the sign of scarcity. It treats attention as a resource, and, by doing so, maybe it has given up the game. To speak about attention as a resource is to grant and even encourage its commodification. If attention is scarce, then a competitive attention economy flows inevitably from it. In other words, to think of attention as a resource is already to invite the possibility that it may be extracted. Perhaps this seems like the natural way of thinking about attention, but, of course, this is precisely the kind of certainty Illich invited us to question.
I can hear the rejoinders taking shape, of course: But attention is scarce. Right now I’m giving it to your writing and not to something else. I have only so many waking hours, and so much to which I must or would like to give my attention. At any given moment, I’m likely to find my attention divided and fragmented. Etc.
Given the intuitive force of these claims, further variations of which I suspect you can readily supply, is the claim that we have all the attention we need even plausible?
As I’ve thought about it, I’ve come to think that it is, but we may need to reevaluate more than just how we think about attention in order to see it as such.
So here is a proposition for you to consider: you and I have exactly as much attention as we need. In fact, I’d invite you to do more than consider it. Take it out for a spin in the world. See if proceeding on this assumption doesn’t change how you experience life, maybe not radically, but perhaps for the better. And the implicit corollary should also be borne in mind. If I have exactly as much attention as I need, then in those moments when I feel as if I don’t, the problem is not that I don’t have enough attention. It lies elsewhere. (There is an additional consideration, which is that I’ve failed to cultivate my attention, but, again, this is not a question of scarcity.) In any case, I obviously can’t make any promises, but, you may find, as I have of late, that refusing the assumption of scarcity can be surprisingly liberating.
There are obstacles, of course. Force of habit chief among them. And, as we all know, attention feels scarce to the degree that we yield to the imperatives of telepresence, which is to say the imperatives of being digitally dispersed, of anchoring attention to a plane other than or in addition to that of bodily presence. Relatedly, resisting the assumption of scarcity probably presumes the acceptance in principle of benevolent limits, limits that, far from amounting to constraints to be overcome, are, in fact, the necessary parameters of our well-being. It is also true, of course, that I’m presuming a measure of agency that is simply not available to everyone in equal measure, but even in such cases, the problem is not one of attention scarcity, but of unjust or unequal social arrangements. All of this said, however, it still seems to me that one could get rather far in the right direction by refusing to think of attention as a scarce resource.
But it may be, too, that my initial proposition requires a qualification. Let’s put it this way: you and I have exactly as much attention as we need at any given moment provided that at that moment we also know what it would be good for us to do.
This qualification also stems from Illich’s insights, so allow me to elaborate. His crusade against the colonization of experience by economic rationality led him not only to challenge the assumption of scarcity and defend the realm of the vernacular but also to studiously avoid the language of “values” in favor of talk about the “good.” He believed that the good could be established by observing the requirements of proportionality or complementarity in a given moment or situation. The good was characterized by its fittingness. Illich sometimes described it as a matter of answering a call as opposed to applying a rule.
The idea of the vernacular, the preference for the language of the good rather than values, and resistance to the presumption of scarcity are all related in Illich’s thinking. In 1988, in a conversation with David Cayley, Illich says that he has “become increasingly aware of the question: What happened when the good was replaced by values?”
“The transformation of the good into values,” he answers, “of commitment into decision, of question into problem, reflects a perception that our thoughts, our ideas, and our time have become resources, scarce means which can be used for either of two or several alternative ends. The word value reflects this transition, and the person who uses it incorporates himself in a sphere of scarcity.”
A little further on in the conversation, Illich explains that value is “a generalization of economics. It says, this is a value, this is a nonvalue, make a decision between the two of them. These are three different values, put them in precise order.” “But,” he goes on to explain, “when we speak about the good, we show a totally different appreciation of what is before us. The good is convertible with being, convertible with the beautiful, convertible with the true.”
Interestingly, when Cayley asks Illich if the language of the good is recoverable, Illich responds, “Between the two of us, at this moment, yes!”
Between the two of us. At this moment. There is a personal, even intimate quality about the apprehension of the good. Illich goes on to explain how, in that specific case, it is borne out of their mutual friendship and the conditions of their conversation. To draw this more directly into our present theme, Illich and Cayley each have just as much attention as they need in the moment. The joy of their conversation, the resonance of their encounter, to borrow another formulation, may tacitly derive from the sense that there is nothing else to which they ought to be giving their attention because, in that moment, their attention is ordered toward the good.
If I may, in turn, speak about this in a rather personal vein, given the contours of my own life, I feel the tensions we’ve been exploring most acutely in relation to my children. My guess is that if you are a parent yourself, you may be nodding along in agreement. This has especially been true when I have, like so many others, found myself “working” from home. What better illustration, in my experience anyway, of the contrast between the realm of value and the realm of the good. But the line cannot be as clearly drawn as one might suspect. It is not simply that my work lies in the realm of value and my children in the realm of the good. At a given moment it may be good for me to attend to my work, and at another the good requires that I set my work aside to attend to my children. My point all along has been that I have just as much attention as I need in either case, so long as I can be responsive to what is good and my circumstances enable me to be responsive in this way. The myriad factors that complicate matters do not entail a scarcity of attention.
Re-framings, of course, only get us so far, and I would be happy to hear how this particular reframing of attention serves you, or how it falls short in your view.
There is one other path that I wanted to pursue, but my sense is that it would be good (!) to bring this installment to a close. Next time, however, I will come back to this discussion and consider how attention discourse fails precisely by talking about attention as if it were an abstract resource rather than something that is intimately tied to our bodily senses.
Over the years, I’ve thought on and off about silence in the context of digital media. Mostly, this has taken the form of commending what came to be called strategic silence. The idea being that, given the dynamics of the attention economy, it is sometimes better to pass over certain developments in calculated silence than it is to comment on them or even to speak out against them. At other times I’ve commented on how the structure of social media generates an imperative to speak, and how in times of crisis and tragedy this imperative to speak feels especially disordered.
More recently, however, I’ve come to think that it is impossible to be silent online.
I don’t mean that it’s really hard and that we lack the willpower to be silent. I mean that it is, quite literally, impossible.
“No, it’s not,” you may be thinking just now, “I do it all the time. It even has a name: it’s called lurking.”
I would propose, however, that we distinguish between the mere absence of speech and what might properly be called silence. Perhaps it would be more precise to say that it is impossible to enter into silence online.
Saying nothing, in other words, is not the same thing as silence. Silence is felt. It is meaningful. It is not mere negation. In fact, it can be, as we shall see, eloquent. But, and here I suppose is the crux of the matter, this kind of silence presupposes bodily presence. Silence, in the way that I’m encouraging us to think of it, emanates from the body taken whole.
Consider, for example, how even the word we’ve landed on for describing the practice of being online but saying/posting nothing, lurking, ordinarily suggests something rather untoward, even sinister, precisely because in non-digitized settings it conveys the sense of watching without being seen, that is, of a presence that hides itself, that fails to materialize in the sight of others.
To be clear, my interest here is not to police how the word silence gets used. You are perfectly within your rights to go on using the word silence however you please. Rather, I’m taking the opportunity presented by digital media to reconsider an aspect of analog experience that might not have been fully appreciated—a generally fruitful exercise, in my view, the point of which, I should add, is simply to see both digital and analog forms of communication more clearly for what they are or for what they can and cannot be.
I have lately been helped in this direction by a couple of short, less well-known pieces by Ivan Illich, a couple of deep cuts, if you will. In the first, “Silence is a Commons,” Illich reflects upon the moment he was brought, within weeks of being born, to his grandfather’s estate off the coast of Dalmatia. He noted that little had changed on the estate since its establishment in the late Middle Ages. Then, however, something did change. “On the same boat on which I arrived in 1926,” Illich explained,
the first loudspeaker was landed on the island. Few people there had ever heard of such a thing. Up to that day, all men and women had spoken with more or less equally powerful voices. Henceforth this would change. Henceforth the access to the microphone would determine whose voice shall be magnified. Silence now ceased to be in the commons; it became a resource for which loudspeakers compete. Language itself was transformed thereby from a local commons into a national resource for communication. As enclosure by the lords increased national productivity by denying the individual peasant to keep a few sheep, so the encroachment of the loudspeaker has destroyed that silence which so far had given each man and woman his or her proper and equal voice. Unless you have access to a loudspeaker, you now are silenced.
Illich would retain a lifelong aversion to using a microphone. And, before proceeding any further, let me say that I don’t think Illich is suggesting that there had never been other, more heavy-handed ways of silencing people before the advent of loudspeakers.
The idea of silence as a commons, as Illich described it here, suggests to me a shared space into which you and I might enter and have just as much of a chance of being heard as anyone else. Technologies that augment the human voice empower those who possess them at the expense of those who do not, setting off the escalatory dynamics that eventually generate counterproductivity. To put it that way, as long-suffering readers know by now, is to use standard Illichian terminology. Put otherwise, we might say that tools that augment speech trigger an arms race for ever more powerful tools that do the same until finally everyone is thus provisioned, with the result being that human speech itself begins to break down. To give everyone a loudspeaker is to assure that no one can be heard.
Illich’s anecdote is, of course, a provocative reversal of the usual way that new media tend to be presented as a necessarily democratizing and empowering force, and it seems closer to the mark as the events of the last decade or so have illustrated. The ostensible promise of social media was that anyone’s voice could now be heard. Whether anyone would be listening has turned out to be another matter altogether, as have the society-wide consequences. Naturally, this is not to deny that in some cases social media can serve the interest of marginalized communities. It is only to suggest the rather obvious point that, taken as a whole, its consequences are more complicated, to put it mildly.
“Just as the commons of space are vulnerable and can be destroyed by the motorization of traffic,” Illich went on to argue, “so the commons of speech are vulnerable, and can easily be destroyed by the encroachment of modern means of communication.” Illich was presenting these comments in 1982, and he had emerging electronic and computerized modes of communication chiefly in mind.
The issue he proposed to his audience for discussion was this:
how to counter the encroachment of new, electronic devices and systems upon commons that are more subtle and more intimate to our being than either grassland or roads - commons that are at least as valuable as silence.
“Silence, according to western and eastern tradition alike,” he went on to add,
is necessary for the emergence of persons. It is taken from us by machines that ape people. We could easily be made increasingly dependent on machines for speaking and for thinking, as we are already dependent on machines for moving.
There seem to be two distinct concerns for Illich. The first is that we lose the commons of which silence is an integral part, and with it a measure of freedom and agency. The second, concurrent with the first, is that you and I may find it increasingly hard to be heard even as we are given more and more tools with which to speak. Alternatively, we might also distinguish between silence as a space of possibility and silence as itself a good to be defended, something we need for its own sake. Illich is reminding us yet again that what we may need is not more of something, in this case words, but less.
I would say, too, that the temptation to be resisted, if I may put it that way, is that of reducing human interaction to a matter of information transfer, something that can be readily transacted without remainder through technological means. This is the message of the medium, in McLuhanist terms: becoming accustomed to electronic or digitized forms of communication, I forget how much of what is involved in being understood by another cannot be encoded in symbolic form.
A second early essay by Illich helped me to think about silence from yet another perspective: silence as an indispensable element of human conversation. In the 1960s, Illich was involved in leading intensive language courses in Mexico, mainly for clergy interested in learning Spanish. Illich prepared brief talks for participants, and one such talk has been published as “The Eloquence of Silence.”
The title itself tells you much of what Illich will go on to say: silence can be eloquent, it can, paradoxically, “speak.” As Illich puts it in his opening remarks, “Words and sentences are composed of silences more meaningful than sounds.” Thus, as Illich explains, “It is … not so much the other man’s words as his silences which we have to learn in order to understand him.” And a bit further on, he adds, “The learning of the grammar of silence is an art much more difficult to learn than the grammar of sounds.”
Illich is presenting his students a view of language learning that renders it an ethical as well as a cognitive undertaking. He says, for example, that “to learn a language in a human and mature way is to accept the responsibility for its silences and for its sounds.” In his brief comments introducing the piece, Illich goes so far as to argue that “properly conducted language learning is one of the few occasions in which an adult can go through a deep experience of poverty, of weakness, and of dependence on the good will of another.”
Illich goes on to identify three different kinds of silences as well as their destructive and degrading counterfeits. I won’t walk us through each of these, but I will draw your attention to a few portions of Illich’s discussion. The primary context for what Illich has to say is the face-to-face encounter, and we would do well, in my view, to take it to heart even if we’re not presently involved in language learning. It’s obvious that Illich is not just commending a technique for learning a language but a way of becoming a better human being. Recalling where we began, however, I would also encourage us to consider how the forms of silence Illich commends fare in the context of social media and how we might adapt our use and expectations of social media accordingly.
“First among the classification of silences,” Illich tells his students, “is the silence of the pure listener … the silence through which the message of the other becomes ‘he in us,’ the silence of deep interest.” Illich adds that “the greater the distance between the two worlds, the more this silence of interest is a sign of love.”
Here Illich is encouraging a silence grounded in humility, a silence that arises not from a desire to be heard but from a desire to hear and to understand.
A second kind of silence is the silence that precedes words, a silence that is a preparation for speech. It involves a patience that deeply considers what ought to be said and how, one that troubles itself over the meaning of the words to be used and proceeds with great care. This silence is opposed, according to Illich, by all that would have us rush to speak. “The silence before words,” Illich adds, “is also opposed to the silence of brewing aggression which can hardly be called silence—this too an interval used for the preparation of words, but words which divide rather than bring together.”
There is a third kind of silence (“this is the silence of love beyond words,” Illich explains), but I will leave it to you to read what he has to say about that.
In the first two cases, Illich commends virtuous and meaningful silences that pass between people as they seek to be understood, indeed silences upon which meaning may very well depend. While Illich’s comments focus on speakers of different languages, it seems to me that what he has to say should inform how we relate to others even when we share a common language.
But, and here again is the point I’ve been driving at throughout this post, such silences can take shape only when we are in the presence of another, although even then, of course, they may fail to materialize. They seem to me to be the kind of silences that are mutually felt and acknowledged, that are a function not merely of the ceasing of sound but of a body at ease or eyes that remain fixed. These are silences which assure the other that they are being heard, not ignored. Silences that, if attended to closely and with care, disclose rather than veil, clarify rather than obfuscate. They gather rather than alienate.
By contrast, it is striking how frequently the kinds of silences we do encounter in the context of digital media occasion anxiety and misunderstanding and invite hostility.
And let me say again that this is not intended as a brief against social media. It is rather a way of exploring at least one reason why the experience of social media can take on such a dispiriting quality. My suggestion here is that it does so, in part, because we are forced to make do without meaningful silences.
Maybe it is possible to bring the spirit of such silences to bear on exchanges that unfold on social media, but it seems that we are then working against the grain of the medium, seeking a fullness of experience in the absence of the materiality that sustains it. But perhaps that is all that we can do. To remember, in its absence, the silence that is not merely an absence. At least it seems to me that we are in a better position to proceed if we are at least aware of what we are missing when we seek to speak and to be heard online.
Coda: Most of this was written over the last day or two. It was finished, however, in the immediate aftermath of yet another tragic shooting, this one in Boulder, Colorado. The precise number of the dead has not been disclosed, but it appears that several lives have been lost. One of my first reflections on silence and social media was occasioned by the shooting at Sandy Hook Elementary in 2012. Nearly ten years later it seems that we’ve not come very far. At least it seems as if I am still trying to say more or less what I sought to say then:
We know that our words often fail us and prove inadequate in the face of the most profound human experiences, whether tragic, ecstatic, or sublime. And yet it is in those moments, perhaps especially in those moments, that we feel the need to exist (for lack of a better word), either to comfort or to share or to participate. But the medium best suited for doing so is the body, and it is the body that is, of necessity, abstracted from so much of our digital interaction with the world. With our bodies we may communicate without speaking. It is a communication by being and perhaps also doing, rather than by speaking.
[…]
Embodied presence also liberates us from the need to prematurely reach for explanations and solutions — for an answer. If I can only speak, then the use of words will require me to search for sense. Silence can contemplate the mysterious, the absurd, and the act of grace, but words must search for reasons and solutions. This is, in its proper time, not an entirely futile endeavor; but its time is usually not in the aftermath. In the aftermath of the tragic, when silence or “being with” or an embrace may be the only appropriate responses, then only embodied presence will do. Its consolations are irreplaceable.
Back in 2019, Colin Horgan published an essay discussing the role of convenience in shaping our techno-social order. “It’s convenience, and the way convenience is currently created by tech companies and accepted by most of us,” Horgan argued, “that is key to why we’ve ended up living in a world we all chose, but that nobody seems to want.” While I’m inclined to qualify the “we all chose” element of this claim, particularly under pandemic conditions, the line nonetheless aptly captures what I suspect may be a familiar feeling, the feeling, that is, that we are somehow working at cross-purposes against ourselves. It’s the feeling that our efforts, however well-intentioned or feverish, are not only inadequate but somehow self-defeating. Or, alternatively, it’s the lack of satisfaction lampooned in the popular comedic bit from a few years ago about how “everything is amazing but nobody is happy.”
Of course, part of the problem is that everything is not, in fact, amazing. Being able to access the internet on a transatlantic flight, a key element of that comedy routine, hardly amounts to a just society conducive to human flourishing. And, naturally, many among us might feel as if they are always spinning their wheels and getting nowhere because existing social structures are stacked against them, often deliberately so. Having acknowledged as much, though, it’s worth exploring another dimension of the problem, what we might call the paradox of control, which is the subject of German sociologist Hartmut Rosa’s recent book, The Uncontrollability of the World. It’s a short book, coming in at just over 100 pages, but it develops what is, in my view, an essential insight into one of the key assumptions structuring modern society.
“The driving cultural force of that form of life we call ‘modern,’” Rosa writes, “is the idea, the hope and the desire, that we can make the modern world controllable.” “Yet,” he quickly adds, “it is only in encountering the uncontrollable that we really experience the world. Only then do we feel touched, moved, alive. A world that is fully known, in which everything has been planned and mastered, would be a dead world.”
In other words, the more we seek to control the world, the more it will fail to speak to us, and, consequently, the more alienated and dissatisfied we will feel. I might even put it this way: Rosa aims to show that how we set about to find meaning, purpose, or happiness more or less guarantees that we will never find any of them. The rest of the book is an elaboration of this basic dynamic.
Philosophically, Rosa develops his thesis from the observation that human experience is grounded in the perception that “something is present,” and that this awareness even “precedes the distinction between subject and world.” Gradually, we learn to distinguish between the self and the world, but these are “two poles … of the relationship that constitutes them.” The question for Rosa is “how is this something that is present constituted?” In other words, how does our mode of relating to the world shape our perception of it?
Rosa’s “guiding thesis” on this score is that “for late modern human beings, the world has simply become a point of aggression,” an apt phrase that seemed, sadly, immediately useful as a way of characterizing what it feels like to be alive right now. The world becomes a series of points of aggression when, as Rosa puts it, “everything that appears to us must be known, mastered, conquered, made useful.” If our response to this is a measure of befuddlement—how else would we go about living if not by seeking to know, to master, to conquer, to make useful?—then it would seem that Rosa is probably right to say that this is a bedrock assumption shaping our thinking rather than being a product of it.
And, as he goes on to say, because we encounter the world in this way, then “the experience of feeling alive and of truly encountering the world—that which makes resonance possible—always seems to elude us.” We’ll return momentarily to the idea of resonance, a critical concept to which Rosa devoted an earlier book, but for now we should simply note that, in Rosa’s view, a failure to experience resonance “leads to anxiety, frustration, anger, and even despair, which then manifest themselves, among other things, in acts of impotent political aggression.”
Rosa acknowledges that relating to the world primarily by seeking to control or manage it is hardly a new development. This “creeping reorganization of our relationship to the world,” Rosa writes, “stretches far back historically, culturally, economically, and institutionally.” Indeed, the modern project, dating back at least to the 17th century, particularly in its techno-scientific dimensions, can be interpreted as a grand effort to tame nature and bring it under human control. And, of course, as C. S. Lewis observed in The Abolition of Man, the drive to control nature was eventually turned on humanity itself.
But in Rosa’s view, this “creeping reorganization” has, in the 21st century, “become newly radicalized, not least as a result of the technological possibilities unleashed by digitization and by the demands for optimization and growth produced by financial market capitalism and unbridled competition.” For example, Rosa cites the various tools we deploy to measure and optimize our bodies: “We climb onto the scale: we should lose weight. We look into the mirror: we have to get rid of that pimple, those wrinkles. We take our blood pressure: it should be lower. We track our steps: we should walk more.” “We invariably encounter such things,” Rosa notes, “as a challenge to do better.” A bit further on, Rosa adds, “More and more, for the average late modern subject of the ‘developed’ western world, everyday life revolves around and amounts to nothing more than tackling an ever-growing to-do list. The entries on this list constitute the points of aggression that we encounter as the world … all matters to be settled, attended to, mastered, completed, resolved, gotten out of the way.”
Why are we like this? Rosa provides a two-fold answer: “the normalization and naturalization of our aggressive relationship to the world is the result of a social formation, three centuries in the making, that is based on the structural principle of dynamic stabilization and on the cultural principle of relentlessly expanding humanity’s reach.”
By “the structural principle of dynamic stabilization,” Rosa means that “the basic institutional structure of modern society can be maintained only through constant escalation.” Modern society according to Rosa “is one that can stabilize itself only dynamically, in other words one that requires constant economic growth, technological acceleration, and cultural innovation in order to maintain its institutional status quo.”
I trust that, if you’ve been reading this newsletter for a while, this claim about dynamic stabilization will have struck you as vaguely familiar. It is more or less what Ivan Illich was arguing nearly fifty years ago. In fact, I think it would be fair to describe Rosa’s book in its entirety as Illich in another key. Just a few pages in, I turned to the index to see if Illich was cited because it was already evident that there would be a deep affinity between what Rosa was arguing and Illich’s work, something I continued to think right down to the last page. Alas, he was not, and I will resist the temptation to note the various points at which Illich anticipated Rosa’s argument. (Dear reader: there were many.)
Dynamic stabilization, then, means that should our institutions cease growing and expanding, society would become unstable. It’s worth noting that this is not simply the way societies work. Indeed, modern society, for better and for worse, may be unique in this regard. I think that it was in the aftermath of September 11th that this point was driven vividly home to me by the immediate and evidently panicked insistence that, above all else, Americans should not cease buying stuff. Sure, hug your loved ones, but not too long because you’ve got 0% financing to take advantage of. The panic, of course, was not altogether misplaced. I’m not an economist and thus happy to be corrected on this score, but it seemed to me then, as it does now, that should any sizable portion of the population suddenly decide that their well-being was not served by buying more things, the modern economy would collapse.
This is why Rosa astutely observes that “this escalatory perspective has gradually turned from a promise into a threat.” “What generates this will to escalation,” he explains, “is not the promise of improvement in our quality of life, but the unbridled threat that we will lose what we have already attained.” “The game of escalation,” Rosa argues, “is perpetuated not by a lust for more, but by the fear of having less and less. Whenever and wherever we stop to take a break, we lose ground against a highly dynamic environment, with which we are always in competition.” Rosa invites us to consider how a growing number of parents in the “developed” world claim that they are motivated not by the hope that their children will have it better than they do but by the fear that they might have it worse.
This matters for Rosa’s overall argument because it means that societies that can only stabilize themselves dynamically “are structurally and institutionally compelled to bring more and more of the world under control and within reach, technologically, economically, and politically: to develop resources, open markets, activate social and psychological potentials, enhance technological capabilities, deepen knowledge bases, improve possibilities of control, and so on.”
This structural imperative is coupled with the cultural assumption that “our life will be better if we manage to bring more world within our reach: this is the mantra of modern life, unspoken but relentlessly reiterated and reified in our actions and behavior.” According to Rosa the “categorical imperative of late modernity” is “Always act in such a way that your share of the world is increased.” Rosa goes so far as to suggest that the history of technology is driven by the “promise of increasing the radius of what is visible, accessible, and attainable to us.” This amounts, in Rosa’s view, to a desire to render more and more of the world controllable.
From here, Rosa lays out what he identifies as the four dimensions of controllability:
* making the world visible, knowable, expanding our knowledge of it
* making the world physically reachable or accessible
* making the world manageable
* making the world useful
Modern science, technology, economic development, and the political-administrative apparatus all contribute to making the world controllable along these four dimensions. In the political-administrative sphere, Rosa adds that “the struggle for power can be understood in all respects as a struggle for control.” “Power,” he continues, “always manifests itself in the expansion of one’s own share of the world, often at the expense of others.”
Next, Rosa turns to what he calls the paradoxical flipside of the modern quest for control. This “institutionally enforced program,” this “cultural promise” of making the world more controllable, “not only does not work but in fact becomes distorted into its exact opposite.” Or, to put this into Illichian terms, it becomes counterproductive, first frustrating the attainment of the goal it seeks to achieve and then becoming socially destructive.
“The scientifically, technologically, economically, and politically controllable world,” Rosa argues, “mysteriously seems to elude us or to close itself off from us. It withdraws from us, becoming mute and unreadable. Even more, it proves to be threatened and threatening in equal measure.” The relation to the world that emerges from a desire to control is characterized by alienation or worldlessness, it is, Rosa writes, “a relation of relationlessness in which subject and world find themselves inwardly unconnected from, indifferent toward, and even hostile to each other.”
Early on, Rosa had put the matter rather starkly: “A world that is fully known, in which everything has been planned and mastered, would be a dead world.” This particular formulation recalls Simone Weil’s observation in her profound analysis of the Iliad that “force” is “that x that turns anyone who is subjected to it into a thing. Exercised to the limit, it turns man into a thing in the most literal sense: it makes a corpse out of him.” I’m suggesting, of course, that we think of what Weil calls “force” as being not altogether dissimilar from, indeed, an essential element of, the drive to bring the world under control. But, of course, we don’t really want a world that is, practically speaking, dead to us.
Perhaps the most valuable part of the book, in my view, commences at this point in the argument, when Rosa describes the alternative to relating to the world as a point of aggression to be mastered, managed, and controlled. This alternative mode of relation Rosa calls resonance. The prior book by that title clocks in at 450 pages. Here Rosa gives us what amounts to a 30-page primer.
What Rosa calls resonance is a way of relating to the world such that we are open to being affected by it, can respond to its “call,” and then both transform and be transformed by it—adaptive transformation as opposed to mere appropriation. “The basic mode of vibrant human existence,” Rosa explains, “consists not in exerting control over things but in resonating with them, making them respond to us—thus experiencing self-efficacy—and responding to them in turn.”
Consider, by way of example, something as prosaic as an encounter with another person. Such an encounter will be resonant only when we offer ourselves to the encounter in such a way that we can be affected or moved by the other person and when we, in turn, can respond in kind to this call. As Illich might say, it is a willingness to be surprised by the encounter and to receive ourselves back as a gift of the other. Indeed, Rosa even draws our attention, as Illich does so often, to the gaze. “Our eyes,” Rosa writes, “are windows of resonance. To look into someone’s eyes and to feel them looking back is to resonate with them.”
As a result, such encounters transform both of the people involved. One key to such encounters, however, is a measure of uncontrollability. As some of us may know from experience, any effort to manufacture a “resonant” encounter with another person is almost certainly destined to fail. Similarly, if an object or a person were altogether subject to our control or manipulation, the experience of resonance would also fail to materialize. They would not call to us or be able to creatively respond to us. Indeed, Rosa argues, as we’ve seen, that whatever is wholly within our control we experience as inert and mute. Thus, the farther we extend the imperative to control the world, the more the world will fail to resonate, the more it falls silent, leaving us alienated from it and, to the degree that we come to know ourselves through our relation to a responsive other, from ourselves as well.
Interestingly, Rosa, who otherwise develops his argument in strictly sociological language, nonetheless notes an analogy to religious insights. “Religious concepts such as grace or the gift of God,” Rosa writes, “suggest that accommodation cannot be earned, demanded, or compelled, but rather is rooted in an attitude of approachability to which the subject-as-recipient can contribute insofar as he or she must be receptive to God’s gift or grace.” “In sociological terms,” he adds, “this means that resonance always has the character of a gift.”
Along these lines, one recalls, too, Arendt’s warning in the prologue to The Human Condition: “The future man, whom the scientists tell us they will produce in no more than a hundred years, seems possessed by a rebellion against human existence as it has been given, a free gift from nowhere (secularly speaking), which he wishes to exchange, as it were, for something made by himself.” Something, we might add, that he can control and manage precisely because he has made it. On the other hand, the uncontrollability of resonance, Rosa insists, means that “there is no method, no seven- or nine-step guide that can guarantee that we will be able to resonate with people or things.”
Rosa goes on to elaborate on the nature of resonance and further specify how exactly it is undermined by the impulse to control and manage our experience. But the crux of the matter is relatively straightforward: “An attitude aimed at taking hold of a segment of the world, mastering it, and making it controllable is incompatible with an orientation toward resonance. Such an attitude destroys any experience of resonance by paralyzing its intrinsic dynamism.”
From here, Rosa walks us through a series of scenarios arranged around the progression of a life from birth to death in order to illustrate how the paradox of control plays out along the way. As the father of two young children, I was, naturally, especially interested in his discussion of child-rearing and education. “As in the case of childbirth or home security, here too,” Rosa writes, “the measurability and manageability of multiple processes seems not to diminish anxiety but to heighten it.”
Channeling Illich once again, unwittingly I suppose, Rosa goes on to describe what might be a familiar dynamic:
“This [heightening of anxiety] can be seen in modern parents’ concern that their child’s every discomfort, every scratch, every abnormality in his or her growth, speech, motor skills, or communicative faculties requires medical attention. This dependence on experts and on medical devices undermines parents’ expectations of self-efficacy and, consequently, their ability to experience it. It is no longer parents themselves who listen to their children’s needs and then (in resonance with them) seek out an appropriate response, but rather doctors and experts acting on the basis of reliable data, thus making developmental processes as controllable as possible.”
It is in this section, too, that Rosa supplies us with the useful term parameterized, by which he means “made quantitatively measurable one way or another.” So, speaking of the parameterization of the various aspects of child development, Rosa observes, “for each one there are countless experts, guidebooks, and support programs.” It is as if the mode of relating entailed by parameterization disables our capacity to make reasonable and relatively confident judgments about what one ought to do and instead throws us at the mercy of countless competing and inconclusive authorities.
Not surprisingly, Rosa rightly notes that “technologies and processes associated with digitalization have fundamentally transformed our lives by making nearly the entire world, as it is represented in our consciousness, accessible and controllable in historically unprecedented ways.” Digital technology has especially abetted the parameterization of human experience with every new sensor and data-gathering device, rendering ever more aspects of our own experience as points of aggression. “It is all but impossible,” Rosa observes, “to keep track of the number of steps one takes in a day without being tempted to increase or optimize that number.” And so it is with whatever we can measure and quantify. In this way, “our relationship to our own bodily processes and psychological states has thus been transformed … from one of flexible, self-efficacious listening and responding to one of technological and medical calculation and control.”
It’s worth noting, I think, that Rosa’s examples tend to focus on how you and I might deploy digital technologies to bring more of the world ostensibly under our control. What he might also have explored at greater depth is the degree to which we are not the masters of these systems of control, indeed, that very often they open up pathways for others to control us. I don’t mean this in some weird conspiratorial sort of way. I mean simply that the same technologies we deploy to parameterize our experience can be used to finely calibrate the worker in the workplace as if she were just another part of the machinery or, alternatively, to exclude someone from health insurance coverage based on their health parameters.
Rosa draws the book to a close with a discussion of “the monstrous return of the uncontrollable.” “Despite the unpredictability and uncontrollability of our circumstances,” he warns, “we are still held responsible for results that we are supposed to have been able to foresee, which gives rise to anxiety.” “Controllability,” he adds, “in theory thus transforms uncontrollability in practice into a menacing ‘monster.’”
A little further on, he makes the following observation: “the impression of a world become increasingly politically uncontrollable is further reinforced by the similarly uncontrollable dynamism of media and social networks, which have rapidly become capable of provoking previously unimagined, massively consequential waves of outrage or excitement that are unpredictable and uncontrollable in terms of how they arise, how they pass away, and how they interact with one another.”
In one of his Sabbath poems, Wendell Berry reminded us that “we live the given life, not the planned.” I can’t think of a more pithy way of putting the matter. By the “given life,” of course, Berry does not mean what is implied by the phrase “that’s a given,” something, that is, which is taken for granted. Rather, Berry means the gifted life, the life that is given to us. We are presented with a choice, then: we can receive the world as a gift, which does not preclude our acting upon it and creatively transforming it, or we can think of it merely as raw material subject to our managing, planning, predicting, and controlling. Rosa helps us to see, quite precisely, why the latter path will be marked by frustration, anxiety, and alienation. So I will give him the last, more hopeful word:
“If we no longer saw the world as a point of aggression, but as a point of resonance that we approach, not with an aim of appropriating, dominating, and controlling it but with an attitude of listening and responding, an attitude oriented toward self-efficacious adaptive transformation, toward mutually responsive reachability, modernity’s escalatory game would become meaningless and, more importantly, would be deprived of the psychological energy that drives it. A different world would become possible.”
I’m forgoing links this time around out of a desire to just get this installment out and into your inboxes. I will note, though, that I recently had the pleasure of talking with Andrew McLuhan, Marshall’s grandson and the director of the McLuhan Institute, as well as David Sax, the author of The Revenge of Analog, on Quarantime, a podcast hosted by Peter Hirshberg and Mickey McManus. You can listen to it here. I also recently enjoyed a conversation with Elise Lonich Ryan of the Beatrice Institute, which you can find here. I readily confess to being somewhat ill at ease with the podcast format, in part, because I can’t control what transpires! But I’ve been glad for these conversations and honored by the invitations.
Cheers, Michael
If you’ve joined the Convivial Society over the past two or three months, this installment requires a brief introduction. I’m always ready to acknowledge my extensive debts to an older generation of scholars and writers, who have shaped my thinking about technology. Among those scholars, Ivan Illich has played an especially important role. The newsletter’s title, for example, pays tribute to his Tools for Conviviality. So last summer, as we were growing accustomed to life in Zoom-world, I began a newsletter-based reading group around Illich’s work, and that group led to an ongoing series of conversations with some of Illich’s friends and colleagues, all of whom have been extraordinarily generous with their time and encouragement. This installment, then, is the latest in that series.
For over thirty years, David Cayley worked for the Canadian Broadcasting Corporation, producing numerous interview and documentary programs, including two programs devoted to Illich’s work. The first of these also became the book Ivan Illich in Conversation, which remains an excellent introduction to Illich’s thought. The second became Rivers North of the Future, which provides a sketch of Illich’s unique and stimulating interpretation of the modern world.
In our conversation you will hear about the backstory to those interviews and about the relationship between Cayley and Illich, which took shape around them. And, of course, you’ll hear about a lot more, too.
You can also find the audio of the Illich interviews on Cayley’s website, which includes a remarkable archive of his programs over the years. (Notable examples include his interviews with George Grant, Charles Taylor, and Richard Sennett.)
Finally, Cayley is the author of a forthcoming intellectual biography of Illich, which I feel pretty confident saying will be the best guide to Illich’s life and thought for years to come. Ivan Illich: An Intellectual Journey will be published this month by Penn State University Press, and you can order your copy here [30% discount code: NR 21].
My conversation with Cayley follows earlier conversations with Carl Mitcham, Gustavo Esteva, and Gov. Jerry Brown.
I remain grateful to each of them, as I am to David Cayley, for their hospitality.
I trust you’ll enjoy the conversation.
Cheers,
Michael
Welcome to an unusually brief installment of the Convivial Society. An analogy came to mind, and you can tell me what you think. I vowed to keep it short, under 1,000 words. So this remains suggestive, and I’ll be eager to see if it generates any interesting insights or otherwise proves to be a helpful framing.
In 1979, the late sociologist Peter Berger published a book titled The Heretical Imperative. As the subtitle explained, it was a book about “the contemporary possibilities of religious affirmation.” According to Berger, any form of religious affirmation in modern societies necessarily arises out of a context of pervasive religious pluralism. In such a context, choosing your religion, or choosing to have none, becomes an imperative. You can’t quite escape the awareness of having made a choice, a choice which could have been otherwise.
As it turns out, the Greek root of the word heresy can be translated as “to choose for one’s self.” Thus Berger’s heretical imperative, or the imperative to choose. Unlike in certain pre-modern settings, where you’d likely be born into a religious tradition and live your life in a rather insulated and homogeneous social setting that never gave you much occasion to question it, in the modern world getting religion, even if that means remaining faithful to your parents’ faith, is experienced as a choice you’ve made rather than something that is simply given in the nature of things.
So, just as for Berger the sociological structures of modern society generated the heretical imperative, so, too, I would like to propose, the technological structures of digital media generate the hermeneutical imperative.
Hermeneutics is the study of interpretation. It critically explores the methods we deploy to interpret texts of all sorts. It’s often associated specifically with the interpretation of religious texts or the modern tradition of philosophical hermeneutics. I’m using the term to suggest that the proliferation of media artifacts and the growing colonization of our experience by varieties of digital mediation have generated an imperative to self-consciously interpret.
Is that really a new situation, you may ask? Yes and no. It is true, I would grant, that human experience has always been marked by explicit or implicit acts of interpretation. But, three considerations …
(1) Acts of interpretation become more explicit when we confront symbolically encoded human artifacts or media objects. In a walk through the woods, I’m engaged in a certain kind of mostly pre-conscious interpretative work—reading the landscape, we might say. Walking through a museum, on the other hand, involves interpretative work of a different and more conscious nature. To the degree that our experience is mediated by digital devices, it takes on the quality of a walk through a (very weird) museum full of works of human artifice calling forth our interpretations.
(2) Also, if I’m right about our experience of digitization generating a primary experience of the Database rather than the Narrative, then the need to be always interpreting becomes all the more apparent. For one thing, digitally mediated relations themselves become media artifacts and media objects. In the face-to-face company of those we know relatively well, our interpretative labor is less acute. We likely have a repertoire of habitual interpretive paradigms on which we can draw, and the less obviously mediated character of in-person exchanges means that the interpretive work likely remains pre-conscious. However, when the person becomes an avatar communicating through text, image, meme, film, GIF, etc., then the interpretative burden intensifies.
And, of course, while most of us know that narratives require interpretation, they also supply interpretations in such a way that it is possible to naively assume that a narrative is simply relaying a transparent account of things. So, when our primary experience is of the Database, then we find ourselves in the position of supplying the interpretative labor a trusted narrative would’ve provided for us.
(3) I’ve already been hinting at a critical difference: self-consciousness. As is often the case, the conditions of digital media make explicit and conscious what had been previously implicit and unnoticed. And this development can have profound consequences, which can be difficult to describe.
Perhaps an example will help. Consider the case of someone who applies a certain style of naive literalism to a religious text, assuming that reading the Bible, for example, does not involve any “human interpretation.” When they read the text and tell you what it means, they don’t see themselves as interpreters but merely as reporters of what the text obviously says. I’ve called this a naive approach not because it is gullible, although that may also be true, but because it is unaware of itself. It has not been troubled by indeterminacy or doubt. It is aware of only one right reading, which it takes for granted. For such a person to become aware of themselves as an interpreter is to induce a crisis of faith. They come to see their own reading as just that, a reading, one among many.
So, yes, humans have always been interpreters of experience, but the nature and scope of the interpretive work has changed, and, most importantly, we’ve become aware of it.
We might say, then, that the conditions of pervasive digitization have rendered the full range of human experience a text to be interpreted. Condemned to perform ever more baroque hermeneutical maneuvers, we are deprived of the satisfactions of a naive experience of reality. Perhaps this accounts for the widely reported sense of unreality that plagues so many of us.
Meanwhile, public discourse increasingly takes on the quality of interminable debates carried on by individuals with fundamentally different hermeneutical styles and, consequently, interpretations of reality.
And, of course, there is no magisterium to settle matters for us. Indeed, those institutions that functioned analogously to the magisterium but for matters of public interest—the press, the expert class, etc.—have been rendered just another set of interpreters. They may still see themselves as the orthodox sect, of course. But that is no guarantee their interpretive authority will be recognized by others.
Berger noted that multiple responses were possible in the face of the heretical imperative, and the same is true for the hermeneutical imperative. Suffice it to say for now that the best hermeneutics require virtues of the head and the heart.
“To go in the dark with a light is to know the light.
To know the dark, go dark. Go without sight,
and find that the dark, too, blooms and sings,
and is traveled by dark feet and dark wings.”

— Wendell Berry, “To Know the Dark”
Welcome to the latest installment of the Convivial Society, especially to those of you who are relatively new around here. This is a full newsletter with a longish essay and much else. Also, this time around, I’m not addressing a crisis du jour, and I hope that comes off as a feature rather than a bug. Over this past year, I have found myself writing about current events a good deal more than usual. This is fine, but I’m glad to return to another mode of reflection, one in which I feel myself more at home. One note about the title: I’d say that this is the question ultimately raised by the essay rather than one it answers definitively. Nonetheless, I hope you enjoy it.
Reading Dante’s Divine Comedy poses any number of challenges to modern readers. This should not, I’ll quickly add, deter them from the attempt, which, in my view anyway, will more than repay the effort. In any case, one of these challenges may be the curious astronomical dimensions of Dante’s poem. You may know, for example, that each of the three cantiche that make up the poem—Inferno, Purgatorio, and Paradiso—ends on the same Italian word, stelle, or, in English, stars. Moreover, the final portion of Dante’s journey, related in Paradiso, is literally a journey through what we think of as space and what Dante and his contemporaries imagined as a series of concentric spheres with earth at its center and the realm of God beyond the farthest sphere. It’s also clear that Dante believed the starry heavens were meant to draw our eyes toward God. At one point, he refers to the stars as God’s “lure.”
But perhaps most interestingly, in the middle part of his journey, as he ascends the mountain of purgatory, which Dante imagines as an island in the southern hemisphere, the character of Dante displays a remarkable awareness of where the stars and planets are located in the sky at any given moment. He frequently alludes to the position of the sun relative to the constellations of the Zodiac, and he is intimately familiar with the paths of the planets (including the moon) through the sky as well as the seasonal position of the stars. Within the first few lines of Paradiso, he casually invites readers to imagine the time of year when the sun rises where four circles—the horizon, the equator, the zodiac, and the colure of the equinoxes—intersect to form three crosses in conjunction with Aries! I’m inclined to think that few modern readers would even know where to begin with such instructions.
I can’t speak to whether Dante’s astronomical proficiency was common in his time—after all, Dante was not exactly your average medieval Florentine—but he did have the same experience of the starry sky as human beings had enjoyed for millennia before him and centuries after. Of that much we can be sure.
As science journalist Jo Marchant explained in the opening pages of her recent book, The Human Cosmos, the earliest existing examples of Paleolithic art may very well have included elaborately coded depictions of the night sky. Marchant’s book goes on to explore the remarkable role the stars have played in human thought and culture since then. In short, human beings have paid meticulous attention to the stars and experienced them as a source of wonder, admiration, and even reverence. Seen in this light, our own relationship to the night sky appears as a remarkable, and perhaps tragic, anomaly.
The sight of the star-filled sky, a unifying human inheritance across thousands of years, has been all but lost to the majority of people who now live in urban and suburban settings. By one account 80% of Americans can no longer see the Milky Way. In 1994, when an earthquake knocked out power to much of Los Angeles in the middle of the night, some residents were so spooked by the appearance of the Milky Way above them that they called the police to report the strange phenomenon. I suspect that some of you reading this may be among those who have never seen the arc of our galaxy grace the night sky. For my part, I can count on one hand with fingers to spare the opportunities afforded to me to witness the undiminished night sky over the course of my four decades on planet earth.
I began to think about the loss of the night sky as I was reading about the latest SpaceX launch carrying another batch of Elon Musk’s Starlink satellites. SpaceX currently aims to place 12,000 of these satellites in orbit and is seeking permission to eventually place upwards of 40,000. The goal is to create a global broadband network making the internet accessible to even the earth’s most remote regions. Musk has said that, thanks to Starlink, anyone “will be able to watch high-def movies, play video games and do all the things they want to do without noticing speed.” With the latest launch a few days ago, there are now 952 Starlink satellites in orbit. By comparison, there have been about 8,000 total satellites put into orbit since the launch of Sputnik in 1957. We should note, too, that SpaceX is not alone in these efforts. Amazon’s Project Kuiper has also secured FCC approval to put 6,236 satellites in orbit.
Musk’s ambitions have caused more than a little controversy among astronomers, who fear that this blanket of satellites will hamper their efforts to study the universe from earth-based telescopes. Since the launch of the first Starlink satellites in 2019, which were brighter than 99% of the other 200 orbiting human artifacts visible to the naked eye, SpaceX has responded to these concerns with measures to darken the satellites and equip them with visors to reduce their impact on earth-based astronomical observations. But while these measures have helped to some degree, scientists say the problem remains. Starlink satellites remain brighter than a recommended 7th-magnitude threshold, which would put them beyond visibility to the naked eye.
The Starlink satellites are clearly not responsible for the loss of the starry sky. That process has been underway for more than two centuries and has been the consequence of what are now much more mundane technologies that we hardly think of at all. But I began to think of the ambitions of the Starlink project as somehow amounting to a final twist of the knife. Perhaps this is a bit too dramatic a metaphor, but if we think that the loss of the star-filled night sky is a real and serious loss with significant if also difficult to quantify human consequences, then the final imposition of an artificial network of satellites where before the old celestial inheritance had been seems rather like being tossed cheap trinkets to compensate for the theft of some precious treasure. One might also interpret the development in more symbolic terms, almost as a modern-day Tower of Babel, which is to say as a defiant and hubristic gesture of human self-sufficiency, a self-referential enclosure of the human experience, a literal immanent frame (to borrow a term from philosopher Charles Taylor).
But Starlink was only a point of departure leading me to consider the costs of the unrelenting drive toward artificial illumination, a technological development most of us now take for granted. The loss of the stars, after all, could only happen because of a more general loss of darkness. Our relationship to the night, like most human things, is not simply given. It has a history, which varies from culture to culture. The conflicting and evolving ways human cultures have thought about the night, and the darkness that accompanies it, have been the subject of numerous essays and books. (Craig Koslofsky’s Evening’s Empire: A History of the Night in Early Modern Europe comes to mind.) That said, until recently in human history, these various cultural constructions of the night arose out of a fairly uniform experience of the daily patterns of light and darkness and the ubiquitous presence of the stars, which have been the subject of scientific, philosophical, religious, and artistic interest across cultures from time immemorial. The transformation of this relatively uniform human experience began as new technologies of illumination became available throughout the modern era. In his aptly titled In Praise of Shadows, the Japanese novelist Jun’ichirō Tanizaki wrote of the attitude of the characteristically modern and Western individual: “from candle to oil lamp, oil lamp to gaslight, gaslight to electric light—his quest for a brighter light never ceases, he spares no pains to eradicate even the minutest shadow.”
The decisive turn, of course, came with the advent of electrification in the late nineteenth century. One of the best accounts of the progress of electrification in the United States, Electrifying America: Social Meanings of a New Technology, 1880-1940, was written by the historian of technology David Nye. Thomas Hughes’s Networks of Power: Electrification in Western Society, 1880-1930 is another notable account of the same.
Nye went on to devote a couple of chapters to electrification in a later book, American Technological Sublime, a work I continue to think is indispensable to understanding the role of technology in American culture. As the book’s title suggests, Nye examined how new technologies of a certain scale and power generated experiences of the sublime in those who first observed them in action. Among other technologies, Nye considers the railroad, bridges and skyscrapers, the atomic bomb, and Apollo XI. According to Nye, these technologies occasioned a quasi-religious response of awe and fear from the public, and they were often introduced with elaborate civic fanfare and ritual. The result amounted to the cultivation of a functional civil religion centered on technology, which served as a unifying force within an otherwise fractious society.
Electrification, and the electrified cityscape in particular, served as one of Nye’s case studies of the technological sublime. It would be hard to overstate the degree of wonder and fascination that electric lighting generated in those who witnessed it for the first time, often in the presence of spectacular displays of artificial lighting produced for various turn-of-the-century world’s fairs and expositions. (These fairs and expositions, by the way, also offer a fascinating window on technology and American culture from roughly 1870 to 1970. The work of Robert Rydell is especially useful here: All the World’s a Fair: Visions of Empire at American International Expositions, 1876-1916 and A World of Fairs: The Century-of-Progress Expositions.)
Nye’s thesis suggests that, in the American case at least, the loss of the night sky might be described as the surrender of the natural sublime for the sake of the technological sublime. Any lament of the loss of the night sky needs to reckon with the wonder electric lighting also elicited at its advent. But the two were far from equivalent and the costs of the exchange were not readily apparent. As is often the case, I find myself thinking that Ivan Illich’s insistence on the need to recognize thresholds of productivity is essential. The point is not to reject new technologies or the conveniences they offer, but rather to identify the limits beyond which these technologies become counterproductive and even destructive.
At the time there were, of course, some who noted that something of consequence was being lost. “We of the age of the machines,” Henry Beston wrote in the 1920s,
“having delivered ourselves of nocturnal enemies, now have a dislike of night itself. With lights and ever more lights, we drive the holiness and beauty of the night back to the forests and the sea; the little villages, the crossroads even, will have none of it. Are modern folk, perhaps, afraid of night? Do they fear that vast serenity, the mystery of infinite space, the austerity of stars? Having made themselves at home in a civilization obsessed with power, which explains its whole world in terms of energy, do they fear at night for their dull acquiescence and the pattern of their beliefs? Be the answer what it will, today’s civilization is full of people who have not the slightest notion of the character or poetry of night, who have never even seen the night.”
Curiously, this is a thoroughly modern lament. I don’t think Dante could have written it. The pre-Copernican cosmos, with the earth at its center surrounded by a series of concentric spheres on each of which a planet was embedded like a jewel, was a relatively cozy place. A man or woman looking up to the stars did not see a vast, cold, dark emptiness that made them feel small and insignificant, as we sometimes tend to do, perhaps especially to the degree that we have lost sight of the stars themselves. They saw instead a well-ordered cosmos in which they felt themselves at home. They saw, too, a realm bathed in light and, odd as it may seem to us, suffused with music—the so-called “music of the spheres” or musica universalis, itself a fascinating topic.
(An even more arcane aside: Even in the post-Copernican world, the idea of astral musical harmonies played a crucial role in the development of Kepler’s theory of elliptical orbits, see his Harmonices Mundi. It also helped Immanuel Kant improve on Newton’s account of the lunar influence on tidal patterns! You can begin to make sense of the seemingly odd grouping of the latter four of the medieval liberal arts, the quadrivium: arithmetic, geometry, music, and astronomy.)
Setting the medieval digression aside, Beston is mostly preoccupied with what we might call the abstract, unquantifiable costs incurred by electrification, to which we’ll return shortly. But there are other costs, of course, many of which we find easier to talk about and which have indeed been widely discussed, usually under the heading of “light pollution,” which the International Dark-Sky Association defines as “any adverse effect of artificial light, including sky glow, glare, light trespass, light clutter, decreased visibility at night, and energy waste.”
Paul Bogard’s 2013 The End of Night: Searching for Natural Darkness in an Age of Artificial Light is probably about as good a survey of the consequences of light pollution as you’re likely to find. Bogard traces the rise of the regime of artificial lighting and its less than benign consequences for both humans and non-humans, from the well-documented interruption of the body’s natural sleep cycles and the consequent poor health outcomes to the disruption of natural ecosystems and the waste of resources. We hardly ever think of it this way, but electrification can be understood as a massive and unprecedented social and environmental experiment. And I’d say the results are not in yet.
Much of this amounts to a stubborn refusal to acknowledge the limits of our creaturely frame and the kind of techno-social environment that it requires. As Jacques Ellul might have put it, we have built a techno-social environment that is in many respects inhospitable to human beings as such, although it serves the interest of some humans quite well; better for some that our consumption and labor be unbounded. Techno-scientific advances once sought to improve the human lot. Now they as often arise for the sake of the techno-scientific enterprise itself or the economic order that sustains it, generating spurious needs while failing to meet basic ones. In turn, whole industries and markets arise to produce techniques designed to mitigate the harm done by a human-built world whose structures and rhythms undermine the possibility of genuine human freedom and flourishing.
Moreover, darkness and the starry sky have succumbed to that all too familiar pattern whereby a public good, commonly shared or freely accessible, has been transmuted into a luxury item available only to the privileged classes. Dark-sky tourism had been flourishing in the pre-pandemic world. For the whole of human history until the last 50 to 70 years, which amounts to the blink of an eye, all one had to do to glimpse the night sky was step outside in the evening. Now you may have to pay for the privilege. This is not unlike Ivan Illich’s argument in “Silence is a Commons.” “Just as the commons of space are vulnerable, and can be destroyed by the motorization of traffic,” Illich argued, “so the commons of speech are vulnerable, and can easily be destroyed by the encroachment of modern means of communication.” “Such a transformation of the environment from a commons to a productive resource,” Illich went on to insist, “constitutes the most fundamental form of environmental degradation.” And as with silence, so with darkness.
Illich understood that commons of this sort were “more subtle and more intimate to our being than either grassland or roads.” How, then, do we describe something so subtle and intimate?
“Two things,” Kant famously observed, “fill the mind with ever new and increasing admiration and awe, the more often and steadily we reflect upon them: the starry heavens above me and the moral law within me.” Is there a relationship between the two? Is there any sense in which we get not only our spatial bearings but also our psychological and emotional bearings from observing the beauty and rhythms of the star-filled sky? Are we bearing an unacknowledged burden of mental and physical exhaustion because the night no longer brings most of our labors to a close and bids us rest? Is there anything to be said for the inspiration the night sky has given to the human imagination?
We are doomed, it seems, to abide the loss of all that we cannot quantify. Absent shared ethical frameworks or normative accounts of human flourishing, modern societies tend to resort to quantification as an ostensibly neutral and value-free lingua franca suitable for the public sphere. Meanwhile, it becomes increasingly difficult to recognize and defend human goods that cannot be objectively measured. And should some effort be made to quantify them, they are likely to be reduced, impoverished, and exploited.
What do we lose when we lose the stars? What has it cost us to conquer the night?
Perhaps only the poet can say.
But, in this case, all of us may have some part to play. I encourage you to check out the International Dark Sky Association. There are, in fact, relatively simple things that can be done to improve the situation and some cities across the United States have already taken action.
Links and Resources
* “The new world atlas of artificial night sky brightness”
* World Atlas Night Sky Brightness
* Dark Site Finder
* “Missing the Night Sky”
News and Resources
* Evan Selinger reviews a recent book, Life After Privacy, which argues that we need to accept the fact that we live in a post-privacy world. Selinger patiently dissects the key claims in the book and argues that such despair is premature.
* “Taking Trust Seriously in Privacy Law” was among the several articles cited by Selinger in his review. From the abstract: “Instead of trying to protect us against bad things, privacy rules can also be used to create good things, like trust. In this paper, we argue that privacy can and should be thought of as enabling trust in our essential information relationships. This vision of privacy creates value for all parties to an information transaction and enables the kind of sustainable information relationships on which our digital economy must depend.”
* And here is Selinger’s entry, co-written with Brenda Leong, in the forthcoming The Oxford Handbook of Digital Ethics, “The Ethics of Facial Recognition Technology.”
* Jeremy Antley has an engaging piece on war games in Real Life: “As the goal of technoculture is no longer to control the future but rather to pre-empt it with the predictive power of simulation, its training games now involve the creation of “just in time” subjectivities — citizens capable of reconfiguring their training and worldview on the fly, allowing, for example, the near-seamless transformation of Xbox or Playstation devotees into drone pilots.”
* An interesting essay on the historical relationship between novel technologies and how we imagine the mind, focusing on the recent prevalence of predictive technologies: “Human beings aren’t pieces of technology, no matter how sophisticated. But by talking about ourselves as such, we acquiesce to the corporations and governments that decide to treat us this way. When the seers of predictive processing hail prediction as the brain’s defining achievement, they risk giving groundless credibility to the systems that automate that act – assigning the patina of intelligence to artificial predictors, no matter how crude or harmful or self-fulfilling their forecasts.” This essay reminds us that one of the more subtle, but not insignificant, ways technology shapes us is by supplying metaphors for human experience that condition how we think about ourselves.
* You may have already seen this, but, just in case, a delightful tour of North American accents.
* Short post from historian Lee Vinsel summarizing his research over the past few years: “Seven Theses on Technology and the US Economy.”
* Antonio Garcia-Martinez interviews Zeynep Tufekci. As you all know, I’m a fan of Tufekci’s work, especially as she has written about the pandemic throughout this past year. Lots of good insights here on social media, political culture, etc.
* This is a lovely online exhibit: “Data Visualization and the Modern Imagination.” The section on “Nature in Profile” features the work of the 19th-century naturalist Alexander von Humboldt. It just so happens that The New Atlantis recently published a terrific essay on von Humboldt: “A Scientist’s Mind, a Poet’s Soul: On the unified cosmic vision of Alexander von Humboldt, the nineteenth century’s great naturalist-adventurer.”
Re-framings
— Erazim Kohák, The Embers and the Stars: A Philosophical Inquiry into the Moral Sense of Nature. The first part of the book’s second chapter is titled “The Gift of the Night.” There’s a great deal therein that I could have excerpted. Here’s a small bit of it:
In the global city of our civilization we have banished the night and abolished the dusk. Here the merciless glare of electric lights extends the harshness of the day deep into the night restless with the hum of machinery and the eerie glow of neon. Unreflectingly, we think it a gain, and not without reason. We are creatures of daylight, locating ourselves in our world by sight more than by any other sense. We think of knowing as seeing. Light and darkness belong among our most primordial metaphors for good and evil … Ever since the dawn of history, humans have struggled to kindle a light against the darkness, making it, too, a place of works of charity and necessity …
Those lights are deeply good, as good as the labor of all who keep vigil by their glow. To think of them as a triumph over darkness, however, is far more problematic. We have thought in those terms for so long that night has come to appear alien and threatening, an enemy to be banished, no longer a place of our being. Yet half of our time on this earth is, perforce, lived in the night. Might we not do better to teach ourselves to think of the lights we make as a human way of dwelling at peace with the night?
[…]
[W]e are not only creatures of the light. We are creatures of the rhythm of day and night, and the night, too, is our dwelling place. Darkness enriches even our days. Pure light would blind us: our perception depends on discerning contrasts, the interplay of light and darkness. Without the rhythm of day and night, of going forth and resting, our lives would flatten out in unchanging monotony and our philosophy in an undifferentiated technē. It is good, deeply good, to kindle a light in the darkness, though not against it. There must also be night.
— In Fahrenheit 451, Ray Bradbury included the following bit of narration, based perhaps on his own experience. The narrator describes the protagonist’s apprehension of the light in the face of a young girl he has just met:
It was not the hysterical light of electricity but—what? But the strangely comfortable and rare and gently flattering light of the candle. One time, when he was a child, in a power-failure, his mother had found and lit a last candle and there had been a brief hour of rediscovery, of such illumination that space lost its vast dimensions and drew comfortably around them, and they, mother and son, alone, transformed, hoping that the power might not come on again too soon ....
The Conversation
After an extended hiatus, the Illich reading group will be resuming. This time around we are discussing David Cayley’s Rivers North of the Future, essentially an extended interview Illich gave Cayley late in his life. This was Cayley’s effort to get Illich to expand on his interpretation of modernity, which, despite Cayley’s urging, Illich never managed to turn into a book. Rivers North also gives us the best glimpse of Illich’s faith and how it shaped his work. The reading group is pretty much the only part of the newsletter that is reserved for paid subscribers, so if you want to join up, you know what to do. Additionally, Cayley will be joining me for a conversation about Illich, which I’ll be posting in early February.
You can also be on the lookout for a bit of an experiment in the near future: a discussion thread open to all readers. When I solicited some feedback at the end of last year, one recurring theme in the responses was the desire for a bit more of a community experience among readers. This will be an effort to push a bit in that direction. We’ll see how it goes.
Cheers,
Michael
Last Wednesday, I was working on the draft of a newsletter I had intended to send out later in the week. Of course, that was before my Twitter feed was taken over by the failed coup or insurrection or seditious mob, or whatever else one may call it, which stormed the Capitol to interrupt the certification of the electoral college votes and, as far as some participants were apparently concerned, hang the Vice President of the United States.
I’m fully aware of the insignificance of my own judgments on these matters, but let me nonetheless make clear at the outset that, however dispassionate the following discussion may seem, I consider the actions of the mob, its enablers, and its apologists reprehensible and seditious. Moreover, I regret to add, the proceedings were, in my view, merely one particularly dramatic symptom of a grave, possibly fatal condition, which will not magically resolve itself come January 21st.
It’s hard to know where to begin, of course; the situation has many interlocking layers. The most notable and disturbing elements have been well covered, and we continue to learn more about the event each day. The picture, it seems, only grows darker. For my part, I’ve been especially interested in thinking through the role of digital media in these events and what it portends for the future. Here, then, are a few reflections for your consideration along those lines.
In light of the complexity and gravity of the situation, which transpired just days ago (although it may already seem like weeks), I feel obliged to stress that this is a tentative exercise in thinking out loud. I’ll begin with a few comments about the labels and categories we use to think about digital media before turning to a more direct analysis of last week’s events and what they reveal about our media environment. More than is usually the case, the following discussion lacks a tight, well-ordered structure, so I’ve supplied the numbering to provide some sense of how my thoughts were grouped together. Think of what follows not so much as an argument but as a series of interlocking perspectives on the same phenomenon.
(1) Reading through the torrent of commentary on the assault on the Capitol has left me with the sense that we’ve still not quite figured out how exactly to talk about the relationship between digital media and human experience.
Much of the discussion has centered on the moderation policies of social media companies, particularly given their role in the organization of the assault on the Capitol. Some have commented on the role of social media during the assault. And others have sought to examine whether digital media played a more fundamental role in these events and, by extension, our cascading national omni-crisis. Each of these deserves our attention, as does much else, of course. For my part, I’ve been thinking about related matters for some time now under the assumption that digital media—like writing, printing, and electronic media before it—occasions profound social and political change. This is not, in my view, a techno-determinist position. I fully acknowledge that new technologies interact unpredictably with existing values, institutions, and social structures. Moreover, all along the way, people make choices, although perhaps increasingly constrained and conditioned by the new media infrastructure once it has become entrenched. But I remain convinced that media ought to be understood ecologically rather than additively. When a new species is introduced into a natural ecosystem, you don’t just get the old ecosystem plus a new species. You may very well end up with an entirely new ecosystem or even a collapsed ecosystem. Thus, when digital media restructures human communication in roughly 25 years’ time (dating roughly from the early years of the commercial internet), we should expect significant social and political change. The challenge is to make sense of it midstream, as we still are.
(2) As events unfolded and also in their immediate aftermath, it seemed as if the reality of what was happening was difficult to ascertain. What I have in mind is not simply a case of what we tend to mean when we say something like “I still can’t believe x happened,” which almost always communicates the opposite of the literal sense. Rather, it seems to me that we were confronted with a rather more nebulous sense of unreality, one grounded in a similar inability to clearly parse the relationship between digitization and our experience of the world, which is in turn related to the unfathomable proliferation of digitally mediated reality. We are, of course, well along an established trajectory dating back decades, which has been alternatively theorized, for example, as involving the rise of pseudo-events, spectacle, or hyperreality.
(3) It’s not uncommon to hear someone claim that “Twitter is not real life.” The phrase is generally meant to convey the position that only a relatively small percentage of the population are active Twitter users, thus Twitter is not really representative of reality. Consequently, those whose understanding of reality is shaped largely by their time on Twitter are not really perceiving real life or at least don’t have a good grasp of what really matters in real life.
There is, mind you, a measure of truth to this, but claiming that Twitter is not real life tends to obscure more than it illuminates. What it obscures are the porous boundaries between Twitter and non-Twitter, a fact which has for the past four years been driven home to us on a nearly daily basis. Better to say, for example, that Twitter mediates reality, as does Facebook, Instagram, CNN, your local NPR station, a textbook, a smartphone camera, and your native language. It is not a matter of real life in opposition to mediated reality. The challenge is simply to understand the nature and consequences of the various mediations that together shape our understanding of the world we all share.
While I think it is ultimately unhelpful to speak about digitization generating unreal phenomena or to think of it as a set of activities that are somehow sealed off from the so-called “real” world, it is nonetheless revealing that we reach for this language. It suggests both a lack of trenchant categories with which to describe digitization and, consequently, our inability to fully integrate the consequences of digitization into our thinking about the world.
(4) Speaking of the digital sphere as a place or even a space is part of the problem. Digital tools do not generate places in the ordinary sense of the word; they mediate relationships, in part precisely by disassociating the self from place. It seems to me that if you think of “online” as a place, it is easier to imagine that this place is somehow detached from the so-called real world—you leave here and go there. However, if you think instead of digitized relations, then that temptation seems to lose its plausibility. The key is to understand the nature of these relations.
Let’s speak, then, of digital tools, digital media, and digitized relations. The three are clearly interdependent—you don’t get digitized relations without either digital tools or their products—but I think it will be useful to keep these distinctions in mind. When it is helpful to think of the three together as a package, I will simply refer to the digital or digitization for brevity’s sake. (Much in the same way that we speak of electrification or industrialization.)
(5) It is certainly true that the total relevant media environment includes network television, cable news, and talk radio. Many have rightly pointed out that, for example, our present situation cannot entirely be blamed on Facebook, Twitter, or any other digital media platform when television and radio also command such large and devoted audiences. This is true as far as it goes, but it doesn’t account for the degree to which digital is now the master medium, in the sense of being the technical infrastructure for other media (digital tools), supplying content for other media (digital media), and forming the larger environment within which other media operate (digitized relations).
(6) Let’s get back to the mob at the Capitol as our discrete case in point. Here again, I’ll begin with distinctions.
The event is complex, not simple. It has many causes, dimensions, and consequences. If we are tempted to reduce it to one thing and search for one cause, it is because we always find it easier to think in those terms. Moreover, there are multiple, overlapping angles of analysis to consider when we set out to think through the significance of what happened. And it may be that thinking through the relationship of Digitization to this event may not be the most important consideration. As Adam Elkus, for example, has been insisting, this particular event was shaped by elite calculations about hard power during extraordinary circumstances; the attention placed on the internet and the sub-cultures it hosts is, in his view, at least somewhat misguided. I don’t disagree, although I believe it’s worth exploring the consequences of Digitization and its relation to these events. I think we’ll find a great deal of consequence unfolding on this terrain as well.
(7) We will fail to grasp the nature of our situation unless we understand that we inhabit a world composed of two distinct but intermingling configurations of social relations, digitized and analog (for lack of a better way of putting the latter). It is a mistake to either collapse these two configurations into each other as if they were identical or to assume that the two are hermetically sealed off from one another. When this happens, the analysis either attributes too much or too little to digitization. The key, it seems to me, is to recognize the distinctiveness of the relations constituted by digitization and how these relations interact with pre-digital institutions and social arrangements. Consider, for example, the vector of time. Digitization generates temporal pressures that place acute stress on institutions which operate at pre-digital temporal settings. One doesn’t even need to pass a value judgment on which may be the “better” in order to realize that significant problems arise when these two orders of social relations are entwined.
(8) I remain relatively convinced that if we think of a culture as a materially and symbolically mediated set of human relations with a distinct, relatively coherent set of beliefs and values, then it is perfectly legitimate to speak of a proliferation of cultures resulting from the digital mediation of human relations. If modernity was characterized by mutually reinforcing trends toward pluralism and homogenization, trends which loosened the grip, so to speak, of distinct and independent local cultures, then Digitization has nourished the revival of micro-cultures. Unlike older, traditional cultures, however, these are not to be found in place but rather in the symbolic order of relations sustained by Digitization, and their members spill out into a common world, one which receives members from competing digital cultures holding radically different views of reality.
Of course, this may seem like a banal observation when we have been hearing about internet sub-cultures for years and years. However, I’d say the term has been generally understood as culture in the weak sense. In the old order of things, deep cultural differences could be sapped of their power, chiefly by the commodifying forces of the market economy, the often unstated assumptions of liberal democracy, and mass media. Culture was often reduced to a matter of cuisine, dress, and music rather than one of divergent and often competing orientations toward the world. Digitally mediated sub-cultures around particular games or films, for example, have been understood in this sense. I’m suggesting that digitally mediated sub-cultures can become cultures in the strong sense, generating distinctive perspectives on truth, morals, and norms.
And, again, I think this becomes clear precisely when we cease thinking about the internet as a space (virtual reality, cyberspace, “go online,” etc.) and begin to think of it as alternatively mediated relations. Early in the history of digitization, the internet was called the information superhighway. But the internet was never simply a conduit of information. It did not merely transmit information, it connected people in ways they could not be connected otherwise. It materialized and supercharged the dynamics of cultural formation: symbolic exchange, social networks, and the mechanisms of shame and approbation. And it did so, while simultaneously diminishing the significance of place, which has historically been the most formative vector of cultural formation, and also undermining the authority of older culturally formative institutions.
But, there’s more to this. The fact that this cultural formation happens in the context of digitized relations also means that participants can more readily be locked into alternative realities rather than simply alternative moral orders. Precisely because the formation is happening in the absence of a “common world of things,” a “common sense” fails to emerge. I’m thinking here of the way that Hannah Arendt defined these terms. “The presence of others who see what we see and hear what we hear assures us of the reality of the world and ourselves,” Arendt wrote in The Human Condition. She went on to claim that “while the intimacy of a fully developed private life, such as had never been known before the rise of the modern age […], will always greatly intensify and enrich the whole scale of subjective emotions and private feelings, this intensification will always come to pass at the expense of the assurance of the reality of the world and men.”
I’ve written recently about Arendt’s understanding of a common world and a common sense with a view to our digitized age, so I won’t belabor the point here. Suffice it to say that under the conditions of digitization, it becomes increasingly difficult to arrive at a common world and a common sense, and much less any general agreement about what shape that world should take.
Digitized relations create conditions that can be described as disembodied environments of symbolic exchange structured by carefully calibrated architectures of reward and affirmation, environments which lend themselves to rapid cultivation of alternative understandings of public phenomena. Do such alternative understandings always yield violent insurrections? No, obviously not. Are such possibilities always latent? Yes.
It’s worth noting, even if just in passing, the underlying loneliness and indeterminacy of identity that render someone susceptible to the temptations of a selfhood dialectically optimized in tandem with an inscrutable algorithm.
(9) Consider the debate about whether this was a serious coup attempt or whether it was a farce and participants were there mostly for social media points. The answer is simply “yes.”
You’ve likely heard some commentary, especially early on and in light of the visually dominating presence of the QAnon shaman, dismissing the mob as nothing more than LARPers (Live Action Role-Players) who got a bit out of hand. But this view clearly misses an important point. The reference to LARPing in this context suggests either games like Dungeons and Dragons or, more accurately, in-person meet-ups of would-be wizards and knights. Obviously, this is a misguided caricature, but it’s what the rhetoric is meant to suggest: people who should not be taken seriously because they are playing silly games.
Unfortunately, this strikes me rather as a slander of LARPers, who, as far as I can tell, retain a decidedly firm grip on the borders of the game world and often deploy elaborate rites and practices to secure a high degree of self-awareness about the distinction between fantasy and reality.
Digitization affords no such distinction, as I’ve already argued, even though it is clear that since the early days of the internet, many participants believed that it did. An illusion, I suspect, generated in part by the disembodied and anonymous nature of the early internet forums.
Early internet theorizing and some present-day nostalgia celebrate this stage of the internet as a golden age, wherein individuals could freely play with different identities. The reality, of course, was more complicated. Such role-playing could liberate aspects of the self that were unjustly suppressed by existing prejudices, but it could just as easily liberate aspects of the self that were justly suppressed by legitimate and salutary moral and ethical standards. Getting to role-play white supremacy, for example, can hardly be conceived of as a commendable experience of liberation.
The critical point, however, is that there is no line between political role-playing online and the so-called real world. When there is no clear line between the stage and the world, you cannot go on playing a role or acting a part without assuming the risk that you will in fact be transformed by the performance. Moreover, there being no line between digitized relations and analog relations, the perceived immateriality of the digital spectacle can be seen to invite actions that might otherwise have struck the same person as ludicrous or ill-advised.
Perhaps best remembered for having coined the phrase “the global village,” Marshall McLuhan later came to prefer “the global theater.” In a 1977 interview, McLuhan was asked about his view of technology as a revolutionizing agent. “Yes,” McLuhan responds, “it creates new situations to which people have very little time to adjust. They become alienated from themselves very quickly, and then they seek all sorts of bizarre outlets to establish some sort of identity by put-ons. Show business has become one way of establishing identity by just put-ons, and without the put-on you’re a nobody. And so people are learning show business as an ordinary daily way of survival. It’s called role-playing.”
(10) One of the most widely circulated responses to the events on January 6th has been an essay in the New York Times by historian Timothy Snyder, who has written widely on authoritarianism in the 20th century. It’s a long piece and I won’t pretend to summarize its contents here. I do, however, want to engage one particular section of Snyder’s argument. As we move into the heart of Snyder’s analysis, he writes the following:
Post-truth is pre-fascism, and Trump has been our post-truth president. When we give up on truth, we concede power to those with the wealth and charisma to create spectacle in its place. Without agreement about some basic facts, citizens cannot form the civil society that would allow them to defend themselves. If we lose the institutions that produce facts that are pertinent to us, then we tend to wallow in attractive abstractions and fictions. Truth defends itself particularly poorly when there is not very much of it around, and the era of Trump — like the era of Vladimir Putin in Russia — is one of the decline of local news. Social media is no substitute: It supercharges the mental habits by which we seek emotional stimulation and comfort, which means losing the distinction between what feels true and what actually is true.
Snyder pays some attention to the role of digital media, although he is narrowly focused on social media platforms. But those comments, most of which are accurate and point us toward a better understanding of the role of digital media, do not quite get us there. And, in this paragraph I’ve cited, which has been widely quoted online and can be justly called the crux of the essay, we see the consequences of an inadequate account of digitization.
To see the problem, ask yourself this question: Who exactly is the “we” who has given up on truth?
It would seem that the problem is rather a proliferation of “truths,” stridently and even desperately believed. Am I prepared to say that some of these “truths” are, in fact, lies and falsehoods? Yes, of course. But that’s immaterial. They are believed. Furthermore, politics has always been understood to be the realm of lies and falsehoods, noble, Big, or otherwise. It would seem that we are dealing with something more than conventional lies in this sense.
Additionally, we did not “lose” the institutions that produce facts; rather, these institutions have lost their authority among large segments of the public across the political spectrum.
It must be understood that these two developments proceed in tandem, such that we can describe our situation not as post-truth but as post-trust. Although even that is not quite right: it is not that we are post-trust so much as we are beyond the age of institutions that commanded widespread trust. Trust has always played a role in the establishment of public knowledge. None of us have independently reasoned ourselves to every single belief we hold about public realities, nor could we even if we so desired. We take on authority a great deal more than we realize. So, when the old institutions we trusted for a base of common knowledge and understanding have (deservedly or otherwise) lost their standing, public knowledge splinters accordingly.
(11) Last summer I argued that, in the context of information superabundance, the Database now precedes the Narrative. Digitization has made possible the dissemination and storage of information at unprecedented scale and speed. To the degree that your view of the world is mediated by digitized information, to that same degree your encounter with the world will be more like an encounter with a Database of unfathomable size than with a coherent narrative of what has happened. The freedom, if we wish to call it that, of confronting the world in this way also implies the possibility that any two people will make their way through the Database along wildly divergent paths.
Consider the events of January 6th. Long before any kind of credible and authoritative narrative had been established, most of us had already encountered a multitude of data related to the event: videos, images, audio, and endless commentary from people directly involved or observing from a distance. In this context, no one controls the narrative. In this context, speculation runs rampant. In this context, people will form impressions that, while false, will never be corrected.
Existing digital cultures will connect the disparate entries in the emerging Database to form narratives that line up with their existing understanding of reality. It is possible to run through the Database in countless plausible ways. A consensus narrative will almost certainly not emerge.
It is impossible to overstate the speed with which any phenomenon, however slight it may seem, gets entered, irrevocably, into the Database for the generation of narratives, which is to say for the curation of competing realities. The “realtime” nature of this dynamic is critical, as is the easily manipulable nature of digital media.
And consider that this is not merely a function of willful ignorance or intentional deceit. Under these conditions, it is entirely possible for serious, educated people to arrive at disparate understandings of reality. The grifters and manipulators don’t help, mind you, but I think it’s too facile, and falsely comforting, to say that they alone are the source of the problem.
It’s notable, too, that this fragmentation of perspectives happens at a foundational level. Which is to say that it’s not just that there is widespread disagreement about how to interpret the meaning of an event. It is also that there is widespread disagreement about the basic facts of the event in question. It is one thing to argue the meaning of the moon landing for human affairs; it is another to incessantly debate whether the moon landing happened. Which is why I have argued in the past that we are all conspiracy theorizers now. We are all in the position of holding beliefs, however sure we may be of them, that a sizable portion of the population considers not just mistaken but preposterous and paranoid.
I should, of course, acknowledge another dimension to this reality. The conspiracy theorist is ordinarily imagined as a lone, troubled individual convinced of things hardly anyone else believes. Under the conditions of digitized relations, this is no longer the case. We can readily find others who share our view of the world, which is to say that they have run through the Database and discovered the same patterns. This naturally reinforces what might otherwise have been a tenuously held belief or suspicion. We are not alone; there are others. So rather than saying that we are all conspiracy theorizers now, I should say that we are all cult members now.
The trouble, of course, is that while we might inhabit very different perceived realities, we live in the same world. In the case of the United States, the election and the pandemic are both troubling examples of what can happen under these circumstances. In both cases, it might be said that the world as it is catches up with our mediated realities. You can spin alternative political realities only so long before blood is shed. You can argue the true nature of a virus only so long before the deaths pile up. But this is little comfort and hardly points us forward.
Seen in this light, then, the spectacle, as Snyder puts it, or the Database, as I have called it, precedes the epistemic fragmentation. Is it the case that digital sophists with wealth and charisma will do their best to manipulate this state of affairs to serve their own ends? Yes, of course. But I think it is important to consider that even apart from such actors, we are in a bad place. The epistemic habits are becoming ingrained and the Database only grows.
(12) Thomas Kuhn famously argued that scientific revolutions happened when a reigning scientific paradigm could no longer account for proliferating anomalies. For some, the encounter with the Database may be described as an incessant assault of anomalies, perpetually deferring the establishment of a paradigm or compelling narrative. The result, I suspect, is generalized suspicion and reluctant indifference bordering on apathy, if not finally cynicism.
Digitized relations, then, allow for the possibility of getting locked into an alternative reality. But they also create the possibility of descending into a state of permanent skepticism about public knowledge. Neither is conducive to a healthy public sphere.
Thus, as William Butler Yeats put it in “The Second Coming,” a poem published, interestingly enough, in November 1920: “The best lack all conviction, while the worst / Are full of passionate intensity.”
(13) While the events of January 6th were the result of a confluence of factors that were in large measure independent of digitization, the event itself could not have happened apart from the general context created by digitization. Consequently, I think it best to view the event, which cannot be said to be complete even as I write this, as an early, if also particularly outrageous and violent, actualization of a pattern of events that will become increasingly common barring any large scale and unforeseeable changes to our digital media environment.
“Existence in a society that has become a system finds the senses useless precisely because of the very instruments designed for their extension. One is prevented from touching and embracing reality. Further, one is programmed for interactive communication, one's whole being is sucked into the system. It is this radical subversion of sensation that humiliates and then replaces perception.”— Ivan Illich, “To Honor Jacques Ellul” (1993)
Consider the following words spoken by a character in Nathaniel Hawthorne’s 1851 novel, The House of the Seven Gables:
“Then there is electricity, the demon, the angel, the mighty physical power, the all-pervading intelligence! … Is it a fact — or have I dreamt it — that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time? Rather, the round globe is a vast head, a brain, instinct with intelligence! Or, shall we say, it is itself a thought, nothing but a thought, and no longer the substance which we deemed it!”
A bit overwrought, perhaps, but it expresses something of the wonder, dread, and exhilaration which attended the growing understanding of electricity in the mid-19th century. Roughly the same historical context had already yielded Mary Shelley’s Frankenstein with its imaginative debt to galvanism, the late eighteenth century fascination with the relationship between electricity and biological life.
Hawthorne’s language in this passage calls to mind the way, a century later, Marshall McLuhan would talk about electronic forms of communication. “With the arrival of electric technology,” McLuhan wrote in Understanding Media, “man extended, or set outside himself, a live model of the central nervous system itself.”
I would guess that this is the kind of claim that makes McLuhan seem a bit too esoteric or even a little bizarre to some readers. But if we sit with it for just a moment, I think we’ll find it both fairly straightforward and also illuminating (no pun intended). The relationship between networks of electronic communication and the nervous system, which also communicates via electrical impulses, seems obvious enough, of course. McLuhan is suggesting that the networks of electronic communication extend the functions of the biological nervous system beyond the physical limits of the body.
So let’s take a look at a handful of places where McLuhan leans on the concept of electronic communication as an extension of the nervous system and see where this might lead us. Bear in mind that McLuhan is writing in the 1960s, and these observations predate the advent of the internet as we know it. McLuhan has radio and television chiefly in mind, with the telegraph as a distant predecessor. He is, however, already thinking about how the computer will affect these networks of electronic media.
So here is McLuhan explaining the analogy a bit further: “It is a principal aspect of the electric age that it establishes a global network that has much of the character of our central nervous system. Our central nervous system is not merely an electric network, but it constitutes a single unified field of experience.”
In other words, global networks of electronic media augment our field of experience. Whereas the field of experience constituted by our biological nervous system was anchored to the body’s location in space and time, electronic media as an extension of the nervous system generates a field of experience that is potentially global in scope.
Emphasizing the speed of electronic networks of communication, McLuhan noted that “when information moves at the speed of signals in the central nervous system, man is confronted with the obsolescence of all earlier forms of acceleration, such as road and rail. What emerges is a total field of inclusive awareness. The old patterns of psychic and social adjustment become irrelevant.”
The notion of a field of inclusive awareness (or unified field of experience) is more or less the same dynamic geographer Yi-Fu Tuan noted when he made the following observation: “In the past, news that reached me from afar was old news. Now, with instantaneous transmission, all news is contemporary. I live in the present, surrounded by present time, whereas not so long ago, the present where I am was an island surrounded by the pasts that deepened with distance.”
Contrary to how he is sometimes (mis)read, McLuhan was not exactly sanguine about this state of affairs. “To put one's nerves outside, and one's physical organs inside the nervous system, or the brain,” McLuhan argued, “is to initiate a situation ... of dread.” But McLuhan was deeply interested in understanding rather than deriding the psychic consequences of these transformations occasioned by new technologies.
Note how he tells us in the paragraph I cited above that the “old patterns of psychic and social adjustment become irrelevant.”
There are two important parts to this claim: first, the distinction between psychic and social, and, second, the idea that patterns of adjustment have become irrelevant. Regarding the latter, I take him to mean that whatever means of organizing and coping with information, stimuli, perceptions, etc. we deployed in the old age of pre-electronic media are no longer up to the task.
Regarding the former, the distinction is a common one. We are accustomed to distinguishing between the individual and society. But I think McLuhan is also implying that we can speak of a social or collective consciousness in the same way that we might speak of a person’s consciousness. And that, along similar lines, we can speak about disorders of the corporate psyche in the same way that we might speak about disorders of the individual psyche.
I find it helpful to think along these lines by taking memory as a case in point. Clearly, we have our personal memories, and, certainly, even these memories have a social dimension. When we gather with old friends or family members, we might mutually spur each other to recollections of shared events that no one member of the group would have arrived at on their own. But I think it is meaningful to also speak about how societies remember (to borrow the title of Paul Connerton’s book on the theme).
A society’s memory is not merely the sum total of all the memories of the individuals that make up that society. In fact, in certain respects, it may be said to exist independently of individuals. It is true that such memories are not always subjectively realized in human consciousness, but they do not, for that reason, cease to affect the social body. We might even speak of such memories as suppressed or repressed, and, in this way, also potentially traumatic. Naturally, we do not look for these memories in structures of individual consciousness, but rather in the material structures of society: the layout of its cities, its architecture, its allocation of resources, its place names, etc.
So, for example, the layout of a city may continue to reflect decisions made decades earlier with overtly racist intent. Whether or not any individuals consciously remember such decisions, the material substrate carries the memory, as it were, just as the body often carries memories the mind has forgotten. And such memories continue to work themselves out in the life of the city, whether a critical mass of the city’s populace becomes consciously aware of them or not. In this way, we might even speak of them as traumatic memories. Only when they are brought to conscious awareness is there any hope of escaping their disordering consequences. But awareness is, I think, a double-edged sword. At least, we might say that awareness can in its own way become paralyzing for both individuals and society.
So, if we allow for the usefulness of the concept of collective consciousness, we can entertain the idea that the consequences of the internet, for example, are felt not only privately but also collectively. This may seem like a banal observation, and maybe it is. Of course the internet has collective and social consequences; it seems that we have been doing little else than talking about such consequences for the last several years. But I mean to especially emphasize the consequences felt at the level of the social psyche. As I put it a few months ago, I don't think we take seriously enough the idea that the internet functions as a kind of collective unconscious which is generating a form of collective madness.
Perhaps madness is not the best word, although, I don’t know, it seems like a credible case can be made on certain days. But let’s return to a few more of the comments McLuhan made before working our way to some kind of conclusion.
“In the electric age,” McLuhan observed, “when our central nervous system is technologically extended to the whole of mankind and to incorporate the whole of mankind in us, we necessarily participate, in depth, in the consequences of our every action.” “It is no longer possible,” he added, “to adopt the aloof and dissociated role of the literate Westerner.”
Both the idea of participation in depth and that of a novel interest in the consequences of our action as a result of electronic media are recurring themes in McLuhan’s work. I think they make more sense when we try to imagine what instantaneous exposure to national and world events would have seemed like to people who had ordinarily only encountered such events through write-ups in newspapers. (McLuhan, I should note, was born in 1911 in Edmonton, Alberta.) It’s easy to forget the wonder of seeing events transpiring live from across the globe when most of us have only known a world in which this was a banal experience. In other words, I don’t think we are well-positioned to comprehend what it would have been like to suddenly feel as if the world were collapsing in on you because electronic media had now dramatically extended the reach of your perceptive apparatus.
Of course, if you happen to be around 40 years of age, give or take a few years, you might be rather well positioned to comprehend how digital media built upon and augmented these developments, chiefly by allowing us to carry our extended nervous system around with us at all times. But there are important differences, of course. The age of pre-digital electronic media was also the age of mass, non-participatory media. Whereas electronic media in the pre-digital age generated a more or less passive experience of rather uniform streams of information, digital tools have wildly diversified our feeds and enabled us to generate a meta-level of self-aware discourse to overlay the field of unified experience that electronic media generated. Digital media has also generated massive and readily accessible databases of memory, which feed back into the layer of self-aware discourse. Indeed, in a metaphorical sense, we might say that if electronic media constituted an extension of the nervous system, digital media has extended memory and speech. Together, these three have generated a simulacrum of collective consciousness. Seen in this light, we have become mad indeed, talking endlessly to ourselves and increasingly trapped within our own words, unable to rightly perceive the world around us, much less act effectively in it.
Consider, too, McLuhan’s claim that it is no longer possible to adopt an aloof or dissociated stance toward our experience. McLuhan seems to have had in mind the way that electronic media involve one affectively in the events they transmit. We might, for example, note, as McLuhan did, how television coverage of the war in Vietnam transformed the American experience of war. American society felt the war in a different sense than it had any previous war. But digital media does more to erode the ideals of disinterestedness, objectivity, and neutrality. It diversified the mass media feed that had previously generated a false sense of national consensus. Its expansive and searchable archives throw light on every inconsistency and all hypocrisy that sustained the myth of disinterested neutrality on the part of experts, leaders, and institutions. It was observed frequently in the late 20th century that television especially made the aura of detached, formal dignity attending public figures, and the respect it ostensibly commanded, implausible. It did so by collapsing the distance between public figures and the masses, which was the prerequisite of such an aura. It rewarded more approachable, visually charismatic, and informal presentations of the self. Digital media has had an even more profound effect, which will become all the more evident once the light of the electric age is altogether extinguished.
McLuhan, I’ll note in passing, also understood the dynamics of the so-called attention economy long before the term was coined. “Once we have surrendered our senses and nervous systems,” he warned, “to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don't really have any rights left.” “Leasing our eyes and ears and nerves to commercial interests,” he added, “is like handing over the common speech to a private corporation [!], or like giving the earth's atmosphere to a company as a monopoly.” In other words, we do not own our extended nervous system, nor our external memories or our augmented voices. And this only heightens the disorders of our collective consciousness. It generates a kind of paranoia about what we perceive, which in the meta-discourse of our collective mind takes the form of endless debates about tech platforms, free speech, deep fakes, filter bubbles, etc.
McLuhan also argued that the act of extending one of our capabilities is simultaneously an act of auto-amputation. “With the arrival of electric technology,” McLuhan wrote, “man extended, or set outside himself, a live model of the central nervous system itself.” “To the degree that this is so,” he continued, “it is a development that suggests a desperate and suicidal autoamputation, as if the central nervous system could no longer depend on the physical organs to be protective buffers against the slings and arrows of outrageous mechanism.”
Further on, he claimed that “the principle of self-amputation as an immediate relief of strain on the central nervous system applies very readily to the origin of the media of communication from speech to computer.” Each new technology seeks to relieve the stresses induced by the earlier medium, but then serves only to create a more desperate situation requiring a further extension and subsequent acceleration. This dynamic seems self-evident at this point. Just make note of all the ways we turn to new technologies and techniques in order to compensate for the consequences of existing technologies. McLuhan is here anticipating, in rather more esoteric terms, a crucial element of Hartmut Rosa’s more recent theory of social acceleration, in which he describes a feedback loop whereby technical acceleration, achieved by both new techniques and new technologies, leads to the acceleration of social change, and the acceleration of the pace of life, which then calls for further technical acceleration.
Interestingly, McLuhan also claims that “self-amputation forbids self-recognition.” But I’m not sure this is quite right, at least not any longer. As McLuhan himself explained, we are, as of the mid-twentieth century at least, increasingly aware of the consequences of new technology. “Today it is the instant speed of electric information,” he observed, “that, for the first time, permits easy recognition of the patterns and the formal contours of change and development. The entire world, past and present, now reveals itself to us like a growing plant in an enormously accelerated movie.” And so, we may perhaps be able to recognize the self-amputation of our perceptive apparatus.
This possibility recalled, not surprisingly, something that Ivan Illich wrote in a talk honoring Jacques Ellul, in which Illich noted the degree to which we had taken leave of our senses: not our wits, mind you, but our more literal senses of sight, touch, smell, hearing, and taste.
Notice the McLuhanesque phrasing when he notes that “existence in a society that has become a system finds the senses useless precisely because of the very instruments designed for their extension. One is prevented from touching and embracing reality. Further, one is programmed for interactive communication, one's whole being is sucked into the system. It is this radical subversion of sensation that humiliates and then replaces perception.”
But here was the cure, as far as Illich could see it:
“It appears to me that we cannot neglect the disciplined recovery, an asceticism, of a sensual praxis in a society of technogenic mirages. This reclaiming of the senses, this promptitude to obey experience, the chaste look that the Rule of St. Benedict opposes to the cupiditas oculorum (lust of the eyes), seems to me to be the fundamental condition for renouncing that technique which sets up a definitive obstacle to friendship.”
News and Resources
* Farmville is shutting down after a ten-year run. This piece, leaning on Ian Bogost, looks at how the game set the pace for much of the internet in the last decade. “The game encouraged people to draw in friends as resources to both themselves and the service they were using, Mr. Bogost said. It gamified attention and encouraged interaction loops in a way that is now being imitated by everything from Instagram to QAnon, he said. ‘The internet itself is this bazaar of obsessive worlds where the goal is to bring you back to it in order to do the thing it offers, in order to get your attention and serve ads against it or otherwise derive value from that activity,’ he said.”
* 30 “Bug” drones delivered to the British Army.
* “Limits to prediction: pre-read.” For a course taught this fall by Arvind Narayanan and Matt Salganik (made available by Narayanan): “Is everything predictable given enough data and powerful algorithms? Researchers and companies have made many optimistic claims about the ability to predict phenomena ranging from crimes to earthquakes using data-driven, statistical methods. These claims are widely believed by the public and policy makers. However, even a cursory review of the literature reveals that state-of-the-art predictive accuracies fall well short of expectations.”
* “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI” (January 2020): “The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these ‘AI principles,’ there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends. “To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.”
* Stanford Medical Center came under fire this month for not having prioritized its front-line workers for vaccine distribution. Clearly, the whole matter has been highly contentious, and my aim in linking this story is not to stake out a moral position on the question of vaccine distribution. Rather, it is to highlight how hospital administrators took refuge in “the algorithm made me do it” rationalization. When you outsource human judgment to an algorithmic process, you also outsource (or at least distribute) responsibility, which will be all too convenient to some.
* “Singularity Vs. Daoist Robots”: An interview with Yuk Hui, a Chinese philosopher of technology, which serves as a helpful introduction to his work. One goal for 2021 is to become better acquainted with Hui’s work, which was first brought to my attention, if I remember correctly, by Carl Mitcham. More recently, Adam Elkus recommended his work and this interview as a good starting point. I believe that Hui also plays an important role in Alan Jacobs’s latest essay for The New Atlantis, which is still behind a paywall.
* “The mechanical monster and discourses of fear and fascination in the early history of the computer”: “This discourse established a clear dichotomy of fear and fascination: fears of a loss of autonomy and usurpation of labour, and fascination with a machine that possessed unlimited possibilities. What made the computer truly a mechanical monster was its hybridity, which was perhaps more present in representations of the computer than in the mechanics of the technology itself. It represented both the technological sublime and an apocalyptic dystopia at the same time. Through the combination of existential threat, categorical impurity, and exotic fascination, the computer emerged as a contemporary image of the mechanical monster.”
* “A Forgotten Pinhole Camera Made from a Beer Can Captures the Longest Exposure Photograph Ever”
* Marshall McLuhan: “Anything I talk about is almost certainly to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.”
* This recent release from Theodore Porter looks interesting: The Rise of Statistical Thinking, 1820–1900.
Re-framings
— Wendell Berry, almost certainly channelling Ivan Illich, in “Health is Membership” (1994):
People seriously interested in health will finally have to question our society's long-standing goals of convenience and effortlessness. What is the point of ‘labor saving’ if by making work effortless we make it poor, and if by doing poor work we weaken our bodies and lose conviviality and health?
We are now pretty clearly involved in a crisis of health, one of the wonders of which is its immense profitability both to those who cause it and to those who propose to cure it. That the illness may prove incurable, except by catastrophe, is suggested by our economic dependence on it. Think, for example, of how readily our solutions become problems and our cures pollutants. To cure one disease, we need another. The causes, of course, are numerous and complicated, but all of them, I think, can be traced back to the old idea that our bodies are not very important except when they give us pleasure (usually, now, to somebody's profit) or when they hurt (now, almost invariably, to somebody's profit).
— From Wendell Berry’s 2012 collection of Sabbath poems:
X
In memory: Ivan Illich
The creek flows full over the rocks
after lightning, thunder, and heavy rain.
Its constant old song rises under
the still unblemished green, new leaves
of old sycamores that have so far
withstood the hardest flows. And this
is the flux, the thrust, the slow song
of the great making, the world never
at rest, still being made
of the ever less and less that we,
for the time being, make of it.
The Conversation
I have one more newsletter project that I am working on right now, and that is to collect the essays I’ve written throughout the year into one document, which I’ll make available in a few days. I’m doing this mostly for my sake, but perhaps some of you might want to download the document, too. So you can look for that next week.
Other than that, you can look for the newsletter to continue in the new year with another installment out by the middle of January. In the new year, I hope to also get back to my conversations with Ivan Illich’s friends and colleagues, his co-conspirators. The online Illich reading group will also resume with David Cayley’s Rivers North of the Future, which is essentially the transcript of a long interview Cayley conducted with Illich in the late 1990s. (The reading group is pretty much the only paid subscriber-only content.)

Some of you will be coming to the end of your year’s subscription in the next couple of weeks. Through the time warp that was 2020, it seems like the launch of the newsletter was both just yesterday and a decade ago. In any case, I hope you’ve found The Convivial Society valuable enough to continue your support. (Thanks to you, the newsletter is apparently holding down the 25th spot under Substack’s Culture tag.)
Some of you have written to tell me that you would appreciate it if the newsletter had a more robust community-building dimension to it. At least that’s how I would summarize the comments. I have continued to think about this, and I hope to find some ways to make that possible. In truth, I suspect that the chief requirement would be more time on my part in the work of moderating and engagement, and that’s going to be a bit of a challenge. Nonetheless, I think we can make some things happen on this front, so stay tuned.
Happy new year, to all for whom that applies. I wish I could say that I was optimistic about the coming year, especially given all that 2020 has been. But, alas, I can’t quite. Nonetheless, may we find the right measure of courage, resilience, humility, generosity, and even gratitude to face whatever the new year will bring. And may we work, in whatever way we are able, for the health of our communities, remembering, as Wendell Berry has put it, that “a community in the fullest sense—a place and all its creatures—is the smallest unit of health and that to speak of the health of an isolated individual is a contradiction in terms.”
Cheers,
Michael
“All technical progress exacts a price. We cannot believe that Technique brings us nothing; but we must not think that what it brings it brings free of charge.”— Jacques Ellul, “The Technological Order” (1962)
In case you’ve ever wondered, I don’t exactly have a grand research project or even a clearly defined area of interest. You might, in fact, should I ever have occasion to bore you with the details, be surprised to learn just how haphazardly I stumbled into writing about technology. Several years ago now, I reflected a bit on the work of tech criticism, by which, of course, I mean not a reflexive hostility to technology but rather the effort to think well about its meaning and consequences. The gist of it was that, generally speaking, the tech critic thinks and writes for the sake of something other than technology itself, whether that be, for example, the environment, the just society, or the good life. And this is as it should be. Technology is properly a means toward an end, and we get into trouble precisely when it becomes an end in itself.
My writing, I suspect, probably reflects this rather unfocused interest in matters technological, cultural, and moral. Insofar as I have a “thing,” however, I’d say that it is an interest in exploring and applying the work of an older, now mostly forgotten tradition, loosely defined, of technology criticism. It’s an effort I once called, and still occasionally refer to as, the recovery of the tech critical canon. If it ever seems that I am saying something new or different, it is more likely the case that what I am saying is simply a variation on some older theme and that it appears novel or original only to the degree that it has been forgotten or ignored.
My explorations along these lines tend to alternate among different strands of this tradition. At times the media ecological strand is top of mind, and the work of scholars like Marshall McLuhan, Walter Ong, and Neil Postman becomes more prominent in my thinking. If you’ve been reading the newsletter of late, for the better part of this year really, you already know that my attention has more recently been mostly on the work of Ivan Illich, one of the few critics to whom I’d attach the epithet “radical,” suggesting one who gets to the root of things. The names of others come readily to mind—Lewis Mumford and George Grant among the dead and Albert Borgmann, Wendell Berry, Carl Mitcham, and Langdon Winner among the living. Hannah Arendt, while not ordinarily thought of as a philosopher of technology, strictly speaking, has also served me well in this regard.
Of late, I’ve had occasion to turn again to the work of Jacques Ellul, the 20th century French scholar, lay theologian, and sometime member of the French Resistance during the Second World War. Ellul was honored as one of the Righteous Among the Nations by the World Holocaust Remembrance Center for his labors on behalf of Jews fleeing the Nazi regime. While the homage to Illich’s notion of conviviality is evident in the title of this newsletter, I also intended to evoke the title of Ellul’s best known work, The Technological Society.
Of course, it is not that I think Ellul, Illich, or any of the other individuals I’ve mentioned were infallible in their judgments or that they foresaw all of the particular challenges we now face, but I do continue to be struck by the prescience of their analysis and the enduring relevance of their insights. I should add, of course, that I do highly value the work of contemporary writers, many of whom I read with considerable profit. The older critics, however, do have one decided advantage, and that is the advantage of not taking for granted the techno-social configuration that is more or less our default cultural setting and which thus inevitably shapes our thinking about technology. To be clear, this is not necessarily a matter of their keen insight or clear vision; it is, in large measure, simply a matter of chronological vantage point. They came up in a different age, so their experiences allowed them to perceive aspects of the modern technological project that we are more likely to miss if only because we have no similar point of contrast in our experience to illuminate the distinctive contours of our own age. What I’m suggesting, though, is that their work can become just such a point of contrast for us. At least, it has served that function for me (and I hope, by extension, for you as well). Many of their categories and frameworks remain useful, and they point us to alternative ways of being with technology. Hardly a day goes by in which I don’t find occasion to deploy the work of one of these thinkers to make sense of present circumstances.
As one example, while reading through Ellul recently, I was reminded of his discussion of what he called “technical humanism,” and it caught my attention because of the recent discussions of both the documentary The Social Dilemma and the Center for Humane Technology with which the film is associated. Several individuals connected with the center, former Google employee Tristan Harris most prominent among them, appear in the documentary about the social ills of social media.
In my feeds, people responded to The Social Dilemma, which came out in October, in much the same way that they responded to the Center when it was first launched a few years back, which is to say with more than a healthy dose of skepticism and frustration. As far as I can tell, a good bit of the frustration stemmed from the fact that the documentary, and to a similar degree the work of the center, ignored the labors of contemporary academics and activists, who had been raising the alarm about social media companies long before the wayward technologists experienced their ostensible moral awakenings. Fair enough, I say. But then I immediately think about how Ellul and company were likewise marginalized and even scorned, often by contemporary scholars, who were all too ready to dismiss their work. But this is merely a self-indulgent digression—back to what Ellul had to say about technical humanism.
Writing in The Technological Society, which was first published in 1954, Ellul noted that “the claims of the human being have thus come to assert themselves to a greater and greater degree in the development of techniques; this is known as 'humanizing the techniques.’” But Ellul, who had up to that point in his book gone to great lengths to demonstrate how technique had thoroughly captured society, was not impressed.
Ellul defined technique as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” Ellul understood that what mattered most about modern technology was not any one artifact or system, but rather a way of being in the world. This form of life or fundamental disposition precedes, sustains, and is reinforced by the material technological order.
So, Ellul went on, if we seek the “real reason” for humanizing technology “we hear over and over again that there is ‘something out of line’ in the technical system, an insupportable state of affairs for a technician. A remedy must be found.”
But, Ellul invites us to ask, “What is out of line?” “According to the usual superficial analysis,” Ellul answers, “it is man that is amiss. The technician thereupon tackles the problem as he would any other. But he considers man only as an object of technique and only to the degree that man interferes with the proper function of the technique.”
In other words, he continued, “Technique reveals its essential efficiency in discerning that man has a sentimental and moral life. These factors are, for technique, human and subjective; but if means can be found to act upon them, to rationalize them and bring them into line, they need not be a technical drawback. Of course, man as such does not count.”
This humanizing of technology presumes the existing techno-social status quo and ultimately serves its interests. It only amounts to a recalibration of the person so that they may fit all the more seamlessly into the operations of the existing techno-economic order of things. That techno-economic order is itself rarely questioned; it is taken mostly for granted, the myth of inevitability covering a multitude of sins.
I’m not sure we can say that contemporary proponents of humane technology reason precisely by this logic. But neither do I think that they avoid ending up in much the same place, practically speaking. Consider the proliferation of devices and apps, some of which the Center for Humane Technology promotes, which are designed to gather data about everything from our steps to our sleep habits in order to help us optimize, maximize, manage, or otherwise finely calibrate our bodies and our minds. The calibration becomes necessary because the rhythms and patterns of our industrialized and digitized world have proven to be inhospitable to human well-being, while, nonetheless, alleviating certain forms of suffering. One might say that while, for many, although certainly not all, modern technological society has managed to supply various material needs, it has been less adept at meeting many of our non-material needs. And it would be a serious mistake to imagine that only our material needs mattered. So now the same techno-economic forces present themselves as the solution to the problems they have generated. In Ellul’s terms, the answer to problems generated by technique is the application of ever more sophisticated and invasive techniques. The more general technological milieu is never challenged, and there’s very little by way of a robust account of what human flourishing might look like independent of the present technological milieu.
Ellul also has little time for the “professional humanists” who cheer on such solutions. “This procedure suits the literati, moralists, and philosophers who are concerned about the human situation,” he writes. “Unfortunately, it is a historical fact that this shouting of humanism always comes after the technicians have intervened; for a true humanism, it ought to have occurred before. This is nothing more than the traditional psychological maneuver called rationalizing.”
“It seems impossible to speak of a technical humanism,” Ellul concluded after some further discussion of the matter. It was more likely, in his view, that human beings would simply be forced to adapt to the shape of the technological system. “The whole stock of ideologies, feelings, principles, beliefs, etc. that people continue to carry around and which are derived from traditional situations,” these, Ellul believed, would only be conceived as unfortunate idiosyncrasies to be eliminated so that the techno-economic system may operate ever more efficiently. “It is necessary (and this is the ethical choice!) to liquidate all such holdovers,” he continued sarcastically, “and to lead humanity to a perfect operational adaptation that will bring about the greatest possible benefit from the technique. Adaptation becomes a moral criterion.”
One is reminded here of how tech enthusiast Robert Scoble recently tweeted his thanks to a man killed in a Tesla accident a couple of years back for his sacrifice, which, Scoble explained, helped improve the auto-pilot’s functioning. The tweet now appears to have been deleted. Scoble, who recorded his video message while riding with his children in a Tesla by the site of the accident, got a fair amount of heat for it. It is possible to imagine someone making the case for the inevitable costs of technological progress in a less callous if no less objectionable fashion, of course, and without also presuming to speak for the family of the victim. What struck me was the way Scoble spoke with almost religious fervor, as if technological progress was a transcendent value for which the sacrifice of a mere human life was an ultimately negligible price to pay. Indeed, one which the victim, unwitting as he no doubt was, ought to have been grateful to make. It was not enough, it would seem, to accept the tragedy. One must celebrate it.
Later, in the midst of the backlash, he tweeted, “Twitter is rough tonight but I have sailed rougher seas. People never understand the future at first.” One begins to imagine why Illich, when he was once asked to forecast the future, sharply replied, “To hell with the future! It’s a man-eating idol.”
Returning once more to Ellul, we find that in a 1983 article about ethics and technology he also recognized a problem which still plagues us but which few seem to acknowledge: those who call for ethical technology presume that human beings “must create a good use for technique or impose ends on it, but [are] always neglecting to specify which human beings.”
“Is the ‘who’ not important?” Ellul asked. “Is technique able to be mastered by just any passer-by, every worker, some ordinary person? Is this person the politician? The public at large? The intellectual and technician? Some collectivity? Humanity as a whole? For the most part politicians cannot grasp technique, and each specialist can understand an infinitesimal portion of the technical universe, just as each citizen only makes use of an infinitesimal piece of the technical apparatus. How could such a person possibly modify the whole?”
Needless to say, the situation has hardly improved on this score in the last 30+ years. In fact, technological systems have become ever more complex and our governing institutions more dysfunctional.
It’s worth noting that Ellul’s work was often dismissed by later scholars precisely because it attempted to consider “the whole,” to speak about technological society, to make judgements about the total techno-social package. This approach was rejected in favor of granular analysis of technological development, which avoided sweeping claims about something as vague as “the technological order.” This makes a certain amount of sense, and it has yielded valuable insights. But it came at the cost of missing the proverbial forest for the trees, ignoring larger patterns and cumulative effects. The contingencies evident at a micro-level of analysis compound into culturally formative currents. The complete technological milieu has a total effect that is greater than its constituent parts, just as the total effect of a work of fiction cannot be properly assessed merely by tabulating literary devices and figures of speech. And these effects include shifting assumptions, new habits and dispositions, the dissolving and reconstitution of the plausibility structures sustaining political values, the redrawing of the horizons of expectation and desire, restructurings of the social order, the reshaping of our imagination, and a reorientation of our experience of the world. None of which will be apparent from a social history of the refrigerator, however interesting such a tale might be.
Now, while readers of The Technological Society would be forgiven for assuming that Ellul was overly fatalistic, providing neither a path forward nor any measure of hope, that was not exactly true. It’s just that Ellul intended for readers to engage the whole of his corpus (over 40 books!) and read his sociological works in dialectic tension with his theological reflections, in which Kierkegaard and the Swiss theologian Karl Barth loom large. One might even say that, in this expectation, Ellul was, in fact, overly optimistic! In any case, he did make an argument for the value of freedom as it arises out of a condition of perceived necessity presented by contemporary technology. It was precisely against the background of necessity that freedom could exist.
To one interviewer he said, “I would say two things to explain the tenor of my writings. I would say, along with Marx, that as long as men believe that things will resolve themselves, they will do nothing on their own. But when the situation appears to be absolutely deadlocked and tragic, then men will try and do something.” (As odd as it may seem to some contemporary American readers, it could be said that Marx and Jesus were the two pillars of Ellul’s thought.)
“Thus it is,” Ellul went on to explain,
that I have written to describe things as they are and as they will continue to develop as long as man does nothing, as long as he does not intervene. In other words, if man rests passive in the face of technique, of the state, then these things will exist as I have described them. If man does decide to act, he doesn’t have many possibilities of intervention but some do continue to exist. And he can change the course of social evolution. Consequently, it’s a kind of challenge that I pose to men. It’s not a question of metaphysical fatalism.
Seen in this light, Ellul’s work was an effort not simply to instruct but also to provoke, and to provoke us toward the realization of a measure of freedom available only when we fully reckon with the reality that opposes it.
I would only add this note in closing. We ought to understand freedom as having two dimensions: freedom from and freedom for. Too often we fail to consider that freedom is fully realized only when it is conceived not only as a freedom from restraint, but also as a freedom to fulfill a deeper calling toward which freedom itself is but a penultimate means. The two are related but not identical. What Ellul would have us see is that the modern technological order tends to promise the former while simultaneously eroding the latter.
News and Resources
* An introduction and invitation to the work of the philosopher Maurice Merleau-Ponty (whose work I’ve found personally stimulating as I have thought about technology over the years): “Perceiver and perceived, then, are drawn into the cohesion of life. In the posthumous collection The Visible and the Invisible (1964), Merleau-Ponty wrote of the shared ‘interworld’ where ‘our gazes cross and our perceptions overlap’; it is here, he says, that the ‘intertwining’ of your life with other people’s lives is revealed. Far from a world of detached egos, or one of mere objects, what we encounter through embodied perception is this crisscrossing of lateral, overlapping relations with other people, other creatures and other things – an expressive space that exists between lived bodies. It’s not that we are all ‘one’, but that we inhabit a world in which, to quote the philosopher Glen Mazis, ‘things, people, creatures intertwine, interweave, yet do not lose the wonder that each is each and yet not without the others’.”
* A couple of short primers on Ellul I usually recommend. One by Samuel Matlack at The New Atlantis and the other by Doug Hill in the Boston Globe.
* Here is Carl Mitcham’s “Three Ways of Being-With Technology” alluded to in the essay above and from which the chart was taken.
* If I may be forgiven for recommending my own work, this was the first essay I wrote for The New Atlantis a couple of years back. It covers some of the same ground as this newsletter. This is one of those pieces, which, having looked back on it some time later, I still feel pretty good about. “We fail to ask, on a more fundamental level, if there are limits appropriate to the human condition, a scale conducive to our flourishing as the sorts of creatures we are. Modern technology tends to encourage users to assume that such limits do not exist; indeed, it is often marketed as a means to transcend such limits. We find it hard to accept limits to what can or ought to be known, to the scale of the communities that will sustain abiding and satisfying relationships, or to the power that we can harness and wield over nature. We rely upon ever more complex networks that, in their totality, elude our understanding, and that increasingly require either human conformity or the elimination of certain human elements altogether. But we have convinced ourselves that prosperity and happiness lie in the direction of limitlessness. ‘On the contrary,’ wrote Wendell Berry in a 2008 Harper’s article, ‘our human and earthly limits, properly understood, are not confinements but rather inducements to formal elaboration and elegance, to fullness of relationship and meaning. Perhaps our most serious cultural loss in recent centuries is the knowledge that some things, though limited, are inexhaustible.’”
* On the mystery of the Gatwick drone: “Most people with any interest in the Gatwick drone have already made their mind up. Either the initial sighting was a mistake, and subsequent sightings were the result of mass panic or confirmation bias, as proved by the technical unfeasibility of what was described. Or there was a drone, and the same technical challenges are evidence that it was an extremely sophisticated attack, one that we should be wary of dismissing.”
* A discussion of some of the main themes in the work of Walter Ong, “Looking Is Not Enough: Reflections on Walter J. Ong and Media Ecology” (h/t to Mike Plugh). This in particular merits reflection: “Ong argued that all technological mediation requires some level of interpretation. Where face-to-face interaction presented one interiority to another, the technologies of rhetoric added a distortion, so that people had to learn how to understand rhetorical products—at least according to Plato and Aristotle. Written texts demand more interpretation: what do these marks mean? This interpretation occurs both at the level of the code itself and at the level of the text. Printed texts include more helps to interpretation: type face, type style, page arrangement, the attention to visual patterns that influence thought, and so on. Products of secondary orality demand more, not less, interpretation since they involve a deception—the hiding of the text on which they depend. Digital materials, as being yet more abstract, require more interpretation. And so it goes. In Ong’s titular phrase, ‘Hermeneutic Forever.’” Bonus: Here is audio of a talk Walter Ong gave in 1972.
* A triptych: 1. “How Trees Made Us Human.” 2. “The Social Life of Forests.” 3. Alan Jacobs’s recently re-designed site, “The Gospel of the Trees.” From the second piece: “Although plants are obviously alive, they are rooted to the earth and mute, and they rarely move on a relatable time scale; they seem more like passive aspects of the environment than agents within it. Western culture, in particular, often consigns plants to a liminal space between object and organism. It is precisely this ambiguity that makes the possibility of plant intelligence and society so intriguing — and so contentious.” Yet not one mention of ents!
* Article by Jacques Ellul titled “The Technological Order.” Published in 1962, two years before The Technological Society would appear in English. Useful introduction to his work. “Since Technique has become the new milieu, all social phenomena are situated in it. It is incorrect to say that economics, politics, and the sphere of the cultural are influenced or modified by Technique; they are rather situated in it, a novel situation modifying all traditional social concepts. Politics, for example, is not modified by Technique as one factor among others which operate upon it; the political world is today defined through its relation to the technological society. Traditionally, politics formed a part of a larger social whole; at the present the converse is the case.”
* On “the peril of persuasion in the Big Tech age.” This article focuses on the dangers posed by novel digital technologies of persuasion, chiefly that citizens may be manipulated and misled by finely tuned and targeted misinformation. Fine as far as it goes, but it seems to me that the more significant consequence of the digital media environment will be that the ideal of a public sphere ordered around persuasion will itself become implausible to a growing number of people. The divide will be between those who continue earnestly presuming such an ideal and those willing to take advantage of such earnestness for their own ends.
* Long-ish piece on why time management is ruining our lives. I suspect most of you reading this don’t need to be convinced of the titular claim. Here’s one brief excerpt: “The allure of the doctrine of time management is that, one day, everything might finally be under control. Yet work in the modern economy is notable for its limitlessness. And if the stream of incoming emails is endless, Inbox Zero can never bring liberation: you’re still Sisyphus, rolling his boulder up that hill for all eternity – you’re just rolling it slightly faster.” Now here is Ellul from the essay linked just above this one: “It is useless to hope that the use of techniques of organization will succeed in compensating for the effects of techniques in general; or that the use of psycho-sociological techniques will assure mankind ascendancy over the technical phenomenon.” Part of the problem, of course, is that technique itself forms our environment and shapes our moral imagination. The appeal of discrete instances of technique lies in their promise of fulfilling the logic of Technique writ large, which goes by many names, such as Productivity or Efficiency. But these are at best means to an end that have been taken as ends in themselves. We have no clear sense of where we ought to go, but we’re sure that we ought to be getting there faster and more efficiently.
* On “the coming war on the hidden algorithms that trap people in poverty”: “Low-income individuals bear the brunt of the shift toward algorithms. They are the people most vulnerable to temporary economic hardships that get codified into consumer reports, and the ones who need and seek public benefits. Over the years, Gilman has seen more and more cases where clients risk entering a vicious cycle. ‘One person walks through so many systems on a day-to-day basis,’ she says. ‘I mean, we all do. But the consequences of it are much more harsh for poor people and minorities.’”
* Shannon Mattern on the cultural history of plexiglass: “Plexi pairs visual access with physical distance in order to sanction exchange: the handover of money or goods, the serving of food, the verification of identity and confirmation of action, the transmission of messages (albeit through somewhat muffled voices and blurred facial expressions). The presence of plexi prompts us to suspend our fear of contamination while we engage in necessary transactions. Its assurances, even if partly a matter of ‘security theater,’ can serve vital cultural and economic functions: they keep us shopping, going out to eat, attending class, congregating at political rallies. But it is also through plexiglass that Americans have, for decades, been negotiating social tensions and civil unrest.”
Re-framings
— From the conclusion of Xiaowei Wang’s reflections on factory farming:
Life outside may not always be grandiose, visible, or permanent, but as the constant failed attempts at biosecurity show, nothing is steady or stable. As COVID-19 continues, from afar I get glimpses of the village on social media—while my life in the city has been suspended, they continue to plant rice, raise chickens, and make rice wine. Life outside requires a focus on mutual care; a vocabulary of tending to the future that we increasingly hear calls for; a kind of thoughtfulness that asks us to attend to the present moment and the communities we are accountable to. Life outside requires us, as urban dwellers, to think outside, too—outside ourselves and our cities. To think of life outside, beyond containment, is an experiment in imagining new forms of security beyond the kind shaped by market forces.
— I suspect that many of you already knew about the 1983 film Koyaanisqatsi, which is the Hopi word for “life out of balance.” I did not, until recently, when I also learned that the director, Godfrey Reggio, was inspired by the work of Ellul and Illich. My thanks to Madhu Prakash for pointing me to it. Here is a short interview with Reggio about the film.
The Conversation
Folks, it’s been quite a year. I’ve got nothing else to say for it presently, except that it is almost behind us. The Convivial Society, having lumbered through the last part of this year, will enter its second year with renewed focus and perhaps a few tweaks. More about that next time.
I usually include links to things I’ve written here, so here’s one to a recent essay on Ivan Illich: “The Skill of Hospitality.” Also, it was a year ago that I put together a collection of my writing on the old blog, which you can find here.
Finally, I’ve recently found a few email replies to the newsletter in my spam folder. This has happened before, but it’s been a while and I’ve been remiss in reviewing what’s been going on in there. So, I’ll do my best to get back to those in the coming days, although it will likely be slow-going through the holidays.
My best to all of you. May this season bring you and yours a measure of joy.
Cheers,
Michael
You can listen to an audio version of the essay by clicking Play above.
Jane Jacobs opened her mid-twentieth century classic, The Death and Life of Great American Cities, with a discussion of the peculiar nature of cities. In the course of this discussion, she devoted all of three chapters to a single aspect of city life: the uses of sidewalks. I’ve always found the second of these three chapters especially interesting. In it, Jacobs examines the myriad incidental contacts generated among people who daily make use of a shared sidewalk—the nods, the smiles, the brief conversations, etc.
Let’s think for a bit about Jacobs’s analysis and take the sidewalk as an example of what I recently called the material infrastructure of social life, and, more specifically, as a space of public rather than private consequence. As Jacobs observes, the point of “the social life of city sidewalks” is precisely that it is public*, bringing together, as she puts it, “people who do not know each other in an intimate, private social fashion and in most cases do not care to know each other in that fashion.”
In other words, Jacobs is describing the multiple, usually brief and inconsequential, contacts people who live on the same city street will have with one another over an extended period of time. These contacts are mostly with people who are not necessarily part of our circle of friends, but who, because of these contacts, become something more than mere strangers. And that seems like a crucial, often ignored category because it informs, as Jacobs recognizes, whatever vague understanding we have of the public writ large.
Jacobs acknowledges, of course, that, taken separately, these contacts amount to nothing, but the sum of such contacts, or their absence, becomes absolutely consequential. At stake, in her view, is nothing less than the trust that is essential to any functioning civic body.
“The trust of a city street,” she writes, “is formed over time from many, many little public sidewalk contacts. It grows out of people stopping by at the bar for a beer, getting advice from the grocer and giving advice to the newsstand man, comparing opinions with other customers at the bakery and nodding hello to the two boys drinking pop on the stoop ….”
Again, as Jacobs explains, most of these contacts are “utterly trivial but the sum is not trivial at all. The sum of such casual, public contact at a local level—most of it fortuitous, most of it associated with errands […]—is a feeling for the public identity of people, a web of public respect and trust, and a resource in time of personal or neighborhood need.”
“The absence of this trust is a disaster to a city street,” she insists. And, what’s more, “Its cultivation cannot be institutionalized.” (I’ll trust you to fill in the Illichian digression on that last observation!)
In the course of her analysis of the little publics sustained by the city sidewalks, she also offers an astute observation about the nature of suburban social life. As a built environment, the suburbs make it very difficult to cultivate the casual acquaintanceship generated by the countless contacts that inevitably arise from the shared city sidewalk. In a suburban setting, you either invite people into your home or, with vanishingly few exceptions, they remain strangers altogether, and Jacobs is realistic about how few people we are likely to invite into our homes. The materially induced tendency, then, is to know relatively well those who are most like us and, those who are not, hardly at all. What is lost, we might say, is the category of what Aristotle called civic friendship.
All of which raises the question: From where exactly will “a feeling for the public identity of people” arise? How might “a web of public respect and trust” be fostered? We’ll come back to that in a bit, but first let’s consider how these dynamics have been impacted by our contemporary technological milieu.
On this score, I’m particularly struck by the degree to which we are encouraged to displace or outsource the sorts of micro-interactions that generate the human contacts Jacobs finds so valuable. Sometimes this is a matter of unintended consequences; sometimes it is a matter of intentional design and expressed preferences.
As an example of the former, consider one unintended consequence of GPS. We tend to think of GPS displacing the paper map, which is true enough but not quite the whole story. The paper map was not the only method we used to find our way when we needed directions. We were just as likely to ask someone. And if we lost our way or our directions proved inadequate, we’d likely pull over or stop someone to ask again. In other words, in circumstances where we would have had occasion to interact with another person, we now turn to a device.
As an example of the latter, consider the move toward automated tellers, online banking, or self-checkouts in retail spaces. In these cases, a fairly common opportunity for a brief human interaction has been lost. The proliferation of home delivery services and online retail likewise promises to relieve us of the need to venture out into the spaces that previously presented us with opportunities for casual human contacts. We’ve tended to frame these developments with questions about employment and labor, which are perfectly legitimate frames of analysis, but Jacobs encourages us to imagine a different kind of cost, one that is much harder, if not impossible, to quantify.
Consider as well how digital devices confront us with the subtle temptations of telepresence. We have the capacity, and perhaps the proclivity, to take partial leave of our immediate surroundings, including a tacit permission to forgo the sorts of contacts Jacobs discussed by presenting as one who is presently conducting business elsewhere or otherwise preoccupied.
In fact, the trajectory toward a situation where we find ourselves ensconced within relatively comfortable zones of affinity, familiar at first hand chiefly with those who are mostly like us, is longstanding. One might see it, as Richard Thomas did in a manner not altogether dissimilar from Jacobs’s analysis of the sidewalk, in the architectural shift from front porches to back patios, and all that such a shift entails and implies about our social lives.
As Thomas observed, “Nineteenth century families were expected to be public and fought to achieve their privacy. Part of the sense of community that often characterized the nineteenth-century village resulted from the forms of social interaction that the porch facilitated. Twentieth-century man has achieved the sense of privacy in his patio, but in doing so he has lost part of his public nature which is essential to strong attachments and a deep sense of belonging or feelings of community.” The chef’s kiss comes with the advent of the doorbell camera, which casts our gaze into the public as a mode of surveillance rather than civic interest.
It’s not that any one instance of these cases is significant or necessarily “wrong.” Rather, as Jacobs suggested, it is the case that they become consequential in the aggregate. In other words, we should be attentive to the sort of people we become as a result of the mundane social liturgies, engendered by our material environment, which we daily enact with little or no reflection.
I should grant that Jacobs had in view not merely chance interactions, but recurring encounters with people who shared a city block over time and thus would gradually become familiar to one another. For those who have not lived on a city block in this manner, of course, these recently outsourced human interactions are simply a further attenuation of our public lives, that is to say of our lives insofar as they intersect with those who are not a part of our private circles.
But let’s return to the question of how we imagine the public when we have so severely constricted the contacts we might have had with those who are not part of these circles. What most interested me in Jacobs’s discussion was her insistence that these casual sidewalk contacts were mostly with people with whom we do not ordinarily desire any deeper relationship. Given the material structure of suburban life, people tend to operate with two categories of relationships: those they know relatively well and those who remain strangers altogether. There is little or no space in between. And, naturally, those in the class of people we know relatively well would tend to be more like us than not. All of which is to say that some of our most pronounced “filter bubbles” emerged long before the advent of social media.
What matters here is that we will still operate with some mental model of the other. We will still conjure up some generalizations about the people who are not like us. When we enjoy a high frequency of contacts with the public such that some of them become more than mere strangers although less than friends, then our conception of the public is anchored in particular flesh and blood human beings, thus, in theory, tethering our imaginings more closely to some approximation of reality.
However, when we lack such contacts, when our experience of others too readily divides into friends and strangers, then our image of the public tends toward abstraction, a blank screen onto which it may be tempting to project our fears, suspicions, and prejudices or, perhaps more benignly and naively, our own values and assumptions.
But the situation seems to be a bit worse than that. It’s not just that we lack the sidewalk as Jacobs experienced it, or some similar public space, and are thus left with a wholly inchoate image of the public beyond our affinity groups. It is, rather, that our digital media feeds and timelines have become our sidewalk; our trivial and incidental contacts, very different indeed from those Jacobs observed, transpire on digital platforms. This has turned out to be, how shall we say, a suboptimal state of affairs.
The problems are manifold. We are tempted to mistake our experience of a digital media platform for the full breadth of reality. While in-person contacts tend to be governed by operative social norms, digital platforms foster a comparatively high degree of irresponsible and anti-social behavior. Untethered by civic friendships, our image of the public may be filled in for us by those who have an expressed interest in sowing division and fear. Relatedly, and perhaps most significantly, social trust craters on digital media. It would be hard to overstate the damage done by the weaponization of bad faith at the scale made possible by digital media. It's far worse than the mere proliferation of lies. It undermines the very plausibility of a politics sustained by speech. And it is utterly untouched by our precious fact-checking.
In short, our most public digital sidewalks tend toward open hostility, rancor, and strife. Little wonder, then, that so many are fleeing to what we might think of as the digital suburbs: relatively closed, private, and sometimes paywalled spaces we share with our friends and the generally like-minded. But I suspect this will do little for the public sphere that we will still share with those who remain outside of our circles, be they digital or analog, and who do not share our values and assumptions.
Civic virtues, as it turns out, do not spring up out of nowhere. All virtues and vices arise from habits engendered by practices, which, in turn, reflect the material infrastructure of our social lives. Right now it seems as if that infrastructure is increasingly calibrated to undermine the possibility of civic friendship. Which brings us back, once again, to Ivan Illich staking his hope on the practice of hospitality: “A practice of hospitality recovering threshold, table, patience, listening, and from there generating seedbeds for virtue and friendship on the one hand. On the other hand radiating out for possible community, for rebirth of community.”
* It’s worth noting Sara Hendren’s comment in What Can A Body Do? on Jacobs’s discussion of the sidewalk and the unstated assumptions about accessibility: “But you can only see and be seen, only get into and out of the shared public of life of the world, if you can get down the sidewalk in the first place.”
“To be sane in a mad time
is bad for the brain, or worse
for the heart.”
— Wendell Berry, “The Mad Farmer Manifesto: The First Amendment”
I’ve been reading a fair amount about the meaning and significance of place over the last several weeks, and in the course of that reading I encountered an observation made almost in passing by the renowned Chinese-American geographer, Yi-Fu Tuan. “In the past,” Tuan wrote, “news that reached me from afar was old news. Now, with instantaneous transmission, all news is contemporary. I live in the present, surrounded by present time, whereas not so long ago, the present where I am was an island surrounded by the pasts that deepened with distance.”
I find historical observations of this sort instructive. They need not be profound, and, upon reflection, they tend to have an “of course that was the case” quality to them. But, that said, they are not, in fact, the kind of thing we routinely think about. The value of such observations lies in the striking point of contrast they offer to our situation, which then allows us to perceive more clearly an aspect of our experience that is so thoroughly ordinary we are tempted to think that this is just the way things have always been and, hence, must always be. Until, that is, a simple observation suddenly reveals the historical contingency of our situation and, consequently, affords us the simultaneously obvious but potentially revolutionary realization that things could be otherwise.
In this case, Tuan reminds us that until relatively recently, roughly the middle of the nineteenth century, the speed at which news or information could reach us was meaningfully correlated to place—the greater the distance the longer it took for news to get to you. News from afar, like light from distant stars, was always from the past.
The results of this correlation could be unfortunate, of course—recall, for instance, the Battle of New Orleans, which was fought nearly three weeks after the War of 1812 was formally concluded. But, at the same time, place and distance acted as filters of sorts on reality, concentrating a person’s attention, by default as it were, upon the world before them, which may now strike us as a feature rather than a bug. In a recent conversation with a student about Tuan’s observation, she put it this way: place, and by implication distance, regulated our information intake. It was likely that we would know most, and first, about what was nearest (and likely dearest) to us.
The contrast with our situation could hardly be more pronounced, of course. Not too long ago, for example, regardless of where you were in the world, if you happened to be on Twitter at the right time, you would have seen several videos of a massive explosion in Beirut mere minutes after it happened, followed, of course, by wildly speculative real-time commentary about its causes and consequences. This is but one relatively vivid and memorable example out of the innumerable cases we encounter daily.
Our present digital deluge of indiscriminately instantaneous information is not altogether without precedent. It lies rather on a trajectory that has already taken us through the age of electronic media. In the mid-1980s, for example, Joshua Meyrowitz noted a familiar pattern. “Nineteenth century life,” Meyrowitz observed in No Sense of Place,
entailed many isolated situations and sustained many isolated behaviors and attitudes. The current merging of situations does not give us a sum of what we had before, but rather new, synthesized behaviors that are qualitatively different. If we celebrate our child’s wedding in an isolated situation where it is the sole ‘experience’ of the day, then our joy may be unbounded. But when, on our way to the wedding, we hear over the car radio of a devastating earthquake, or the death of a popular entertainer, or the assassination of a political figure, we not only lose our ability to rejoice fully, but also our ability to mourn deeply.
The kind of incident Meyrowitz described is, of course, no longer limited to moments when we have access to broadcast media like radio or television. Upon reading this paragraph I naturally thought about the emotional roulette we play each time we glance at our social media feeds, which are always with us. You never quite know what news you’ll encounter and how it will mess with you for the rest of the day. As a recent song I rather like puts it, “Turning on my phone was the first mistake I made.”
In other words, ubiquitous connectivity means that we experience very few “isolated situations,” in Meyrowitz’s sense, and that we inhabit a psychic realm of perpetual affective dissonance and discord, buffeted by unrelenting crosswinds of data and information.
It’s a small quibble, but I’m not sure the word isolated is the word I’d use here. We tend to think of isolation as a generally negative experience giving the word a pejorative connotation. I’d prefer to speak about the integrity of a situation, how it holds together as a distinct experience. What Meyrowitz is describing, and what digital media accentuates, is the loss of situational integrity entailed by the varieties of tele-presence enabled by digital technology. The boundaries of my situation are always fuzzy and permeable. My here is always saturated by countless elsewheres. Place fails to bound my now.
And it is not only a matter, as in his example, of experiencing the full and singular emotional depth of an occasion. To take another instance of the same pattern, it is also true, as has been frequently noted, that the boundaries between work and rest have likewise blurred so that we tend to do neither well, assuming we enjoy the sort of work which can and ought to be done well.
Meyrowitz premised his analysis of electronic media on a fusion of the frameworks provided by Marshall McLuhan and sociologist Erving Goffman, who theorized human identity as a function of the roles we play in a variety of front stage and back stage settings. But Goffman assumed these settings would be bounded in place with relatively clear and concrete boundaries, the door separating the seating area of a restaurant from the kitchen, for example. We knew where we were and thus how to be. McLuhan’s work taught Meyrowitz that electronic media dissolved boundaries of that sort, generating a measure of disorientation with regard to where and when we are, which in turn throws our sense of who we are and how we ought to be into a bit of confusion.
“The electronic combination of many different styles of interaction from distinct regions,” Meyrowitz concluded, “leads to new ‘middle region’ behaviors that, while containing elements of formerly distinct roles, are themselves new behavior patterns with new expectations and emotions.” It would seem that this middle region, as Meyrowitz puts it, is now more or less where we live, to the degree we adopt the default settings of our techno-cultural moment. Consequently, we are all, with mixed results, improvising and navigating our way through it.
But let’s come back to the idea that place and distance, which is another way of saying the parameters of experience drawn by the body, regulated our information economy with regard to both the quantity of information we encountered and its quality. By quality I mean not only whether the information was good information, which is to say accurate or truthful, but also relevant, important, pertinent, or personally valuable. Whatever the relative merits of such a situation or whatever ills of provincialism it may imply, what strikes me is the degree to which such filters were simply given. We might think of them as default settings about which we would have been largely unreflective. We, on the other hand, bear the epistemic and affective burdens of information superabundance regardless of whether we deem information superabundance itself to be a blessing or a curse. Either way, we have to grapple with its personal and social consequences.
The burdens I have in view, of course, are those we now routinely associate with filtering and managing flows of information—a task which invites the constant deployment of new tools and techniques, which, in turn, often have counter-productive effects. Clearly, these are not altogether novel burdens; we may find complaints about the sort of thing we think of as “information overload” in connection with printing, but they are hardly getting easier to bear. And these burdens are not merely cognitive. They are affective as well. Tending to our information ecosystem, if we attempt it at all, requires a striking degree of vigilance and discipline. And as we noted at the outset, there is no given balance between place and speed, no natural context of relative meaningfulness to regulate the pace and quality of information for us. It’s on us to do so, daily, often minute by minute. We exist in a state of continuous and conscious attention triage, which can be exhausting, disorienting, and demoralizing.
Doomscrolling is one symptom of the general condition, but the habit predates 2020. It’s what happens when we give ourselves over to the flood of information and allow it to wash over us. Whatever else one may say about doomscrolling, it seems useful to think of it as structurally induced acedia, the sleepless demon unleashed by the upward swipe of the infinite scroll (or the pulldown refresh, if you prefer). Acedia is the medieval term for the vice of listlessness, apathy, and a general incapacity to do what one ought to do; ennui is sometimes thought of as a modern variant. As we scroll, we’re flooded with information and, about the vast majority of it, we can do nothing … except to keep scrolling and posting reaction gifs. So we do, and we get sucked into a paralyzing loop that generates a sense of helplessness and despair.
To further clarify our situation, consider W. H. Auden’s discussion, which I’ve cited before, of the idea that, as he put it, “the right to know is absolute and unlimited.” “We are quite prepared,” Auden wrote,
“to admit that, while food and sex are good in themselves, an uncontrolled pursuit of either is not, but it is difficult for us to believe that intellectual curiosity is a desire like any other, and to recognize that correct knowledge and truth are not identical. To apply a categorical imperative to knowing, so that, instead of asking, 'What can I know?' we ask, 'What, at this moment, am I meant to know?' — to entertain the possibility that the only knowledge which can be true for us is the knowledge that we can live up to — that seems to all of us crazy and almost immoral.”
Before the advent of electronic media, the limits associated with being a body in place made it more likely that the knowledge we encountered was also knowledge that we could live up to, in the sense that Auden is commending here. In a digital media environment, it is not simply the case that we might be tempted to deliberately, in some Faustian sense, search out knowledge we cannot live up to; we are, in fact, overwhelmed by such knowledge. The idea of knowledge I can live up to implies a capacity to discern a meaningful correlation among the knowledge in question, my situation, my abilities, and my responsibilities, but this capacity is precisely what is overwhelmed in our media ecosystem. Hence the ensuing state of acedia.
We have ordinarily thought of the dynamics I’ve been discussing under the rubric of information overload, but I think it’s worth pursuing a slightly different line of thought. To think in terms of information overload is already to think in terms of the human being as an information processing machine. I’d prefer to start with the recognition that whatever else we may be, we are bodies, and that the conditions of our embodiment present us with a set of limits we may choose to either respect or ignore. Relatedly, I’ve been contemplating a thesis of late: that the body has been the root of all human understanding, but that this has been changing. So the real challenge we face is that of inhabiting a human-built world wherein the body can no longer ground understanding and may even be experienced as a liability. I suspect there will be more along these lines in future installments. Stay tuned.
News and Resources
* More from Frank Pasquale this time around. This one is an interview at Commonweal titled “What Robots Can’t Do.” I was especially heartened by reading this line: “There is a fundamental equality among them, a common dignity grounded in our common fragility.” In context, Pasquale is discussing the difference between having a human teacher and a robot “teacher,” but the idea of grounding a common dignity in our common fragility resonated with me.
* Timely reading from Geoff Shullenberger, “Put Not Thy Trust in Nate Silver,” a review of Jill Lepore’s new book, If Then: How the Simulmatics Corporation Invented the Future, which might be summed up as #JeanBaudrillardWasRight: “When reality and a model of that reality appear to be mismatched, in other words, we may discard the model, or we may discard reality. Baudrillard argues that we have collectively discarded reality. The cultural logic of simulation has altered the epistemic framework that determines what is real, leaving us with the hyperreal where the real once was.”
* Back in July, Aaron Lewis wrote a widely-read essay on memory and our sense of time in the digital age, “The Garden of Forking Memes.” Regrettably, I’ve just recently gotten around to reading it because I kept waiting for a chunk of time to be able to sit with it for a while, which, as you all know, isn’t always forthcoming these days. In any case, it’s an insightful piece and you should read it. As you may have noticed, any effort to understand our situation which focuses on how we remember (or don’t) will always get my attention: “If, as Marx once wrote, the grievances of all dead generations weighs like a nightmare on the brains of the living, the ‘perfect memory’ of digital media has made this burden all the more weighty. Creating a stable political arrangement atop technologies of total recall will be a tall task. Our systems of governance were built for a world of extreme information asymmetry. Educated elites controlled the flow of information and kept old ghosts at bay. Now, the floodgates have been opened, and the Big Mood is one of temporal confusion and disorientation — we no longer feel like we’re marching steadily forward from the past into the future. There’s a massive subreddit devoted to documenting ‘glitches in the Matrix’; a new science of Progress Studies that’s trying to cure our End-of-History malaise; a whole entire subculture of Doomers who don’t believe there will be such a thing as history in the future.”
* Kelly Pendergrast in Real Life on the temptations of anthropomorphized robots, “robot’s friendliness or cuteness is something of a Trojan horse—an appealing exterior that convinces us to open the castle gates, while a phalanx of other extractive or coercive functions hides inside.” More: “The robot is not conscious, and does not preexist its creation as a tool (the zombie was never a friend). The robot we encounter today is a machine. Its anthropomorphic qualities are a wrapper placed around it in order to guide our behavior towards it, or to enable it to interact with the human world. Any sense that the robot could be a dehumanized other is based on a speculative understanding of not-yet-extant general artificial intelligence, and unlike Elon I prefer to base my ethics on current material conditions.”
* In England, citizens have created hedgehog “highways” through enclosed properties in an effort to boost the declining hedgehog population: “The highway is an eccentric delight – stone steps, hedgehog cutouts and little signs like ‘Church’ for any hedgehog that can read. The ramp at Peter Kyte and Zoe Johnson’s house is 85cm tall and believed to be one of the biggest in the country. Last year the couple put out their night camera and captured visits most nights. ‘One or two of them are quite tubby and got stuck at the bottom,’ says Peter. One video of a hedgehog using their ramp has been viewed 33,000 times.”
* Sociologist Zeynep Tufekci recently launched a newsletter. I used to think that I got in on the newsletter thing a bit late; lately, it’s starting to feel like I actually got in early. In any case, Tufekci is a consistently sane, clear, and insightful writer. On the pandemic, she has been indispensable. I’d recommend subscribing if you’ve not done so already.
* On human illumination as a source of pollution: “Artificial light should be treated like other forms of pollution because its impact on the natural world has widened to the point of systemic disruption, research says …” “In all the animal species examined, they found reduced levels of melatonin – a hormone that regulates sleep cycles – as a result of artificial light at night.” [Narrator: we humans, too, are an animal species.] “At the heart of this is a deep-rooted human need to light up the night. We are still in a sense afraid of the dark,” he said. “The ability to turn the night-time into something like the daytime is something we have pursued far beyond the necessity of doing so.”
Re-framings
— From Ivan Illich’s “The Rebirth of Epimethean Man,” the essay which closes Deschooling Society:
To the primitive the world was governed by fate, fact, and necessity. By stealing fire from the gods, Prometheus turned facts into problems, called necessity into question, and defied fate. Classical man framed a civilized context for human perspective. He was aware that he could defy fate-nature-environment, but only at his own risk. Contemporary man goes further; he attempts to create the world in his image, to build a totally man-made environment, and then discovers that he can do so only on the condition of constantly remaking himself to fit it. We now must face the fact that man himself is at stake.
— This is C. S. Lewis writing in a letter to a friend dating from the middle of the last century. Naturally, I’ll leave it to each of you to navigate the religious element, but I think the general principle is widely applicable:
“It is one of the evils of rapid diffusion of news that the sorrows of all the world come to us every morning. I think each village was meant to feel pity for its own sick and poor whom it can help and I doubt if it is the duty of any private person to fix his mind on ills which he cannot help. (This may even become an escape from the works of charity we really can do to those we know.)

A great many people do now seem to think that the mere state of being worried is in itself meritorious. I don’t think it is. We must, if it so happens, give our lives for others: but even while we’re doing it, I think we’re meant to enjoy Our Lord and, in Him, our friends, our food, our sleep, our jokes, and the birds’ song and the frosty sunrise.

As about the distant, so about the future. It is very dark: but there’s usually light enough for the next step or so.”
The Conversation
“As about the distant, so about the future. It is very dark: but there’s usually light enough for the next step or so.” I thought that line worth repeating, and, indeed, may it be so for all of us.
I’ve got the next installment, or possibly a dispatch, in the draft folder, so the pace might be picking up a bit around here this month.
As always, feel free to reach out via email. I can be a bit slow to reply depending on when the email catches me, but I very much appreciate hearing from you all. And, as always, please do consider passing this newsletter along to anyone you think might find it useful.
Cheers,
Michael
“I do think that if I had to choose one word to which hope can be tied it is hospitality. A practice of hospitality— recovering threshold, table, patience, listening, and from there generating seedbeds for virtue and friendship on the one hand — on the other hand radiating out for possible community, for rebirth of community.” — Ivan Illich, interview (1996)
[Welcome back friends to the Convivial Society. This latest installment has been a while in coming, and it’s not short. The gist of it is this: thinking with Arendt about the material dimensions of a common world and a common sense with a view to better understanding our experience of digital culture. I hope you find it helpful.]
I’ve been thinking about tables of late, literally and figuratively. Chiefly, what I’ve had in mind is the table as an emblem of hospitality, and, relatedly, as an example of the material infrastructure of our social lives or the stuff of life that sustains and mediates human relationships. This owes something, of course, to the great importance Ivan Illich placed on hospitality, especially as it took shape around a table. But here I’m turning to the work of another theorist in order to think through some of the more vexing and at times disturbing features of public life.
Thinking about the table has drawn me back to Hannah Arendt’s The Human Condition, first published in 1958. This work is notable for Arendt’s discussion of the distinctions among what she calls the private, public, and social realms. The political arena of the ancient Greek polis was her model for the public. The private realm was the realm of the household. The social realm was a more recent development, it was the realm of mass society. It was not a private realm, but neither was it a realm in which the individual could meaningfully appear in the full integrity of her particularity. I won’t take the time to explain those distinctions at greater length here, except as they relate to Arendt’s use of the table as a recurring metaphor, a metaphor which will, I think, usefully illuminate aspects of our digitally mediated experience. I suspect, in fact, that ultimately it would be useful to develop a fourth category, the digital, to extend Arendt’s analysis of the private, public and social. You might take what follows as some initial thinking toward that end.
The Common World of Things
Arendt’s figurative use of the table had tucked itself away in my mind from the time I first read The Human Condition around 2010. It always struck me as an evocative image, but it was not until recently that I began to see more clearly its significance.
“To live together in the world,” Arendt wrote in the paragraph that first caught my attention, “means essentially that a world of things is between those who have it in common, as a table is located between those who sit around it; the world, like every in-between, relates and separates men at the same time.”
So there it is: our life together is built upon a world of things, which, like a table, gathers and distinguishes us. The point may at first seem somewhat trivial, but we’ll find that there’s some depth here as soon as we start unpacking Arendt’s argument.
These lines I just cited appear in the course of Arendt’s discussion of the public realm and its relation to the world. Both of these terms, public and world, are technical terms in her work.
“The term ‘public’ signifies the world itself,” she explains, “in so far as it is common to all of us and distinguished from our privately owned place in it.” She goes on to clarify that the world is not simply synonymous with the earth, which she thinks of as related to our “organic life.” The world, in her sense, is related “to the human artifact, the fabrication of human hands, as well as to affairs which go on among those who inhabit the man-made world together.”
We might say that the world as she means it is more or less co-extensive with what the historian Thomas Hughes called the human-built world—it is our cultural habitat and also what I’m calling the material infrastructure that sustains it. In this light, then, the table is not simply a metaphor, it is a case in point, a microcosm of the larger social order, which itself takes shape around an array of material artifacts.
We’ve already seen that for Arendt the world of things that constitutes the public is like a table in that it alternatively gathers, relates, and separates individuals. In other words, by virtue of being around a table a set of individuals are simultaneously related together as a group while also distinguished from one another. It is a role played by all the elements that make up the material infrastructure of social life. The question we need to bear in mind, of course, is this: How exactly are we being gathered and how exactly are we being related to one another?
Permanence and Stability
“The existence of a public realm,” Arendt observed, “and the world's subsequent transformation into a community of things which gathers men together and relates them to each other depends entirely on permanence.”
Here we once again encounter the table, or, at least, what the table illustrates: a gathering and relating of individuals. This gathering and relating function is attributed to a community of things, which I’m reading as a network of materiality mediating human relationships. The curious additional insight is the indispensable quality of permanence, a feature that also speaks to a distinct mode of materiality.
“If the world is to contain a public space,” Arendt argues, “it cannot be erected for one generation and planned for the living only; it must transcend the life-span of mortal men.” Further on, she writes,
“The common world is what we enter when we are born and what we leave behind when we die. It transcends our lifespan into past and future alike; it was there before we came and will outlast our brief sojourn in it. It is what we have in common not only with those who live with us, but also with those who were here before and with those who will come after us.”
Arendt’s insistence on a measure of permanence and stability across time recalls Simone Weil’s discussion of a stable ground upon which a human life may be rooted. In The Need for Roots, Weil argued that rootedness was an essential human need and, she added, “a human being has roots by virtue of his real, active and natural participation in the life of a community which preserves in living shape certain particular treasures of the past and certain particular expectations for the future.”
Like Arendt, Weil is here insisting upon a trans-generational common world, although she is less explicit about its material base. And, yes, of course, I’ll add that Ivan Illich made similar observations. In discussing society’s substitution of the better for the good, for example, Illich warns that “at this point the balance among stability, change, and tradition has been upset; society has lost both its roots in shared memories and its bearings for innovation.” Note especially the past/future orientation of that last clause, and, perhaps especially, the notion of having “bearings for innovation.” Another subject for another day.
But one last note on the matter of permanence: For Arendt the permanence of the world of things not only grounds our common experience of the world but also human identity. “The things of the world have the function of stabilizing human life,” Arendt wrote, “and their objectivity lies in the fact that … men, their ever-changing nature notwithstanding, can retrieve their sameness, that is, their identity, by being related to the same chair and the same table.”
But let’s turn now to the epistemic implications of Arendt’s notion of a common world.
A Common Sense
The world of things turns out to have important psychological and epistemological functions in Arendt’s analysis, and this is where I think her line of thinking gets really interesting. We might say that Arendt takes the world of common things to be an epistemic backstop that keeps us from sliding into pure subjectivism, nihilism, or egoism. As we’ll see in a moment, a world of common things grounds a common sense.
So, for example, she writes,
“The presence of others who see what we see and hear what we hear assures us of the reality of the world and ourselves, and while the intimacy of a fully developed private life, such as had never been known before the rise of the modern age and the concomitant decline of the public realm, will always greatly intensify and enrich the whole scale of subjective emotions and private feelings, this intensification will always come to pass at the expense of the assurance of the reality of the world and men.”
This is quite a remarkable claim. The inverse correlation she posits between an intensification of subjective emotion and private feeling, on the one hand, and an assurance of the reality of the world on the other seems particularly striking given present concerns about the degree to which Americans appear to have not only conflicting beliefs, but to live in alternate realities.
[N.B. I refer specifically to “Americans” not to suggest that something similar isn’t happening elsewhere, but only that I feel that I can speak to the case here in a way that I would not presume to speak about other societies, especially since so many of you are better positioned to do so! And, international readers, please do feel free to fill me in on the situation on the ground as you see it.]
But where does the world of things fit into this picture? Arendt speaks here of the presence of others, yes, but also of the decline of the public realm, which she has already equated with the human-built world that sustains it, or, to put it another way, that acts as the stage upon which the public appears. In other words, she has in view the presence of others within a particular materially objective context.
Arendt argues that to live an “entirely private life means above all to be deprived of things essential to a truly human life.” She expands on this by explaining that it means that one is “deprived of the reality that comes from being seen and heard by others, to be deprived of an ‘objective’ relationship with them that comes from being related to and separated from them through the intermediary of a common world of things.”
Here again is the notion of being gathered and separated by the common world of things with an emphasis on an “objective” relationship with others. Of course, it is not the nature of reality itself that is at issue here. Rather, Arendt has in view our experience of reality, or, to put it another way, the measure of certainty we attain from knowing that we inhabit a shared reality with others. We see and hear and are seen and heard in turn, and somehow the intermediation of the common world of things is essential to this dynamic. This certainly does not at all preclude vigorous and intense disagreement about what is good, right, and just; but it does suggest that it is possible for such debates to unfold meaningfully within shared horizons of the real. And this is what Arendt understands as “common sense,” which she calls “the sixth and highest sense.” Common sense is not just a set of mundane observations that are widely assumed to be true. Rather, it was common in the sense that it was the product of the senses working in tandem on a world held in common with others.
“Only the experience of sharing a common human world with others who look at it from different perspectives,” she wrote, “can enable us to see reality in the round and to develop a shared common sense.”
However, in the modern world, Arendt argued, common sense “became an inner faculty without any world relationship.” “This sense now was called common,” she added, “merely because it happened to be common to all. What men now have in common is not the world but the structure of their minds.” And that is a critical point aptly stated.
Moreover, she observes that “a noticeable decrease in common sense in any given community and a noticeable increase in superstition and gullibility are therefore almost infallible signs of alienation from the world.” Again, she does not mean alienation from the earth, but alienation from a common world of human things that constitutes a public space of appearance within which a common sense can take hold and bind individuals to a commonly shared reality.
This alienation marked by a decrease in common sense is not inconsequential. Not only might it be paired with superstition and gullibility, but with darker and even destructive proclivities. Consider the following analysis from The Origins of Totalitarianism, in which Arendt takes up the question of what we would label escapist literature. She attributes the desire to escape reality, which, in her view, characterizes the masses, to “their essential homelessness,” which I read as more or less synonymous with what she later calls world alienation in The Human Condition and with what Weil termed rootlessness. But Arendt believes that the human need to make sense of things is also a factor. Deprived of its share in a common world that persists across time, a person can no longer bear reality’s “accidental, incomprehensible aspects.” Thus, she argues,
“The masses’ escape from reality is a verdict against the world in which they are forced to live and in which they cannot exist, since coincidence has become its supreme master and human beings need the constant transformation of chaotic and accidental conditions into a man-made pattern of relative consistency.”
I’ve come back, again and again, to the relationship Arendt drew between loneliness and totalitarianism. (See the essay by Samantha Rose Hill linked below.) Arendt made a point of distinguishing between solitude and loneliness, noting that one may be alone without being lonely and that loneliness often occurs in the midst of others.
Interestingly, for our purposes, Arendt connects loneliness to the loss of a common world. “Loneliness arises when thought is divorced from reality,” she observed, “when the common world has been replaced by the tyranny of coercive logical demands.” Quoting Martin Luther she adds, “‘A lonely man always deduces one thing from the other and thinks everything to the worst.’” Without a common world there is no brake on the slide into slavish and despairing ideological consistency.
In other words, without a common and stable world of things to ground our experience with others, without the table around which we might gather, the mind is cut off from a common sense and set loose upon itself in ways that become self-destructive.
Thus, in The Origins of Totalitarianism, she also makes the following argument:
Totalitarian propaganda can outrageously insult common sense only where common sense has lost its validity. Before the alternative of facing the anarchic growth and total arbitrariness of decay or bowing down before the most rigid, fantastically fictitious consistency of an ideology, the masses probably will always choose the latter and be ready to pay for it with individual sacrifices — and this not because they are stupid or wicked, but because in the general disaster this escape grants them a minimum of self-respect.
Now, along with “totalitarian propaganda” let us also include “conspiracy theories” and the relevance of this analysis will be all the more apparent. The loss of a common world and the common (or communal) sense it sustains engenders not only heightened subjectivity but also leaves individuals susceptible to propaganda, conspiracy theorizing, and loneliness.
The Tele-Present Age
I’ve belabored the exposition of Arendt’s argument, so let me draw things to a close by speaking more directly to our present media environment. What especially interests me is the degree to which our digital media environment differs from the older analog order of things, specifically with regard to its role in sustaining a common world and public life. I’m sometimes tempted to speak of this difference as a move from a material order to an immaterial order, but I realize that this is not quite right. After all, digital media is a thoroughly material reality built on tubes, cables, satellites, servers, and rare-earth metals mined at great human cost, none of which are any less material in nature simply because they are ordinarily hidden from public view.
Nevertheless, it is important to account for how digital media reconfigures the material infrastructure of social life such that the dynamics of human experience are also transformed. And a good deal of this transformation involves the scrambling of the relationship between bodily presence and action. What happens, for example, when important segments of our life together no longer emerge within a world of common things we simultaneously occupy? In other words, what are the consequences of a social life increasingly dependent on varieties of tele-presence?
Tele-, as you remember from some long-ago middle school vocabulary lesson, is the Greek root that means “far” or “distant” and suggests “operating at a distance.” Consider three common words: telegraph, telephone, television—writing at a distance, voice at a distance, sight at a distance, respectively. Each of these is a mode of telepresence, and, as the example of the telegraph suggests, telepresence is not uniquely tied to digital media. Digital media, however, has permeated our experience with telepresent activity.
Early debates about the internet were sometimes framed by an opposition of digital activities to “real life.” This was never a very helpful framing, as sociologist Nathan Jurgenson spent a great deal of time explaining several years ago. It seems to me that we would have better spent our time had the question of telepresence framed our discussions. “Is this real?” now seems to me to have been a far less interesting question to ask than “Where am I?”
When we gather, as we so often do now, on a service like Zoom, where are we? Where exactly is the interaction happening? And what difference does it make, say, that there is no here we can easily point to, much less a table? What sort of world is this that now “hosts” so much of our social life, and how might we distinguish it from the world of common things that for Arendt was so important to public life and, as we saw, even to our grasp of a shared reality?
It seems apparent that the digital realm lacks the permanence that Arendt thought was essential to a common world in which individuals could appear and be seen, and also that it has accelerated the liquefaction of modern life. Consequently, it fails to stabilize the self in the manner Arendt attributed to a common world of things. It also seems that Arendt’s fears about the epistemic consequences of the loss of a common world of things were well grounded. By abstracting our interactions into a placeless world of symbolic interchange and generating the conditions of what Jay Bolter has labeled digital plenitude, digital media appears to undermine rather than sustain our capacity to experience a common world, which in turn generates a common sense. Increasingly, then, we come to suspect that we are all occupying altogether different realities.
There are, of course, many more questions to be asked about how digital tools transform human experience, but reckoning with the seeming worldlessness, in Arendt’s sense, of the digital realm and its abstraction of experience from bodily presence may help us better understand some of the challenges we face as we seek to wisely navigate this digital world together.
Of course, in Arendt’s view, mass society, and the realm of the social it generated, already tended in some of these directions.
In a memorable paragraph, Arendt describes the experience of the table under conditions of mass society:
The public realm, as the common world, gathers us together and yet prevents our falling over each other, so to speak. What makes mass society so difficult to bear is not the number of people involved, or at least not primarily, but the fact that the world between them has lost its power to gather them together, to relate and to separate them. The weirdness of this situation resembles a spiritualistic seance where a number of people gathered around a table might suddenly, through some magic trick, see the table vanish from their midst, so that two persons sitting opposite each other were no longer separated but also would be entirely unrelated to each other by anything tangible.
At first glance, this is not a bad way of conceiving the digital realm: the materiality of the table suddenly vanishes, and in our telepresent interactions we begin to fall over each other, as it were, chaotically clashing even as we are ensconced within our respective epistemic bubbles. But unlike the members of mass society, we are at a further (or at least different) remove from one another, confronting not embodied presences but something more akin to subjectivities variously represented by images and avatars.
Finally (yes, really), I’ll note that we cannot replicate the agora, the public space of the ancient Greeks, which so deeply informed Arendt’s view of the public realm. To the degree that we are connected politically with each other at a much different geographic scale than the ancient Greek city-state, to that same degree we cannot replicate the ancient public sphere. In that ancient model, however, the public and the private sustained one another when they were rightly ordered. Mass society, in Arendt’s view, scrambled the private and the public realms, robbing each of its particular virtues. It did so by gradually eroding the local and material context of the public realm.
I wonder if we might not reimagine a new pairing: not the private and the public, but the digital and the local. I’m not exactly sanguine about this possibility, mind you. The digital realm, as it is presently configured, tends toward the erosion of the local, which, I tend to think, is the natural habitat of the human being and thus the proper site of human flourishing. However, it may be possible for digital tools, perhaps if they were designed with a view to conviviality, to also sustain a vibrant local realm, which may nourish the human experience and ground our necessary ventures in the digital public. Perhaps I’m glossing over irreconcilable tensions, but I’ll be coming back to these themes and I’d be happy to hear your thoughts.
News and Resources
* Frank Pasquale on affective computing: “In all too many of its present implementations, affective computing requires us to accept certain functionalist ideas about emotions as true, which leads to depoliticized behaviorism and demotes our conscious processes of emotional experience or reflection. Just as precision manipulation of emotions through drugs would not guarantee “happiness” but only introduce a radically new psychic economy of appetites and aversions, desires and discontents, affective computing’s corporate deployments are less about service to than shaping of persons. Preserving the privacy and autonomy of our emotional lives should take priority over a misguided and manipulative quest for emotion machines.”
* Samantha Rose Hill on Arendt, loneliness, and totalitarianism: “We think from experience, and when we no longer have new experiences in the world to think from, we lose the standards of thought that guide us in thinking about the world. And when one submits to the self-compulsion of ideological thinking, one surrenders one’s inner freedom to think. It is this submission to the force of logical deduction that ‘prepares each individual in his lonely isolation against all others’ for tyranny. Free movement in thinking is replaced by the propulsive, singular current of ideological thought.”
* “What Forest Floor Playgrounds Teach Us About Kids and Germs”: “At the end of four weeks, the kids’ arms were swabbed and their blood was drawn again, and Sinkkonen’s team began analyzing the results. In a study published Wednesday in Science Advances, they found that the children who had been playing in the newly forested spaces had more diverse communities of friendly bacteria living on their skin. Specifically, alphaproteobacteria species seemed to flourish. Not surprising: Previous studies have shown this subgenre to be associated more often with children who grow up on farms than city kids.”
* Double shot of Frank Pasquale this time around. This one is an excerpt from Frank’s new book, New Laws of Robotics: Defending Human Expertise in the Age of AI (which I’m eager to pick up soon), tackling autonomous weapons systems: “[I]t is hard to avoid the conclusion that the idea of ethical robotic killing machines is unrealistic, and all too likely to support dangerous fantasies of pushbutton wars and guiltless slaughters.” Back in 2015, I wrote briefly and in an Arendtian vein on lethal autonomous weapons: Lethal Autonomous Weapons and Thoughtlessness.
* One of the newsletters I enjoy receiving is The Tourist, written by Phil Christman. It’s been especially good of late.
* Cleverly titled essay about chairs designed specifically for gaming from Lewis Gordon in Real Life—“Throne of Games”: “There’s an element of surrender in the way users give up their bodies to games, which is literalized in the design of chairs that cocoon and immobilize them — chairs that aim, as much as possible, to minimize any reminder of the player’s embodiment.”
* NASA’s OSIRIS-REx briefly made contact with the asteroid Bennu (a mere 200 million miles from earth) for five to six seconds, collected a sample, and took off again. It is now on its way back to earth. (Although …) This is, of course, a remarkable achievement of human ingenuity. NASA noted that the probe touched down “within three feet (one meter) of the targeted location.”
This latter note naturally recalled one of the several subtitles Walker Percy facetiously offers for Lost in the Cosmos: The Last Self-Help Book … “How it is possible for the man who designed Voyager 19, which arrived at Titania, a satellite of Uranus, three seconds off schedule and a hundred yards off course after a flight of six years, to be one of the most screwed-up creatures in California—or the Cosmos.”
* Click here for a one-minute animation of medieval (possibly also Roman) bridge building. Lots of these bridges are still standing and in working order. For example:
One also happens to appear in this painting of Frankfurt (1858) by Gustave Courbet:
* As I was writing about how a table gathers and separates us, I stumbled upon this image of the seating arrangement for a dinner at Buckingham Palace on May 19, 1910, for the world leaders who were attending the funeral of the late King Edward VII. More, including a photograph of nine of the gathered kings, on this thread.
And look, frankly I’m not going to pass up the opportunity to pass along to you the opening paragraph of Barbara Tuchman’s The Guns of August: “So gorgeous was the spectacle on the May morning of 1910 when nine kings rode in the funeral of Edward VII of England that the crowd, waiting in hushed and black-clad awe, could not keep back gasps of admiration. In scarlet and blue and green and purple, three by three the sovereigns rode through the palace gates, with plumed helmets, gold braid, crimson sashes, and jeweled orders flashing in the sun. After them came five heirs apparent, forty more imperial or royal highnesses, seven queens - four dowager and three regnant - and a scattering of special ambassadors from uncrowned countries. Together they represented seventy nations in the greatest assemblage of royalty and rank ever gathered in one place and, of its kind, the last. The muffled tongue of Big Ben tolled nine by the clock as the cortege left the palace, but on history's clock it was sunset, and the sun of the old world was setting in a dying blaze of splendor never to be seen again.”
Re-framings
— Pope Francis’s latest encyclical, Fratelli Tutti, touches on the social consequences of digital media:
42. Oddly enough, while closed and intolerant attitudes towards others are on the rise, distances are otherwise shrinking or disappearing to the point that the right to privacy scarcely exists. Everything has become a kind of spectacle to be examined and inspected, and people’s lives are now under constant surveillance. Digital communication wants to bring everything out into the open; people’s lives are combed over, laid bare and bandied about, often anonymously. Respect for others disintegrates, and even as we dismiss, ignore or keep others distant, we can shamelessly peer into every detail of their lives.
43. Digital campaigns of hatred and destruction, for their part, are not – as some would have us believe – a positive form of mutual support, but simply an association of individuals united against a perceived common enemy. “Digital media can also expose people to the risk of addiction, isolation and a gradual loss of contact with concrete reality, blocking the development of authentic interpersonal relationships”. They lack the physical gestures, facial expressions, moments of silence, body language and even the smells, the trembling of hands, the blushes and perspiration that speak to us and are a part of human communication. Digital relationships, which do not demand the slow and gradual cultivation of friendships, stable interaction or the building of a consensus that matures over time, have the appearance of sociability. Yet they do not really build community; instead, they tend to disguise and expand the very individualism that finds expression in xenophobia and in contempt for the vulnerable. Digital connectivity is not enough to build bridges. It is not capable of uniting humanity.
— Niels Bohr to Werner Heisenberg (from Heisenberg’s Physics and Beyond: Encounters and Conversations):
Isn’t it strange how this castle [pictured below] changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the architect put them together. The stones, the green roof with its patina, the wood carvings in the church, constitute the whole castle. None of this should be changed by the fact that Hamlet lived here, and yet it is changed completely. Suddenly the walls and the ramparts speak a quite different language. The courtyard becomes an entire world, a dark corner reminds us of the darkness in the human soul, we hear Hamlet’s ‘To be or not to be.’ Yet all we really know about Hamlet is that his name appears in a thirteenth-century chronicle. No one can prove that he really lived, let alone that he lived here. But everyone knows the questions Shakespeare had him ask, the human depth he was made to reveal, and so he, too, had to be found a place on earth, here in Kronberg. And once we know that, Kronberg becomes quite a different castle for us.
— During my conversation with Gov. Jerry Brown about his friendship with Ivan Illich, Gov. Brown briefly recounted how Illich expressed his preference for the work of philosopher Emmanuel Levinas over that of Martin Buber. A listener passed along an essay by Illich unearthing a history of ocular perception, which concluded with a discussion of Levinas on the human face:
Levinas set out to save "the face." The face of the other stands at the center of his life's work. The face of which he speaks is not my own, which appears reversed in the mirror. Nor is it the face that a psychologist would describe. For Levinas, face is that which my eye touches, what my eye caresses. Perception of the other's face is never merely optical, nor is it silent; it always speaks to me. Central in what I touch and find in the face of the other is my subjectivity: "I" cannot be except as a gift in and from the face of the other.
The Conversation
I’ve used this space in the past to let you know about recent publications. There’s not been too much of that lately, but I will remind more recent subscribers of my last essay in The New Atlantis, “The Analog City and the Digital City,” if for no other reason than to direct you to their recently and beautifully redesigned website. And those of you relatively new to the newsletter may also want to check out a recent collection of my writing here.
Allow me to also pass along a link to my conversation with Henry Zhu, which you can find on his podcast in two parts: “Natural Limits” and, appropriately enough, “The Convivial Society.” You’ll note that Henry does these things right. The page for each of these conversations includes a time-stamped transcript with relevant quotes and links included.
Finally, I’m sure you’ve noted that I’m barely keeping up with the main installments of the newsletter and that Dispatches have been few and far between over the last two months. On second thought, maybe you haven’t noticed at all. Either way, I’ll keep plugging along here as best I can. As always, thanks for reading.
Cheers,
Michael
This summer, as it became evident that the global pandemic was exposing the weaknesses of many of our institutions, it seemed like an auspicious time to revisit the work of Ivan Illich. Of course, if you’ve been following the newsletter for any amount of time at all, you know that I think any time is an auspicious time to be reading Illich.
In any case, I decided to use the newsletter to organize an Illich reading group. As a part of that experiment, I reached out to the philosopher of technology Carl Mitcham to see if he might be willing to talk with me for a bit about Illich and his work. I thought that would be a great way to launch the group. Little did I know that I would soon find myself enveloped by the hospitality of Illich’s friends and collaborators, each person I spoke with offering to connect me with another in the worldwide Illich diaspora.
It did not take me too long to realize that I had felicitously stumbled onto a rare opportunity to gather an oral history of Ivan Illich’s life as told by his friends. Thus far I’ve posted my conversations with Carl Mitcham and the scholar and activist Gustavo Esteva, who has brought Illich’s work to bear in his labors with the indigenous people of Oaxaca, Mexico.
I have a few other conversations in the works, which I’ll be posting in the coming weeks and months. But today I’m happy to pass along my conversation with Gov. Jerry Brown.
Gov. Brown, best known, of course, for his four terms (1975-1983 and 2011-2019) as governor of California and his bid for the Democratic presidential nomination in 1992, was a friend of Illich’s and a careful student of his work as well.
Brown also served as the mayor of Oakland from 1999 to 2007 and as the Attorney General of California from 2007 to 2011. Currently, he is a visiting professor at Berkeley and chair of the California-China Climate Institute.
I first learned of Gov. Brown’s friendship with Illich a couple of years back when I stumbled upon the transcript of a talk-radio show, We the People, hosted by Brown on which Illich and Carl Mitcham were guests. It’s a wonderful conversation and I encourage you to read through it. The homepage of We the People’s website, which is still live, is a tribute to Illich written by Gov. Brown after Illich’s passing in December of 2002. Brown also wrote a letter to the New York Times taking issue, justly in my view, with the rather dismissive obit the paper ran for Illich.
My thanks to Gov. Brown for taking the time to share his recollections of Illich’s life and work and to Evan Westrup for his help in making the conversation happen.
I hope you enjoy listening to the exchange. You can look forward to others like it over the next few months.
Cheers,
Michael
“The substance of the good life must be taken into consideration if radical political reform is to become a live option.”— Albert Borgmann, Technology and the Character of Contemporary Life (1984)
In 1943, Simone Weil, the French philosopher and activist who was living in England at the time, was tasked by the Free French government with writing a report exploring how French society might be revitalized after its liberation from Nazi Germany. Despite suffering from debilitating headaches and generally poor health, Weil completed her work during a remarkable burst of activity. She died later that year at the age of 34. The report was published in 1949. The first English translation appeared in 1952 as The Need for Roots: Prelude to a Declaration of Duties Towards Mankind.
I was immediately struck by how Weil began her report. In the midst of a global cataclysm of unprecedented scope and scale, tasked with drawing up plans for the renewal of society, she begins by arguing for the primacy of human obligations rather than human rights. The very first sentence reads: “The notion of obligations comes before that of rights, which is subordinate and relative to the former.” Quite the claim coming from a French thinker, as she is well aware. As Weil sees it, rights are ineffective so long as no one recognizes a corresponding obligation, and obligations are always grounded in our common humanity. “Duty toward the human being as such—that alone is eternal,” she writes.
Our obligations toward our fellow human beings, Weil goes on to argue, “correspond to the list of such human needs as are vital, analogous to hunger.” Some of these needs are physical, of course—housing, clothing, security, etc.—but Weil identified another set of needs, which she described as having to do not with the “physical side” of life but with what she calls its “moral side.” The non-physical needs “form … a necessary condition of our life on this earth.” In her view, if these needs are not satisfied, “we fall little by little into a state more or less resembling death.” And while she acknowledges that these needs are “much more difficult to recognize and to enumerate than are the needs of the body,” she believes “every one recognizes they exist.”
I’m inclined to believe that Weil is right about this. As she suggests, “everyone knows that there are forms of cruelty which can injure a man’s life without injuring his body.” Weil goes on to call for an investigation into what these vital needs might be. They should be enumerated and defined, and she warns that “they must never be confused with desires, whims, fancies and vices.” Finally, she believes that “the lack of any such investigation forces governments, even when their intentions are honest, to act sporadically and at random.”
Naturally, the rest of the work is an attempt to provide just such an enumeration and discussion of these vital needs with the express purpose of supplying a foundation for the rebuilding of French society. She deals briefly with a set of fourteen such needs before turning to a longer discussion of “rootedness” and “uprootedness,” which opens with this well-known claim: “To be rooted is perhaps the most important and least recognized need of the human soul.”
I like to pair this claim with Hannah Arendt’s discussion of loneliness, alienation, and superfluousness, which, in The Origins of Totalitarianism, she identifies as ideal conditions for the emergence of totalitarian regimes. “Under the most diverse conditions and disparate circumstances,” Arendt wrote, “we watch the development of the same phenomena—homelessness on an unprecedented scale, rootlessness to an unprecedented depth.”
Combining Weil and Arendt, then, we might say that to the degree that the need for rootedness—which is to say, a sense of belonging in relatively stable communities—goes unfulfilled, to that same degree human beings become vulnerable to destructive political regimes.
My aim here, however, is not to discuss the merits of Weil’s particular enumeration of these vital needs. Rather, it is simply to recommend that we, too, undertake a similar radical analysis, recalling, of course, that our word radical comes to us from radix, the Latin word for roots. While our circumstances in 2020 are certainly not Weil’s in 1943, it does appear to me that we are, nonetheless, in a time of cascading crises and that our most urgent need is to figure out not how to shore up the old order but how to start something anew—perhaps a renewed humanism premised not upon human exceptionalism and self-sufficiency but rather upon human needs, interdependence, and mutual obligations.
Manufactured Neediness
You’ll not be surprised to learn that this talk about needs immediately brings to mind the work of Ivan Illich, who devoted a considerable amount of his intellectual labors to the task of exploring the sources of what we might think of as, from his perspective, our manufactured neediness. It is not, of course, that Illich denied that human beings have needs. It was that from his point of view many of the needs we think we have are, in fact, deliberately cultivated in us by a techno-economic institutional order that excels at nothing so much as the generation of dependent consumers. So, for example, we may very well have a need to learn, but why exactly has that need been transmuted into the need for schooling?
In the opening of Deschooling Society, Illich claims that the “hidden curriculum” of schooling is dependency on the institution of the school. “The pupil,” Illich writes, “is thereby ‘schooled’ to confuse teaching with learning, grade advancement with education, a diploma with competence, and fluency with the ability to say something new.” The student’s imagination, Illich continued, “is ‘schooled’ to accept service in place of value. Medical treatment is mistaken for health care, social work for the improvement of community life, police protection for safety, military poise for national security, the rat race for productive work.”
Illich then explains how he will “show that the institutionalization of values leads inevitably to physical pollution, social polarization, and psychological impotence: three dimensions in a process of global degradation and modernized misery.”
Interestingly, for our purposes, Illich goes on to write about how this process of degradation is “accelerated when nonmaterial needs are transformed into demands for commodities; when health, education, personal mobility, welfare, or psychological healing are defined as the result of services or ‘treatments.’”
The line to tuck away, along with Weil’s observations, is the one about nonmaterial needs being transformed into demands for commodities. If Weil is right about the vital importance of what she calls moral needs or the needs of the soul, then what Illich identifies is, of course, a pernicious and perverse hijacking of these needs. Pernicious because of the transmutation of vital non-physical needs into the need for commodities. Perverse because the nature of the commodification is such that these vital needs are never satisfied. Indeed, having been institutionalized along the lines Illich identifies, they must be forever perpetuated so as to justify the ongoing existence of the institution in question.
Consider for a moment a more concrete and contemporary example. Why does anyone need a Ring camera? Or, better, whose interests are best served by a Ring camera? The most obvious answer is Amazon. If there is a problem that Ring is supposed to solve, it is the problem of packages being stolen from people’s front porches, a problem that arises when our consumption is increasingly funneled through Amazon. But, of course, Ring presents itself as more than just the surveillance arm of a multibillion-dollar corporation deployed to your front door. It hijacks the human need for security or safety and transmutes it into a need for Ring. It is chiefly the needs of Amazon that are being met, particularly given the way that Ring allows Amazon to also profit from partnerships with police departments. And as Illich would have readily predicted, this dependence on a corporate product comes at the additional cost of alienating neighbors, eroding social trust, and replacing mutual interdependence with a state of perpetual suspicion.
By contrast, in Tools for Conviviality, Illich wrote “that society must be reconstructed to enlarge the contribution of autonomous individuals and primary groups to the total effectiveness of a new system of production designed to satisfy the human needs which it also determines.” In other words, individuals and groups ought to be able to determine their needs rather than have their needs determined or manufactured for them. But, as Illich went on to argue, “the institutions of industrial society do just the opposite. As the power of machines increases, the role of persons more and more decreases to that of mere consumers.” Nowhere is this reduction of the person to the status of mere consumer more evident than in the ruthless efficiency of Amazon’s near total enclosure of our lives within a network of self-perpetuating and automated consumption, one within which we come to increasingly function as a mere node rather than the autonomous consumer we imagine ourselves to be.
But Illich saw in our dependence on institutions that dictate to us the nature of our neediness more than just a failure of personal autonomy and self-realization. The question of justice was also at stake.
“At present,” Illich observed [emphasis mine],
“people tend to relinquish the task of envisaging the future to a professional élite. They transfer power to politicians who promise to build up the machinery to deliver this future. They accept a growing range of power levels in society when inequality is needed to maintain high outputs. Political institutions themselves become draft mechanisms to press people into complicity with output goals. What is right comes to be subordinated to what is good for institutions. Justice is debased to mean the equal distribution of institutional wares.”
Illich is here suggesting the existence of a counterfeit form of justice, one which we might gloss as a matter of becoming equally dependent on institutions and their commodities. Perhaps it will seem like a stretch, but the contemporary example that leaps to my mind is the belief in some quarters that the problem with facial recognition technology is simply that it seems, in its present iteration, to be especially biased against people of color, as if the tool would be just and good as soon as it was calibrated so that people of color were equally legible to its gaze. In other words, equal access to fundamentally degrading institutions and their products is not justice.
Elsewhere in Tools for Conviviality, Illich wrote about three distinct values: survival, justice, and self-defined work. These were, in his view, “fundamental to any convivial society however different one such society might be from another in practice, institutions, or rationale.”
As he went on to explain,
“The conditions for survival are necessary but not sufficient to ensure justice; people can survive in prison. The conditions for the just distribution of industrial outputs are necessary, but not sufficient to promote convivial production. People can be equally enslaved by their tools … A postindustrial society must and can be so constructed that no one person’s ability to express him- or herself in work will require as a condition the enforced labor or the enforced learning or the enforced consumption of another.”
There’s a three-tiered framework here that will have a Janus function at this juncture in the essay. Illich argues that what he calls a convivial society—which we can think of simply as a distinctly Illichian way of speaking about a good society—involves not only equal access to commodities, however broadly we conceive of them, but something more. This “something more,” as we see in the paragraph just quoted, Illich ties very closely to work, work that is free, creative, and meaningful. In this regard, Illich recalls Simone Weil, who, though approaching the matter from her own deeply religious perspective, believed that “all the problems of technology and economy should be formulated functionally by conceiving of the best possible condition for the worker.”
It would be worth exploring how Weil and Illich each conceive of work as a condition of human flourishing (that work may already have been done; if so, I’m presently unaware of it), but it is enough for my purposes here to note that they both understand that a good society would furnish its citizens with more than just a steady stream of endless diversions.
The Good Society
But Illich’s three-tiered schema not only recalls Weil in its high regard for meaningful work, it also recalls another threefold schema offered by the philosopher Albert Borgmann in his 1984 work, Technology and the Character of Contemporary Life, a significant and still highly relevant book that doesn’t get the attention it deserves.
In his discussion of technology and democracy, Borgmann also puts forth a three-tiered “vision of society”: the constitutional or formally just society, the fair or substantively just society, and the good society. In the formally just society, all citizens are assured of equal liberties by the constitution and the legal code. But, as Borgmann notes, “formal justice is compatible with inequality.” “I may have the right to do nearly everything,” he adds, “and yet the economic and cultural means to do next to nothing.” Thus the need for what Borgmann calls substantive justice that accounts for “economic arrangements and legislation” as well as “civil rights and liberties.”
Yet, as Borgmann puts it, a substantively just society can still yield a life that is “indolent, shallow, and distracted.” In other words, a substantively just society may still fail to be a good society, one which addresses the full range of human needs. Borgmann believes that a substantively just society “remains incomplete and is easily dispirited without a fairly explicit and definite vision of the good life.”
Further on, Borgmann puts the distinctions this way (emphasis his): “A constitutional society furnishes formal or vacuous equality of opportunity. A just society secures fair or substantive equality of opportunity. Whether we have a good society depends on the kind of opportunities that the society provides for its citizens.”
Perhaps another more contemporary example can help clarify Borgmann’s distinctions as I understand them. We can imagine a society, without a great deal of effort, in which the elderly routinely find themselves isolated, lonely, and lacking a sense of purpose—in a word, uprooted in Weil’s sense. This society has developed robots and digital devices to care for the elderly and to keep them company. In a formally just society, all elderly citizens have the right to procure these consumer goods. In a substantively just society, all elderly citizens can afford to procure these goods or else they are supplied by the state.
I trust, however, that you might agree with me in recognizing neither of these societies as good societies. Better, we might grant, that the elderly have a robot to keep them company or modern tools of communication to slake their loneliness given no other alternatives, but much better still that they be rooted, that they be an integrated part of a multi-generational family or community in which they also supply, in their turn and as they are able, the needs of their children and grandchildren, retaining as a result their dignity, purpose, and joy.
Borgmann goes on to argue that “liberal democracy is enacted as technology.” By this he means that, contrary to its avowed neutrality toward the nature of the good life, liberal democracy “does not leave the question of the good life open but answers it along technological lines.” Furthermore, Borgmann claims:
“the theory of liberal democracy both needs and fears modern technology. It needs technology because the latter promises to furnish the neutral opportunities necessary to establish a just society and to leave the question of the good life open. It fears technology because technology may in fact deliver more than it had promised, namely, a definite version of the good society and, more important yet, one which is ‘good’ in a dubious sense.”
In other words, Borgmann is arguing that the professed neutrality toward the good life that has traditionally ordered liberal democracies has, in fact, acted as a cover under which the advance of modern technology has smuggled in a distinct vision of the good life and one which may not be conducive either to democracy or to human flourishing.
As Borgmann saw it in the mid-80s, while they differed as to how the fruits of economic growth should be distributed, both major American political factions “understand such growth as an increase in productivity which yields more consumer goods.” Echoing arguments we’ve already encountered, Borgmann went on to argue that “improved productivity … entails a degradation of work, and greater consumption leads to more distraction. Thus in an advanced industrial country, a policy of economic growth promotes mindless labor and mindless leisure.”
I’ve assembled the work of these three writers because it seems to me that they are all circling around a similar set of concerns about human needs, work, technology, justice, and the good life. Their reflections make clear that these are interlocking realities, which must be considered together. They direct our attention to a more fundamental level of analysis, which we do well to take up. And they all saw the dangers of ordering society around technologically automated production and consumption and of uprooting human beings to enhance both.
I’ve argued before in this newsletter and elsewhere that one of the salient features of digital culture is the rapid collapse of the ideals of neutrality and disinterested objectivity that have been central to the legitimacy of modern liberal institutions. While this collapse will continue to be attended by varying degrees of turmoil and conflict, it may also provide us with an opportunity to examine more carefully some of the assumptions that have informed the way we think about the nature of a good life. And I would suggest that we do well to start, as Simone Weil did, with a consideration of the full range of human needs, clarified by Ivan Illich’s searching critique of the needs engendered in us by industrial (and now digital) institutions, and oriented toward a more robust vision of a good society as Albert Borgmann urged us to imagine.
News and Resources
* Scott Remer (2018), “A Radical Cure: Hannah Arendt & Simone Weil on the Need for Roots” (page numbers are for The Origins of Totalitarianism): “Arendt argued that people who feel themselves to be rootless or homeless will seek a home at any price, with possibly horrific results. For this reason, the ‘competitive structure and concomitant loneliness of the individual’ (p.317) in capitalist mass society can pave the way for authoritarianism and totalitarianism. Indeed, the atomized and individualized mass is a necessary precondition for totalitarianism (p.318). Languishing in a ‘situation of spiritual and social homelessness’ (p.352), shorn of sustaining social bonds and ties, individuals are forced to live in a world where they cannot exist meaningfully and fruitfully. They try to escape this agonizing limbo and, in the absence of powerful inclusive left-wing alternatives, they look to exclusivist reactionary movements for succor. In this way, tribalism and racism are the bitter fruit of territorial rootlessness. They are wrongheaded attempts to secure roots. But rather than securing roots for the rootless masses, they simply create ‘metaphysical rootlessness’. Totalitarian and proto-totalitarian movements represent what Arendt calls a ‘fictitious home’ for people to ‘escape from disintegration and disorientation.’”
* For The New Atlantis, David Guaspari reviewed The Weil Conjectures: On Math and the Pursuit of the Unknown, Karen Olsson’s 2019 book about André and Simone Weil: “The appeal of her thought can, I hope, be seen from a telegraphic précis of one strand pursued throughout her life: Any division between intellectual and physical labor is pernicious. Any technology, or ideology, or social organization that imposes this division creates ‘two categories of men — those who command and those who obey,’ she wrote in Oppression and Liberty (1955). The commanders, ‘intoxicated,’ to borrow from her famous essay on the Iliad, lose sight of their own vulnerabilities. The commanded — laborers — become means, not ends.” Bonus: Sylvie Weil remembers her father, André Weil, and her aunt Simone, whom she never met.
* Zachary Loeb reviews Matt Tierney’s Dismantlings: Words Against Machines in the American Long Seventies: “That the Luddites have lingered so fiercely in the public imagination is a testament to the fact that the Luddites, and the actions for which they are remembered, are good to think with. Insofar as one can talk about Luddism it represents less a coherent body of thought created by the Luddites themselves, and more the attempt by later scholars, critics, artists, and activists to try to make sense of what is usable from the Luddite legacy.”
* Ryan Calo and Danielle Citron on “The Automated Administrative State: A Crisis of Legitimacy.” From the abstract: “Scholarship to date has explored the pitfalls of automation with a particular frame, asking how we might ensure that automation honors existing legal commitments such as due process. Missing from the conversation are broader, structural critiques of the legitimacy of agencies that automate. Automation throws away the expertise and nimbleness that justify the administrative state, undermining the very case for the existence and authority of agencies. Yet the answer is not to deny agencies access to technology. This article points toward a positive vision of the administrative state that adopts tools only when they enhance, rather than undermine, the underpinnings of agency legitimacy.”
* On the value of reading aloud, which, after all, was the most common way to read for longer than many of us realize. It’s not the best article that could’ve been written on the subject; nonetheless, the point is worth recalling.
* The opening of Rob Horning’s latest newsletter: “The phone seems to demand the feed as a form — the endless flow of content, both as something we make and consume. The feed capitalizes on the personalized screen interface, the networkedness of the device, its portability and its immediacy, and resolves it all into a coherent experience that encapsulates the pleasures the phone can afford. The feed defines the sort of subjectivity that's sustainable through the kinds of intermediation that phones allow for. Whether we are consuming or creating it, the feed offers a coherent structure for the self, with a built-in, always implied audience. The stream of personalized content corresponds with the selfhood we can construct by posting; together these come to structure the nature of our self-awareness. We can understand ourselves in terms of what we consume in a feed and what we can post to it.”
* Need a reading list? In a recent essay recalling the life and work of Joseph Brodsky, who would’ve been 80 this year, Peter Filkins recalls how, as a student, he invited Brodsky to have a beer with him after an especially long seminar. Brodsky agreed and, upon sitting down, all Filkins could think to say was “What should I read?”: “Nodding his head in approval, he asked for pen and paper, and there in a little notebook I carried around he scribbled down a list: Edwin Arlington Robinson, Weldon Kees (underlined), Ovid, Horace, Virgil, Catullus, minor Alexandrian poets, Paul Celan, Peter Huchel, Georg Trakl (underlined), Antonio Machado, Umberto Saba, Eugenio Montale (underlined), Andrew Marvell, Ivor Gurney, Patrick Kavanagh, Douglas Dunne, Zbigniew Herbert (underlined), Vasco Popa, Vladimir Holan, Ingeborg Bachmann, “Gilgamesh,” Randall Jarrell, Vachel Lindsay, Theodore Roethke, Edgar Lee Masters, Howard Nemerov, Max Jacob, Thomas Trahern, and then a short list of essayists: Hannah Arendt, William Hazlitt, George Orwell, Elias Canetti (underlined and Crowds and Power added), E. M. Cioran (Temptation to Exist added), and finally the poet Les Murray added in my own hand after he suggested it. The sheet of paper, roughly the size of an iPhone but, as Brodsky would say, with far more and far greater information stored within it, remains tucked away inside my copy of A Part of Speech to this day.”
* Another title for your list: Alan Jacobs’s recently published Breaking Bread With the Dead: A Reader’s Guide to a More Tranquil Mind. Jacobs’s writing has long been for me both a source of insight and wisdom as well as a model of intellectual virtue. I could say more, but instead I will offer you another commendation of Jacobs’s work from Robin Sloan: “There is a general lament about our (‘our’) inability to converse across political and moral differences—across conflicting cosmologies, even. But these conversations are totally possible. In fact, they’re not particularly difficult. All they require is unshakeable integrity and deep trust.
I have that kind of trust in the writer Alan Jacobs. I’ve been reading him for years, so this isn’t a snapshot impression; it’s built from a hundred examples, some of them vanishingly subtle, but all totally consistent. Again and again, I have seen him reject easy tribalisms, political and religious and aesthetic; resist the inviting flow of the moment; decline to dunk on his opponents. Again and again, in venues as official as books and magazines and as personal as blog posts and newsletters, he has written and argued with generosity and creativity and care.”
Re-framings
— From the Chinese novelist Zhang Ailing, writing in 1945 from amidst “a city devastated by Japanese aggression.” This passage is taken from Carl Mitcham’s discussion of Chinese technology in “Teaching with and Thinking After Illich on Tools”:
In this era, the old things are being swept away and the new things are still being born. But until this historical era reaches its culmination, all certainty will remain an exception. People sense that everything about their everyday life is a little out of order, out of order to a terrifying degree. All of us must live within a certain historical era, but this era sinks away from us like a shadow, and we feel we have been abandoned. In order to confirm our own existence, we need to take hold of something real, of something fundamental, and to that end we seek the help of an ancient memory, a memory of humanity that has lived through every era, a memory clearer and closer to our hearts than anything we might see gazing far into the future. And this gives rise to a strange apprehension about the reality surrounding us. We begin to suspect that this is an absurd and antiquated world, dark and bright at the same time. Between memory and reality there are awkward discrepancies, producing a solemn but subtle agitation, an intense but as yet indefinable struggle.
The Conversation
As you may have noted, this summer I hosted a virtual reading group on the paid subscriber side, which took up three of Ivan Illich’s books. On a whim I reached out to the philosopher Carl Mitcham to see if he might be willing to record an interview with me about Illich, with whom he had been good friends. Carl graciously agreed. Little did I know that this would open the door to my becoming acquainted with others who had been part of Illich’s circle of friends and collaborators, all of whom have been equally gracious and supportive. As it turns out, I’ve realized that I fortuitously stumbled upon a project: an oral history of Illich and his friendships. My interview with Carl was followed by one with Gustavo Esteva, and a few more are now in development. I’ll continue to make these available on the paid side, but I will also be thinking about what final and also public form these might take.
Within the next 48 hours, the temperature will dip into the mid-50s here in north Florida. I’m not sure that I can sufficiently convey the pleasure that I take from these first cool days of the year. They are a delight, especially when they arrive this early in the year.
In the midst of everything, I hope you have similar small but not insignificant pleasures attending your days.
Cheers,
Michael
It was my pleasure back in June to have a conversation with Carl Mitcham about the life and work of Ivan Illich. A couple of weeks ago, I had a similar pleasure in speaking with Gustavo Esteva, an activist and scholar who, despite having earlier rejected Illich as a “reactionary priest,” went on to become Illich’s close friend and collaborator in the early 1980s. Gustavo is also the founder of the Universidad de la Tierra in the state of Oaxaca, Mexico.
My thanks to Dana Stuchul and Madhu Suri Prakash for their encouragement and for introducing me to Gustavo. I’ve been deeply appreciative of the warm support I’ve received from those who knew Illich. I’ve experienced it as a genuine extension of the hospitality that was so central to Illich’s practice.
On a separate but related note, those of you who indicated an interest in a Zoom discussion of In the Vineyard of the Text will be seeing a note from me soon beginning the task of coordinating a time that works for everyone. Also, it’s not too late to jump in if you haven’t contacted me yet. Please feel free to do so in the next day or two.
“People can change, but only within bounds. In contrast, the present industrial system is dynamically unstable. It is organized for indefinite expansion and the concurrent unlimited creation of new needs, which in an industrial environment soon become basic necessities … Such growth makes the incongruous demand that man seek his satisfaction by submitting to the logic of his tools. The demands made by tools on people become increasingly costly … Increasing manipulation of man becomes necessary to overcome the resistance of his vital equilibrium to the dynamic of growing industries; it takes the form of educational, medical, and administrative therapies. Education turns out competitive consumers; medicine keeps them alive in the engineered environment they have come to require; bureaucracy reflects the necessity of exercising social control over people to do meaningless work. The parallel increase in the cost of the defense of new levels of privilege through military, police, and insurance measures reflects the fact that in a consumer society there are inevitably two kinds of slaves: the prisoners of addiction and the prisoners of envy.”— Ivan Illich, Tools for Conviviality (1973)
Welcome to another installment of the Convivial Society. This time around some brief reflections on human beings entangled in technological systems.
You may have seen or heard about Elon Musk’s recent Neuralink demonstration involving some pigs, who weren’t altogether cooperative. Neuralink is the four-year-old Musk company working on a computer-brain interface. Musk claims that Neuralink technology will one day cure, among other ailments, blindness, paralysis, and mental illness. Additionally, Musk believes it will dramatically empower human beings by augmenting our mental and even physical capacities. In anticipation of the recent Neuralink event, his Twitter feed alluded to Matrix-like wonders to come. Musk also tends to tout this project as a safeguard against the future threat of super-intelligent AI. It’s how we’ll keep pace with the machines. “With a high-bandwidth brain-machine interface,” Musk explained, “I think we can go along for the ride and effectively have the option of merging with AI.”
Needless to say, there is every reason to be skeptical about every one of those claims. As Antonio Regalado put it in his discussion of the event, it amounted to little more than neuroscience theater. That said, the hopes Musk has expressed are illustrative of a dynamic that has already been playing out in more prosaic everyday contexts. The nature of this dynamic becomes apparent when we ask a hypothetical question concerning computer-brain interfaces: Who is being plugged in to what? Or, to put it another way, who is the dominant partner, the computer or the brain? Are we plugging into a system that will serve our ends, or are we being better fitted to serve the interests of the technological system? I suspect that Musk would say the question misses the point: there is no dominant partner. Rather, the relationship would be, as he put it, symbiotic. Clearly, he talks about it as if it will prove to be an enhancement of the human condition and one that will help us survive the threat of AI-induced obsolescence.
But we could just as easily imagine that the human interest will be superseded by the imperatives of the machine, that the person will be bent to the service and logic of the machine. There is ample precedent. When a system becomes sufficiently complex, the human element more often than not becomes a problem to be solved. The solution is to either remove the human element or otherwise re-train the person to conform and recalibrate their behavior to the specifications of the machine. Alternatively, society develops a variety of therapies to sustain the person who must now live within a techno-economic milieu that is hostile to human flourishing.
In other words, the end being served is not human flourishing, it is the functioning of the technological system. Musk’s rationale for Neuralink is just an overhyped case in point of this logic: the answer to the problem posed by technological systems that have grown dangerous is not to reconsider the advisability of building such systems, rather it is to further technologize the human being so as to assure survival in a technological milieu that has grown fundamentally hostile to human well-being. It clearly recalls the instinct to solve a crisis by escalation, which Ivan Illich identified in Tools for Conviviality.
In the same work, Illich wrote,
There are two ranges in the growth of tools: the range within which machines are used to extend human capability and the range in which they are used to contract, eliminate, or replace human functions. In the first, man as an individual can exercise authority on his own behalf and therefore assume responsibility. In the second, the machine takes over—first reducing the range of choice and motivation in both the operator and the client, and second imposing its own logic and demand on both.
Insidiously, these developments are typically packaged as either matters of convenience or liberation. But the promises never materialize, in part, because they veil a greater entanglement in systems and institutions that are ultimately designed to serve their own ends. In part, also, because it is never clear what exactly we are being liberated for other than further consumption of the products/services/goods offered to us by the techno-economic systems. It was a dynamic eloquently described by Lewis Mumford when, in the mid-twentieth century, he asked “Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics?” Here is his answer:
The bargain we are being asked to ratify takes the form of a magnificent bribe. Under the democratic-authoritarian social contract, each member of the community may claim every material advantage, every intellectual and emotional stimulus he may desire, in quantities hardly available hitherto even for a restricted minority: food, housing, swift transportation, instantaneous communication, medical care, entertainment, education. But on one condition: that one must not merely ask for nothing that the system does not provide, but likewise agree to take everything offered, duly processed and fabricated, homogenized and equalized, in the precise quantities that the system, rather than the person, requires. Once one opts for the system no further choice remains. In a word, if one surrenders one’s life at source, authoritarian technics will give back as much of it as can be mechanically graded, quantitatively multiplied, collectively manipulated and magnified.
The temptation, in other words, has been to assume that the goods we are offered by the current techno-social regime are the goods that we, in fact, need to thrive as the sort of creatures we are. This is why any serious consideration of the questions raised by technology must eventually become a consideration of what it means to be human and what shape a just society should take. These are, of course, political questions of the first order, or at least they used to be until they were superseded by the imperatives of economic growth.
A few years ago, I suggested that there was a tradition of humanist technology criticism worth engaging. I contemplated at one point drawing up a proposal for something like a humanist tech criticism reader. Maybe someday something will come of that. The general idea that holds this tradition of critics together is the conviction that some account of what people are for and of the conditions under which they flourish should inform our evaluation of technology. I recognize now as I did then that this can be contested and contentious territory, but I fear that unless we figure out how to at least raise these questions we will proceed down a path toward de facto post-humanism.
At the end of my reflections a few years back, I suggested that a humanist critique of technology entails a preference for technology that (1) operates at a human scale, (2) works toward human ends, (3) allows for the fullest possible flourishing of a person’s capabilities, (4) does not obfuscate moral responsibility, and (5) acknowledges and respects certain limits inherent to the human condition.
I leave you with those observations today. I trust that they can at least be the point of departure for productive conversations.
News and Resources
* Are we already living in a tech dystopia? A few scholars respond to the question, including Jonathan Zittrain and David Golumbia. Repurposing Gibson’s well-known quip, I suggested some time ago that the dystopia is already here — it’s just not very evenly distributed.
* A short but suggestive post from Drew Austin revisiting McLuhan’s 1967 claim, “the city no longer exists except as a cultural ghost for tourists,” in light of the pandemic: “… the city might be a ‘cultural ghost for tourists’ to a public intellectual like McLuhan even as it remains a very real and immediate place for many others—those who are still ‘from somewhere.’ To Martínez, the ‘Zoom class’ is currently accelerating headlong into the dubious future of entirely rootless, technology-enabled nomadism and increasingly separating themselves from the people who must remain physically and culturally rooted in a specific somewhere.”
* A review of recent Big Tech patents. Two of note from Amazon: using augmented reality to put ads on your body and drones for home surveillance.
* Consider Hapbee, a wearable device that promises to “replicate different feelings by playing safe, low energy magnetic signals.” The feelings it purports to stimulate are “Happy, Alert, Pick Me Up, Relaxed, Calm and Sleepy.” Hapbee is paired with an app that allows you to “play your feelings any time, any where.” Spotify, but for your emotional life. That’s not a bad line, come to think of it. It is true, of course, that we do use Spotify (and much else besides) to tune our emotional life. Supposing, for argument’s sake, that Hapbee delivers on its promise, is it really all that different from a mood-enhancing playlist or a cup of coffee in the morning? At issue is whether or not means to an end are indifferent and interchangeable. The presumption that they are lies deep at the heart of technological culture. But the presumption, at least in some cases, especially cases of moral significance, is wrong. There’s the old distinction between goods that are internal to a practice and those that are external to it. The latter we might think of as mercenary in nature. If I pursue a friendship for the sake of mutual affection and companionship, I’m pursuing the friendship for the sake of a good that is internal to the practice of friendship. If I pursue it for the sake of the social capital that the friendship will bring me, I am pursuing it for goods that are external to the practice. And, of course, pursued in this way, the true good is never attained. With regard to less morally fraught ends than happiness, alertness say, a similar dynamic applies. What range of practices, what form of life, is conducive to a state of alertness or calm? Am I conducting myself, by choice or coercion, in such a way that I cannot live in a manner that may allow for calm or alertness? Does the device merely allow me to continue functioning in a self-destructive manner, then? Not unlike how caffeine may allow me to remain marginally productive while getting inadequate sleep so that my long-term health is jeopardized? Thus concludes this brief and incomplete exercise in moral reasoning.
* From a 2019 post by Alan Jacobs that I just recently stumbled on again: “Facebook is the Sauron of the online world, Twitter the Saruman. Let’s rather live in Tom Bombadil’s world, where we can be eccentric, peculiar perhaps, without ambition, content to tend our little corner of Middle Earth with charity and grace. We’ve moved a long way from Tim Carmody’s planetary metaphor, which, as I say, I feel the force of, but whether what I’m doing ultimately matters or not, I’m finding it helpful to work away in this little highland garden, above the turmoil of the social-media sea, finding small beautiful things and caring for them and sharing them with a few friends. One could do worse.”
* Shannon Vallor on what the pandemic has revealed: “The lesson of COVID-19 is that scientific and technical expertise stripped away from humane wisdom—social, moral and political knowledge of what matters, what we value, what needs preserving, repairing and caring for together—is a mere illusion of security. It’s an expensive life raft lacking rations, a compass, a map or a paddle. It cannot carry us safely into the futures we all need to build, for our nations, humankind and for coming generations of life on this planet.”
* From “A History of Early Public Health Infographics,” an excerpt from Murray Dick’s recently published book, The Infographic: A History of Data Graphics in News and Communications: “From 1820 to 1830, an enthusiasm for statistics began to emerge across the western world, leading to an era of statistics concerned with reform. It was led by individuals who sought to disrupt what they saw as the chaos of politics and replace it with a new apolitical regime of empirical, observed fact. This new approach would come to be seen as a field of action, as an applied science, providing empirical weight to the new, intellectually dominant spirit of political economy.”
* Sarah Hendren on the tyranny of chairs, adapted from What Can a Body Do? How We Meet the Built World: “For most of human history, a mix of postures was the norm for a body meeting the world. Squatting has been as natural a posture as sitting for daily tasks, and lying down was a conventional pose for eating in some ancient cultures. So why has sitting in chairs persisted in so many modern cultures?”
Re-framings
— “The Bright Field” by the Anglo-Welsh poet R. S. Thomas, who died in 2000. I encountered the poem in Jeffrey Bilbro’s wonderful essay on Thomas, “Turn Aside: The Poetic Vision of R. S. Thomas”:
I have seen the sun break through
to illuminate a small field
for a while, and gone my way
and forgotten it. But that was the pearl
of great price, the one field that had
the treasure in it. I realize now
that I must give all that I have
to possess it. Life is not hurrying

on to a receding future, nor hankering after
an imagined past. It is the turning
aside like Moses to the miracle
of the lit bush, to a brightness
that seemed as transitory as your youth
once, but is the eternity that awaits you.
As I was reading Jeff’s essay and preparing to type these lines, I heard the door of my daughters’ room open. It’s late. They should be sleeping. It was not a welcome sound. I grumbled to myself about getting this newsletter done. Then my oldest stumbled bleary-eyed into my room, crawled on top of me, and laid her head on my chest, so naturally I, altogether chastised, turned aside to take in the light of that moment.

— From Ursula K. Le Guin’s Always Coming Home, quoted by Alan Jacobs in “Handmind in Covidtide,” which is worth your time:
He thought of very little besides clay, and shaping, and glazing, and firing. It was a good thing for me to learn a craft with a true maker. It may have been the best thing I have done. Nothing we do is better than the work of handmind. When mind uses itself without the hands it runs the circle and may go too fast; even speech using the voice only may go too fast. The hand that shapes the mind into clay or written word slows thought to the gait of things and lets it be subject to accident and time.
The Conversation
Alright, so I didn’t quite get this to you in August. Nonetheless, consider this the second of last month’s installments. Two more to come in September. Is it really September? I’m glad for it. It’s the time of year that I start longing for the first cool breeze to signal the end of the Florida summer. It won’t come for a while yet, but I’m eagerly waiting.
Cheers,
Michael
“There are two ranges in the growth of tools: the range within which machines are used to extend human capability and the range in which they are used to contract, eliminate, or replace human functions. In the first, man as an individual can exercise authority on his own behalf and therefore assume responsibility. In the second, the machine takes over—first reducing the range of choice and motivation in both the operator and the client, and second imposing its own logic and demand on both. Survival depends on establishing procedures which permit ordinary people to recognize these ranges and to opt for survival in freedom, to evaluate the structure built into tools and institutions so they can exclude those which by their structure are destructive, and control those which are useful.”— Ivan Illich, Tools for Conviviality (1973)
I recently stumbled upon a video of a 2018 seminar on the life and thought of Ivan Illich, which was held at Penn State, where Illich held an appointment as a visiting professor in the Department of Philosophy and the STS program from 1986 to 1996. Several of Illich’s friends and collaborators—including Carl Mitcham, whom I had the pleasure of speaking with this summer—were in attendance. In his opening comments, Sajay Samuel spoke about Illich’s work having reached its “hour of legibility.”
The same thought had occurred to me of late, although I had not put it quite so felicitously. I take him to mean that Illich’s work makes even better sense and is even more compelling now than when it was initially published. Two years later, it seems to me that the value of Illich’s work is even more apparent.
Over the past couple of months, believing that Ivan Illich’s thought indeed spoke with renewed urgency to our moment, I’ve revisited two of his earliest and best known books, Tools for Conviviality and Deschooling Society. As always, reading Illich was a bracing experience—refreshing, challenging, and provocative in the best sense.
There were three key themes that especially caught my attention this time around and on which I’ve continued to dwell. I thought it might be useful to discuss them here, even if only briefly.
Thresholds and Limits
Tools for Conviviality opens with a discussion of what Illich called “two watersheds” in medicine. The medical profession would be the subject of a later book, Medical Nemesis, now published as The Limits of Medicine, but here, in a few brief pages, Illich anticipates that later argument. In his view, medicine passed through two watersheds in the twentieth century. Through the first lay significantly better outcomes and improved health, achieved by way of relatively basic developments and discoveries. Through the second, gains began to be reversed by the less obvious social costs of the professionalization of medicine.
Illich opens with this discussion of medicine in order to illustrate a larger pattern he identifies throughout modern society. He observed that there came a point at which a tool or an institution reached a scale such that putative goods were undermined, gains were reversed, and the tool or institution became a threat to society.
In explaining the purpose of Tools for Conviviality, for example, Illich proposed the concept of “a multidimensional balance of human life which can serve as a framework for evaluating man’s relation to his tools.” “In each of several dimensions of this balance,” Illich wrote, “it is possible to identify a natural scale.” He goes on to add that “when an enterprise grows beyond a certain point on this scale, it first frustrates the end for which it was originally designed, and then rapidly becomes a threat to society itself.”
Tools for Conviviality ends up being, in part, Illich’s attempt to lay a foundation for identifying these scales and clarifying the nature of the implied limits. “Present research,” he observed,
is overwhelmingly concentrated in two directions: research and development for breakthroughs to the better production of better wares and general systems analysis concerned with protecting man for further consumption. Future research ought to lead in the opposite direction; let us call it counterfoil research. Counterfoil research also has two major tasks: to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool systems that optimize the balance of life, thereby maximizing liberty for all.
Likewise, in Deschooling Society, Illich wrote, “We need research on the possible use of technology to create institutions which serve personal, creative, and autonomous interaction and the emergence of values which cannot be substantially controlled by technocrats.” “We need,” he adds, “counterfoil research to current futurology.”
Harder to get funding for counterfoil research, I suspect. But it is not hard to imagine how useful it might be. At the very least, we would do well to have in our critical toolkit the concept of a threshold beyond which the value of a tool or institution is jeopardized, beyond which, in fact, what had been good and useful becomes counter-productive and destructive.
Illich allows for a great deal of latitude in how such an insight might be applied. It would be possible, in his view, for tools or institutions to have what he called “an optimal, a tolerable, and a negative range.” Furthermore, he acknowledged that different societies would have different goals and ends and, thus, different ways of arriving at an appropriate techno-social configuration. “The criteria of conviviality are to be considered as guidelines,” Illich wrote, “to a continuous process by which a society’s members defend their liberty, and not as a set of prescriptions which can be mechanically applied.”
But it was clear to Illich that we must acknowledge that such limits and scales exist. Written in the early 70s, Deschooling Society especially makes frequent use of an analogy to the American war effort in Vietnam. Illich refers to escalation as “the American way of doing things,” and he hardly means it as a compliment.
So, for example, in the closing lines of the first chapter of Tools for Conviviality, Illich writes, “It has become fashionable to say that where science and technology have created problems, it is only more scientific understanding and better technology that can carry us past them.” He goes on: “The pooling of stores of information, the building up of a knowledge stock, the attempt to overwhelm the present problems by the production of more science is the ultimate attempt to solve a crisis by escalation.”
Solving a crisis by escalation seems not to have gone out of fashion. It signals, of course, a failure of imagination, but also an institutional imperative. What can an institution possibly offer you except more of itself? For example, the one remedy for the problems it has unleashed that Facebook cannot contemplate is suspending operations. What is never questioned is the underlying ideology that connection is an unalloyed good and we always need more of it.
What this ignores is the possibility that, as Illich argued, beyond a certain threshold more, bigger, faster simply becomes counter-productive and then destructive. It is a possibility to which we ought to be critically attuned.
Tools to Work With
In Tools for Conviviality, Illich offered a simple proposition: “People need new tools to work with rather than tools that ‘work’ for them.” He goes on to add that people “need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves.”
Underlying this claim was Illich’s belief that western societies made a fundamental mistake in conceiving of machines as providing an alternative to slave labor.
“Between the High Middle Ages and the Enlightenment,” Illich argued,
the alchemic dream misled many otherwise authentic Western humanists. The illusion prevailed that the machine was a laboratory-made homunculus, and that it could do our labor instead of slaves. It is now time to correct this mistake and shake off the illusion that men are born to be slaveholders and that the only thing wrong in the past was that not all men could be equally so.
Earlier Illich had explained how “for a hundred years we have tried to make machines work for men and to school men for life in their service.” “Now it turns out,” Illich observed, “that machines do not ‘work’ and that people cannot be schooled for a life at the service of machines. The hypothesis on which the experiment was built must now be discarded.” That hypothesis, according to Illich, “was that machines can replace slaves.” He added: “The evidence shows that, used for this purpose, machines enslave men.”
This claim—that we need tools to work with rather than tools that work for us—exemplifies what I read as Illich’s concern for human dignity and autonomy, properly understood. It is abundantly clear as you read Illich’s work that he was not interested in an abstract or disinterested critique of technology or industrial civilization. One of the blurbs that often ends up on Illich’s books is from the Times Educational Supplement. It is a snippet of a sentence that reads “… a famous and savage critic of industrial society …” This is true, but only because he was a fierce advocate for a particular vision of human flourishing, which, in his view, industrial society demolished. At least that is how I read Illich.
One aspect of this vision involved the ability of men and women to provide for themselves, to take responsibility for their health and learning, to be self-directed with regards to work they valued and found meaningful. “Progress should mean growing competence in self-care rather than growing dependence,” Illich believed. He believed, too, that the liberation promised by modern industrial society amounted to a profound deskilling of human beings and their consequent dependence upon institutions that increasingly dictated the terms of their worth relative to standards and criteria that had little or nothing to do with the good of the people they claimed to serve.
The point resonates today in light of present discussions of automation. Illich understood that if we proceed on the assumption that we need better tools to work for us, we will eventually end up “degraded to the status of mere consumers.” Consider how debates about the merits of automation and the potential of mass technological unemployment often play out. Others more learned in this matter than I am are free to correct me, but among those who worry about such things it appears that two “solutions” present themselves. Either universal basic income kicks in to make up for permanently lost wages or else automation renders goods and services so cheap that lower wages are offset. In either case, individuals are presumed to be mere consumers whose well-being depends chiefly on their capacity to procure goods and experiences. It is a condition aptly summed up by Illich when he feared that we were traveling along a path that would lead to “a further increase of useful things for useless people.”
By contrast, Illich argued that “People feel joy, as opposed to mere pleasure, to the extent that their activities are creative; while the growth of tools beyond a certain point increases regimentation, dependence, exploitation, and impotence.”
Such an approach, in Illich’s view, fails to reckon with what people actually need. Yes, there are certain goods and services that people undoubtedly need, and these needs vary from one place to another. People, however, need more than this. According to Illich, “they need above all the freedom to make the things among which they can live, to give shape to them according to their own tastes, and to put them to use in caring for and about others.”
We’ll get to that last line about caring for and about others in the final point below, but for now let us consider how Illich’s ideals are at odds with a technological milieu in which we feel ourselves increasingly caught in networks ostensibly designed to empower us but which actually make us all the more dependent on their operations. The consequences are material and psychological. “Present institutional purposes, which hallow industrial productivity at the expense of convivial effectiveness,” Illich warned, “are a major factor in the amorphousness and meaninglessness that plague contemporary society.”
Considering this aspect of Illich’s argument, I was reminded of Walker Percy’s essay “The Loss of the Creature.” The essay is too rich to adequately summarize, but suffice it to say that, like Illich, Percy feared that having accepted their role as consumers, individuals ceded their sovereignty over their own experience. Percy opens by explaining why it is so difficult for the sightseer to actually see the Grand Canyon. In short, because the sightseer does not approach the Grand Canyon as a sovereign knower, he has unburdened himself of that role in order to assume the role of consumer, in which role he approaches the canyon as an experience that has been overdetermined for him by the park service, postcards, brochures, photographs, etc.
“The highest point, the term of the sightseer's satisfaction,” Percy argues, “is not the sovereign discovery of the thing before him; it is rather the measuring up of the thing to the criterion of the preformed symbolic complex.”
But this was only one illustration of a more pervasive dynamic. “This loss of sovereignty,” Percy concludes,
is not a marginal process, as might appear from my example of estranged sightseers. It is a generalized surrender of the horizon to those experts within whose competence a particular segment of the horizon is thought to lie. Kwakiutls are surrendered to Franz Boas; decaying Southern mansions are surrendered to Faulkner and Tennessee Williams.
Percy, a bit further on, adds, “No matter what the object or event is, whether it is a star, a swallow, a Kwakiutl, a ‘psychological phenomenon,’ the layman who confronts it does not confront it as a sovereign person, as Crusoe confronts a seashell he finds on the beach.” Instead, Percy writes,
The highest role he can conceive himself as playing is to be able to recognize the title of the object, to return it to the appropriate expert and have it certified as a genuine find. He does not even permit himself to see the thing—as Gerard Hopkins could see a rock or a cloud or a field. If anyone asks him why he doesn't look, he may reply that he didn't take that subject in college (or he hasn't read Faulkner).
And with that last line, of course, Percy is, in the 1950s, anticipating elements of Illich’s critique of schooling. Indeed, he puts that matter more pointedly when he writes, “If we look into the ways in which the student can recover the dogfish (or the sonnet), we will see that they have in common the stratagem of avoiding the educator's direct presentation of the object as a lesson to be learned and restoring access to sonnet and dogfish as beings to be known, reasserting the sovereignty of knower over known.”
The confluence of Illich’s critique and Percy’s becomes even more evident near the end of “The Loss of the Creature.” “The situation of the tourist at the Grand Canyon and the biology student,” Percy explains,
are special cases of a predicament in which everyone finds himself in a modern technical society—a society, that is, in which there is a division between expert and layman, planner and consumer, in which experts and planners take special measures to teach and edify the consumer. The measures taken are measures appropriate to the consumer: The expert and the planner know and plan, but the consumer needs and experiences.
Illich, who in the late 70s and early 80s wrote at length about what we might think of as the social construction of needs, and who, already in Deschooling Society, argued that the chief lesson of modern schooling is that we all need it and more of it, would readily agree with Percy. The chief difference, so far as I can see, is one of emphasis. Percy focuses on the surrender of experience to the intellectual class; Illich sees this same pattern at work in both the intellectual and material realms of human experience, abetted by the nature of modern technology.
Illich’s response to this was his call for convivial tools, or tools “which give each person who uses them the greatest opportunity to enrich the environment with the fruits of his or her vision.” “In any society,” Illich claimed, “as conviviality is reduced below a certain level, no amount of industrial productivity can effectively satisfy the needs it creates among society’s members.” This is because of a fundamental paradox: consumer society excels at generating and ostensibly satisfying ever more trivial needs while simultaneously eviscerating our capacity to satisfy our deepest and most pressing ones.
Interdependence
The idea of autonomy or the self-sufficiency of the person is only one side of the coin, however. The other is the fact that, for Illich, our independence from paternalistic institutions and manipulative tools is for the sake of our mutual interdependence. In Tools for Conviviality he writes, “I consider conviviality to be individual freedom realized in personal interdependence and, as such, an intrinsic ethical value.”
Industrial society had, in Illich’s view, deskilled us in the arts of both self-care and mutual care. Having outsourced care to institutions and the service industry, we were more helpless and more adrift, bereft not only of a measure of dignity but also of the deeply human consolations of giving and receiving help and comfort.
According to Illich, “People have a native capacity for healing, consoling, moving, learning, building their houses, and burying their dead.” Unfortunately, we had, in his view, ceded each of these capacities to the professional classes. What troubles me most about this development is not the loss of personal satisfactions and the sense of purpose that might arise from being useful to another. Rather, it is that these practices, which we hardly ever now undertake for one another, were also what we might think of as binding agents. Through my care for another I reach out beyond myself and even beyond the confines of my home to the wider community, to my neighbors.
My point here is that an experience of community is not so much a state to be inhabited as it is a condition to be achieved, and it is achieved by constant practice. By caring for my neighbor in a time of need, I forge a communal bond. My neighbor becomes less of an abstraction, he or she takes on flesh and blood as it were. Their history and my history are intertwined. We build up a narrative stock over time that further binds us in memory.
When we have outsourced all of our mutual care to institutions and professionals, these ties atrophy. We recede from a common world of mutual interdependence into our own private enclaves of consumption, unable either to care for ourselves or for our neighbors. Naturally, there are advantages to be gained by such a retreat. Mutual care can be hard, inconvenient, and possibly thankless work. It entails responsibilities and obligations. In Deschooling Society, Illich notes that “education for all means education by all.” Would we be prepared or willing to step into this role?
In the 60s and 70s, Illich had especially in view the schools, the medical profession, and the transportation industry. It is clear, however, that while these continue to inhibit the kind of capacity and skill in caring for one another that Illich prized, new challenges have presented themselves.
In the 79 theses he presented in “Attending to Technology,” Alan Jacobs makes some important observations along these lines. He begins with a story that had recently been in the news:
When Danielle and Alexander Meitiv of Silver Spring, Maryland tried to teach their two children, ages six and ten, how to make their way home on foot from a mile away, the children were picked up by police and the parents charged with child neglect. Yet whether the Meitivs were right or wrong in the degree of responsibility they entrusted their children with, what they did is the opposite of neglect — it is thoughtful, intentional training of their children for responsible adulthood. They instructed their children with care and the children practiced responsible freedom before being fully entrusted with it. And then the state intervened before the lesson could be completed.
As Jacobs notes, the charges were eventually dropped, but Jacobs draws a powerful lesson from this episode. “I think this event is best described,” Jacobs explained, “as the state enforcing surveillance as the normative form of care.”
Jacobs goes on as follows:
But by enforcing surveillance as the normative form of care, the state effectively erases the significance of all other forms of care. Parents might teach their children nothing of value, no moral standards, no self-discipline, no compassion for others — but as long as those children are incessantly observed, then according to the state’s standards the parents of those children are good parents. And they are good because they are training their children to accept a lifetime of passive acceptance of surveillance.
I’m not sure whether Jacobs would see here, as I do, a continuation of the kind of deskilling and outsourcing of care that Illich challenged so forcefully in the early 70s. But it seems to me that this episode and the rise of “surveillance as the normative form of care” both lie on a trajectory with those developments. Indeed, a recovery of Illich for the 21st century would involve just such an expansion of his argument. It would recognize how digital technology allowed institutions to further escalate their reach rather than reckon with the limits they had previously ignored. Already in the 70s, Illich perceived that modern institutions were entering a period of crisis, but he also saw an opportunity. “I believe that the present crisis of our major institutions ought to be welcomed as a crisis of revolutionary liberation,” Illich wrote.
“This world-wide crisis of world-wide institutions,” he added, “can lead to a new consciousness about the nature of tools and to majority action for their control. If tools are not controlled politically, they will be managed in a belated technocratic response to disaster.”
If these lines retain a sense of urgency, I would suggest it is because we are in a moment not unlike Illich’s. While digital technologies appeared as a radical challenge to analog institutions, they have also been used to prop up those older institutions, allowing them to persist in the mode of escalation. The manner in which they have been deployed has also amounted to a doubling down on the deskilling and alienation that were already underway under the regime of industrial-age institutions. And now, we are once again confronted with a crisis of institutions. In some respects, it is a new crisis. But from another vantage point, it is the same crisis reignited after the digital extensions of analog institutions have reached their own limits.
Where does this leave us? I am personally struck by the persistence of the virtues of hospitality and friendship that run through Illich’s work. So I will give Illich the last words along these lines. In a conversation on Jerry Brown’s old radio show, Illich observed that in classical society, hospitality was “a condition consequent on a good society in politics.” Today, however, he believed that it “might be the starting point of politics.”
“But this is difficult,” he added,
because hospitality requires a threshold over which I can lead you and TV, internet, newspaper, the idea of communication, abolished the walls and therefore also the friendship, the possibility of leading somebody over the door. Hospitality requires a table around which you can sit and if people get tired they can sleep … I do think that if I had to choose one word to which hope can be tied it is hospitality. A practice of hospitality recovering threshold, table, patience, listening, and from there generating seedbeds for virtue and friendship on the one hand. On the other hand radiating out for possible community, for rebirth of community.
The Convivial Society is free to all and supported by the generosity of readers. If you find this work helpful and valuable consider becoming a paid subscriber.
News and Resources
* Kate Klonick on the Facebook oversight board.
* Abeba Birhane on the algorithmic colonization of Africa.
* Anthony Townsend on the hundred year history of self-driving cars.
* Gil Kazimirov on how the ringtone “sparked the mobile revolution.”
* Renee DiResta on crowds and technology.
* Adrian Hon on what ARGs can teach us about QAnon.
* Sophie Haigney’s elegy for the landline in literature.
* Eric Levitz interviews Adam Tooze on the world after COVID-19.
* Paul Levinson recalls how he met Marshall McLuhan.
Re-framings
— From “Neil Postman's Advice on How to Live the Rest of Your Life,” compiled by Janet Sternberg.
10. Keep your opinions to a minimum.
It is not necessary for you to have an opinion on every public issue. Although you may be entitled to have an opinion, you probably are not qualified to have an opinion on most matters. Although middle-class America seems to require an opinion on everything, you will find it liberating to say the phrase “I don’t know enough about it to form an opinion.”
11. Carefully limit the information input you will allow.
Too little information is dangerous, but so is too much. As a general rule, do not take in any more information after seven or eight o’clock at night. You need protection from the relentless flow of information in modern American culture. This principle, by the way, explains the popularity of watching TV reruns, which provide amusement without new information.
12. Seek significance in your work, friends, and family, where potency and output are still possible.
Work, friends, and family are the areas where what you think and do matters. Avoid thinking too much about matters you cannot do anything about. It may help to remember that information used to be a survival necessity, not a commodity. Information used to be an agent or instrument for action, but nowadays, information is often inert — you cannot act on it. Thinking too much about things you cannot affect makes you feel impotent and trivia-centered. Try to dump useless information from your head.
— In a recent piece on language and brain-to-brain interfaces, Mark Dingemanse observed, “There is a fitting Zulu saying here: Umuntu ngumuntu ngabantu, or ‘A person is a person through other people.’” This recalled part of an exchange among Ivan Illich, Carl Mitcham, and Jerry Brown on a radio show Brown used to host in the 1990s. Here is Illich, discussing his then-recent work on Hugh of St. Victor, speaking about seeing himself in the pupil of another’s eye:
It is you making me the gift of that which Ivan is for you. That's the one who says "I" here. I'm purposely not saying, this is my person, this is my individuality, this is my ego. No. I'm saying this is the one who answers you here, whom you have given to him. This is how Hugh explains it here. This is how the rabbinical tradition explains it. That I cannot come to be fully human unless I have received myself as a gift and accepted myself as a gift of somebody who has, well today we say distorted me the way you distorted me by loving me. Now, friendship in the Greek tradition, in the Roman tradition, in the old tradition, was always viewed as the highest point which virtue can reach. Virtue meaning here the habitual facility of doing the good thing which is fostered by what the Greeks called politeia, political life, community life. I know it was a political life in which I wouldn't have liked to participate, with the slaves around and with the women excluded, but I still have to go to Plato or to Cicero. They conceived of friendship as a flowering, a supreme flowering of the interaction which happens in a good political society. This is what makes long experience so painful with you that every time we are together you make me feel most uncomfortable about my not being like you. I know it's not my vocation. It's your vocation. Structuring community and society in a political way. But I do not believe that friendship today can flower out, can come out, of political life. I do believe that if there is something like a political life to be, to remain for us, in this world of technology, then it begins with friendship. Therefore my task is to cultivate disciplined, self-denying, careful, tasteful friendships. Mutual friendships always. I and you and I hope a third one, out of which perhaps community can grow. Because perhaps here we can find what the good is. To make it short, while once friendship in our western tradition was the supreme flower of politics, I do think that community life, if it exists at all today, is in some way the consequence of friendship cultivated by each one who initiates it. This is of course a challenge to the idea of democracy which goes beyond anything which people usually talk about, saying each one of you is responsible for the friendships he can develop because society will be as good as the political result of these friendships will be.
— Wendell Berry on freedom and friendship:
In our limitless selfishness, we have tried to define “freedom,” for example, as an escape from all restraint. But, as my friend Bert Hornback has explained in his book The Wisdom in Words, “free” is etymologically related to “friend.” These words come from the same Indo-European root, which carries the sense of “dear” or “beloved.” We set our friends free by our love for them, with the implied restraints of faithfulness or loyalty. And this suggests that our “identity” is located not in the impulse of selfhood but in deliberately maintained connections.
The Conversation
I don’t think you were checking your inbox each morning with bated breath wondering if the Convivial Society would ever appear there again, but perhaps you’ve noted that this installment is coming in rather late. Life happens, as they say, sometimes a lot of it all at once. It’s been one of those months—nothing terribly bad, mind you; indeed, mostly some very welcome developments, but draining nonetheless. Suffice it to say, you have my apologies for this late issue and my commitment to still get a second installment out before the month closes.
Trust you all are well.
Cheers,
Michael
“I believe that a desirable future depends on our deliberately choosing a life of action over a life of consumption, on our engendering a lifestyle which will enable us to be spontaneous, independent, yet related to each other, rather than maintaining a lifestyle which only allows us to make and unmake, produce and consume – a style of life which is merely a way station on the road to the depletion and pollution of the environment. The future depends more upon our choice of institutions which support a life of action than on our developing new ideologies and technologies.”— Ivan Illich, Deschooling Society (1971)
Programming note: Glad so many of you seem to have found the audio version useful. You are now able to follow the newsletter as a podcast on iTunes, Spotify, and Stitcher, if by “podcast” we understand simply me reading the main essay in a less than animated manner. Just click “Listen in Podcast app” above. Hope that covers most of you who find that feature useful. And sure, feel free to leave a rating or review if you’re so inclined.
It’s been just about thirty years since sociologist James Hunter published Culture Wars: The Struggle to Define America. The titular phrase has since become a staple of the public’s discourse about itself, although, as all such phrases tend to do, it has floated free of its native context in Hunter’s argument about the then-novel rifts in American public life.
Lately, I’ve found myself taking recourse to the phrase more than I ordinarily would. I’ve typically avoided it because the phrase itself seemed to encourage what it sought merely to describe. In other words, as an analytical frame, the phrase conditions us to think of cultural dynamics as warfare and thus locks us into the dysfunctions such a view entails. “Metaphors we live by” and what not. Despite my misgivings, however, the phrase is undoubtedly useful shorthand for much of what transpires in the public sphere. So, for example, when someone asks about the face mask fiasco here in the U.S., I’ve simply suggested, perhaps too glibly, that face coverings were regrettably enlisted into the culture wars. But here is another problem to consider. Precisely because it is so useful as shorthand, it may be deterring us from the work of thinking more carefully and more deeply about our situation. Concepts, after all, can both clarify and obscure.
But I’ve also been thinking about the “culture wars” frame because, whatever else we might say about such conflicts, they have not remained static during the nearly three decades since Hunter wrote his book. One key development, of course, is that those thirty years overlap almost exactly with the emergence of the digital public sphere. In fact, several weeks ago what seemed like a useful analogy occurred to me: The advent of digital media has been to the culture wars what the advent of industrialized weaponry was to conventional warfare.
With that thesis in mind, I thought it might be useful to revisit Hunter’s work to see if it can still shed some light on our present situation and, specifically, to explore what difference digital media has made to the conduct of the “culture wars.” For brevity and clarity’s sake, although I’m not sure I’ve succeeded on either front, I’ve chosen to outline a few key points for our consideration. So here we go.
1. The culture wars predate the rise of digital media
This may seem like an obvious point, but it’s worth emphasizing at the outset. One consequence of our immersion in the flow of digital media tends to be a heightened experience of presentism, involving both a foreshortening of our temporal horizons and the presumption that what we are experiencing must be novel or unprecedented. The danger in this case is that we mistake the culture wars for an effect of social media rather than understanding them as a longstanding feature of American public life with a complex, multi-faceted relationship to social media.
In 1991, Hunter, surveying familiar terrain—for example: the family, free speech, the arts, education, the Supreme Court, and electoral politics—amply documented most if not all of the features we tend to lament about the state of public discourse today: the polarization, the intractable and bitter nature of the debates, the underlying animosities, the absence of civility, the characterization of ideological opponents as enemies to be eradicated, and, notably given present debates, what Hunter called the “specter of intolerance” and the supposed “totalitarian ‘threat’” posed by political opponents.
Describing what he calls the “eclipse of the middle,” he also notes how a need to be “stirred and titillated” means that “public debate that is sensational is more likely to arouse and capture the attention of ordinary people than are methodical and reflective arguments.” “The net effect of loud, sensational clamor,” he adds, “is to mute more quiet and temperate voices.”
Hunter also commented on the high level of suspicion in public discourse: “In today’s climate of apprehension and distrust,” he wrote, “opinions that attempt to be distinctive and ameliorating tend to be classified with all others that do not affirm a loyalty to one’s own cause.” Hunter also offered an incisive analysis of the role media technology played in creating these conditions, but we’ll get to that in another point below.
Finally, while Hunter’s work popularized “culture war” talk, he did not coin the phrase, which, as he notes, dates back to the 1870s. The culture war concept has its origins in the German word Kulturkampf, or “cultural struggle,” which specifically designated a fight between Protestant and Catholic factions for control of the cultural and educational institutions of the newly unified German state.
As Hunter goes on to show, the origins of the American culture wars were also rooted in similar struggles between Protestants and Catholics and later between Christians and Jews in the 19th century. From our perspective, of course, these appear chiefly as intramural squabbles within a larger and decidedly western religious frame of reference. By the late 1980s, however, Hunter perceived an important shift in the nature of these conflicts running through American society and politics. Culture Wars was Hunter’s effort to better understand these developments and map their consequences.
It’s important to remind ourselves that culture war dynamics precede the emergence of social media because we should not fool ourselves into thinking that if only we managed somehow to rein in social media, a temperate and civil public sphere would emerge. Although, as I’ll discuss below, social media is obviously not helping matters.
2. The culture wars are rooted in competing sources of moral authority
It’s useful to recall, as Hunter insists, that the culture wars encompass deep and genuine disagreements about what is good, what is just, and what is beautiful. After opening with a series of dispatches from the front, as it were, Hunter invites us to consider the following: “What if these events are not just flashes of political madness but reveal honest concerns of different communities engaged in a deeply rooted cultural conflict?”
Hunter believed that such was indeed the case. “America,” he claimed, “is in the midst of a culture war that has had and will continue to have reverberations not only within public policy but within the lives of ordinary Americans everywhere.” This has undoubtedly proven to be the case, even as the nature of the culture wars has shifted once again.
Presently, however, we tend to speak of the “culture wars” in a more pejorative or dismissive sense and with more than a little exasperation. I understand the temptation to do so. In fact, I’ve used the term in this way on more than one occasion of late. And there’s good reason to do so. Contemporary manifestations of the culture wars are often characterized by seemingly trivial grievances, regarding, for example, what constitutes an appropriate Christmas/holiday greeting. But even in their mannerist manifestations, they still betray something of the underlying moral concerns. It is also the case—and this is, I think, the more relevant consideration—that the culture wars are now often driven by what we might think of as mercenary grifters, who inflame and exploit whatever sincerely held moral principles initially animated the concerns of ordinary citizens.
So, while one consequence of the digitization of cultural warfare is a generalized presumption of bad faith, we should bear in mind that the culture wars are at some level still rooted in deeply held moral beliefs grounded in competing and irreconcilable sources of moral authority. This is not to say, of course, that all participants and positions are morally equivalent. This is obviously not the case. But, if with any given culture war skirmish we fail to see what all the fuss is about, that is likely because we inhabit a different moral order than those whose opinions we find so baffling or distasteful.
The key point here is simply this: we will misunderstand our cultural situation if we reduce culture war issues, especially those we care little about, to cases of mere posturing, bad faith politicization, or sincere rubes being duped by nihilist operators.
3. Digital media has transformed the conduct of the culture wars
So if we can’t blame the culture wars on digital media, how exactly should we understand the relationship between the two?
As I suggested above, I think it’s useful to think of the relationship by analogy to the transformation of warfare by industrialization.
I’ll begin by noting that media technology plays an important role in Hunter’s analysis. “The media technology that makes public speech possible,” Hunter noted, “gives public discourse a life and logic of its own.” Here is how he put the matter in a lengthy statement that is italicized for emphasis in the text:
The polarization of contemporary public discussion is in fact intensified by and institutionalized through the very media by which that discussion takes place. It is through these media that public discourse acquires a life of its own; not only do the categories of public rhetoric become detached from the intentions of the speaker, they also overpower the subtleties of perspective and opinion of the vast majority of citizens who position themselves ‘somewhere in the middle’ of these debates.
Now, here comes the surprising bit. In a chapter titled “Technology and Public Discourse,” Hunter devotes the bulk of his discussion to … direct mail, you know, the countless postcards and fliers that arrive in your mailbox around election time. Hunter also talked about television and radio, but he argued that direct mail was the relatively novel media technology that was really stoking the culture wars. I don’t mean to belittle his discussion; far from it. It can be instructive to understand the effects of old media when they were new. And, what’s more, the choice of direct mail makes a striking point of comparison with the conditions of the digitized culture wars, in which direct mail finds its closest analog in targeted email messages and social media ads. The relative sophistication, personalization, frequency, and scale of the latter nicely illustrate the consequences of digitization for the culture wars.
So, let’s come back to my analogy to industrialized warfare. While historians tend to locate the origins of industrialized warfare in the late stages of the American Civil War, it is not until World War I that we see the full effects of industrial technology on the conduct of war. Notable features of industrialization applied to warfare include significant advances in weaponry (the machine gun, long-distance artillery, exploding shells, etc.), the development of steam-powered ironclad naval vessels, the deployment of armies by rail, instantaneous communication by telegraph, and, later, the advent of tanks, aircraft, and poison gas.
In short, the industrialization of warfare massively augmented the destructive capacity of modern armies by enhancing their speed, scale, and power. Additionally, industrialized war became total war, encompassing the whole of society and blurring the distinction between civilians and combatants. Finally, as all forms of mechanization tend to do, it further depersonalized the experience of battle by making it possible to kill effectively at great distances. As a consequence of these developments, the norms, tactics, strategies, psychology, and consequences of modern war changed markedly.
The logic of the analogy is, of course, straightforward. Digital media has dramatically enhanced the speed, scale, and power of the tools by which the culture wars are waged and thus transformed their norms, tactics, strategies, psychology, and consequences.
Culture war skirmishes now unfold at a moment’s notice. The lines of battle form quickly on social media platforms. Tens of thousands of participants, not all of them human, are mobilized and deployed to the front. They work at scale, often in coordinated actions against certain individuals, working chiefly to discredit and discomfit, but also to confuse, incite, exhaust, and demoralize. The older, perhaps always idealistic aims of persuasion and refutation are no longer adequate to the new situation. Moreover, skirmishes that become pitched battles spill out indefinitely, becoming black holes of attention and massive drains of resources and energy.
Along these lines we can see that the power of digital media lies in their immediacy and scale, but also in their ability to widen the war. Direct mail may have targeted you, but social media involves you directly in the action. Take up your memes, comrades. We are no longer mere spectators of battles waged for our allegiance by elite warriors of the political and intellectual classes. On the digitized front, we are all armed and urged to join the fray. The elites themselves quickly become the victims of the very cultural warfare they had once stoked to their advantage.
Digitization also yields total culture war. No aspect of our experience goes untouched. This is a consequence both of the wide-scale distribution of the weapons of cultural warfare and of how these same tools erode the older, always tenuous divide between public and private life. Now, the culture wars are total in the sense that they are all-encompassing and unrelenting. It’s not so much that we’re always on the front lines, it’s that the front lines are always with us. And while it is true that the culture wars have always involved public debate of private matters, the digitized culture wars swallow up even more aspects of private life.
One way of thinking about this is along the lines of sociologist Erving Goffman’s old dramaturgical distinction between front stage social life and back stage private life. In the culture war setting, we might frame that distinction as social life on the frontlines and private life that unfolds in the relative safety of the rear. To the same degree that digital media has blurred the front stage/back stage distinction and involved us in the work of perpetual impression management, so, too, has digital media blurred the distinction between the frontline and the rear in the culture wars and made all aspects of our experience potential fodder. This also explains the frivolity of some of our culture war skirmishes. The logic of escalation precipitated by digital tools demands that more and more of civilian life be drawn into the fray, regardless of how seemingly trivial it may be.
This is also a useful way of framing the so-called “cancel culture” debate. The debate is spurred precisely by the digitization of the culture wars, which has made it necessary to negotiate a new understanding of how the wars ought to be conducted. Who is a legitimate target? What is a proportionate response? What actions and opinions may legitimately be drawn into the fight? The old analog rules no longer work, and we have not arrived at a new consensus.
So while it would be a mistake to believe that digital media has generated the culture wars, it would be equally mistaken to believe that we are now merely experiencing the same old culture wars. It is clear that the new digital battlefield has radically altered the nature of cultural conflict.
It should be clear, too, that the digitized culture wars give every indication of being interminable by nature and design. Given that the culture wars are rooted in longstanding moral and ideological conflicts stemming from fundamentally irreconcilable sources of moral authority, they will not simply peter out. There is little incentive for deescalation (other than mere exhaustion), and it is hard to imagine what exactly a truce might look like, much less a genuine peace or reconciliation. And given that the platforms that sustain the digitized culture war stand to profit from its proliferation, and that the culture wars arise from and, however inordinately, answer the basic human need to take meaningful and morally consequential action, especially in a media-political regime that would otherwise render us morally anesthetized producers and consumers, they will tend to persist unabated.
4. Digital media has realigned the culture war
While it’s important to understand how digital media has transformed the way the culture wars are conducted, what I tend to find most interesting and significant is how the lines of the culture war are being redrawn and alliances reconfigured. Again, Hunter’s work was especially useful thirty years ago in explaining how the culture wars were redrawing the lines of culture conflict. We need a similar effort to understand why our old categories no longer work as a guide to the current socio-political field. But, while I think this is the more interesting and important terrain, I’m afraid that what I’ve got to offer definitely feels far more tentative and speculative. That said, here are a few things to consider.
First, by way of background, Hunter recognized that the increasingly acrimonious cultural wars of the late 20th century differed from earlier instances because American society had witnessed both a proliferation of sources of moral authority and, consequently, a realignment of the traditional actors into new configurations. He recognized that the old lines separating Protestants from Catholics and varieties of Protestant denominations from each other no longer held firm. The new distribution of moral authority cut across the old institutional lines. Hunter identified two key groupings which he labeled the small-o orthodox and the small-p progressive. They were characterized chiefly by whether they located moral authority in external and traditional sources, as in the case of the orthodox, or in the determinations of the self or the deliverances of scientific rationalism, as in the case of the progressives. (It’s important to note that Hunter was using the terms orthodox and progressive in an idiosyncratic manner that overlaps with but is not equivalent to how the words were used then or now.) In the then-emerging culture war Hunter was mapping, orthodox Catholics, for example, were more likely to find common cause with orthodox Baptists and orthodox Jews than they were with their ostensible co-religionists, progressive Catholics. It is striking to recall that in the mid-1990s a well-known Catholic moral philosopher wrote a book titled Ecumenical Jihad urging those Hunter would call orthodox Christians, Jews, and Muslims to join forces on the culture war front. It’s hard to think of a better example of the kind of realignment Hunter was analyzing.
Second, while Hunter focused on the way that media technology was deployed to further the existing causes of the new culture war coalitions, I think it’s important to understand how the digitization of the culture wars is itself generating new configurations. Direct mailers, for example, targeted existing mailing lists. In some sense, direct email works the same way even if it has enhanced capabilities. Targeted social media ads seem like an almost qualitatively different technique; at least they are less reliant on an existing mailing list. Critically, however, digital media itself generates new identities and groups in a way that postal technology did not. This latter consequence of digital media engenders profound changes in the configuration of the culture wars, scrambling what had been the traditional Left/Right spectrum in American politics and generating what would have seemed like bizarre partnerships and affinities across those old lines.
Third, in Hunter’s analysis, the orthodox/progressive divide arose when the dominance of the old religious/theological consensus was challenged by a new locus of moral-cultural authority in subjective experience and scientific rationality. I would suggest that one of the transmutations we are witnessing can be attributed to the splintering of that once-new locus of authority into its constituent parts: subjective experience now set, to some degree, against a modernist version of scientific rationality. So, New Atheist types who in another age bore the mantle of progressive resistance to traditional authority are now cast as conservative defenders of a traditional and oppressive morality.
In fact, to the degree that the older individualist spirit of scientific rationalism can be understood as the foundation of the modern moral order, what we are witnessing is precisely its displacement by a new, still-emerging moral order. But again, I would argue that its decline was not simply a function of an intellectual victory on the old terms. It was rather a product of the tacit challenge posed by the experience of digital media to both the primacy of the modern ideal of individualism and to the rules of rationalist discourse in the public sphere.
Fourth, while digital media facilitates the emergence of virtually constituted, small-scale groups of affinity, it tends to have a disintegrating effect on the cohesion of diverse, large-scale bodies such as the nation-state. Consequently, another flash point around which the lines of the culture war are redrawn may be understood in terms of the diminishing plausibility of the nation-state as the product of a shared history and shared ideals and, thus, as a locus of identity. Once this shared history is contested and the shared ideals lose their hold on the public imagination, one might either seek to reground national identity along ethno-racial lines or else abandon or demote the ideal of national identity and patriotism.
Finally, another aspect of the emerging terrain may be described as the difference between those committed to technocratic modes of governance directed at the perpetuation of patterns of production and consumption, on the one hand, and, on the other, those animated by explicitly moral concerns about justice and equality, between those, in other words, who are determined to exercise authority without responsibility and those who desire the satisfactions of meaningful action toward the realization of justice and goodness as they understand it.
The deeper critique here may be to recognize that the culture wars, while rooted to some important degree in the genuine moral concerns of ordinary citizens, are themselves the product of the longstanding industrialization of politics and the triumph of technique. In both the case of institutionalization and the capture of politics by technique, the operations of the system become the system’s reason for being. Industrialized politics are politics scaled up to a level that precludes the possibility of genuine and ordinary human action and thus become increasingly unresponsive to human well-being. The culture wars are in this analysis a symptom of the breakdown of politics as the context within which fellow citizens navigate the challenges of a common life. In the place of such genuine politics, the culture wars offer us the often destructive illusion of politically significant action.
The Convivial Society is free to all and supported by the generosity of readers. If you find this work helpful and valuable consider becoming a paid subscriber.
News and Resources
* Sean McDonald on what he has termed “technology theater”: “There’s a well-documented history of the tendency to hype distracting, potentially problematic technology during disaster response, so it’s concerning, if not surprising, to see governments turning again to new technologies as a policy response to crisis. Expert public debates about the nuances of technologies, for example, provide serious political cover; they are a form of theatre — ‘technology theatre.’ The term builds on security expert Bruce Schneier’s ‘security theatre,’ which refers to security measures that make people feel secure, without doing anything to protect their security.” And: “The ultimate vulnerability for democracy isn’t a specific technology, it’s when we stop governing together. The technological responses to the COVID-19 pandemic aren’t technologically remarkable. They are notable because they shed light on the power grabs by governments, technology companies and law enforcement. Even in the best of circumstances, very few digitally focused government interventions have transparently defined validation requirements, performed necessity analyses or coordinated policy protections to address predictable harms.”
* Jackson Lears on “Quantifying Vitality: The Progressive Paradox”: “Our days became numbered long before the rise of Big Data and algorithmic governance. Indeed, the creation of statistical selves in the service of state and corporate bureaucracies was well underway by the early twentieth century, in the midst of what US historians still call the Progressive Era (in deference to the self-description of the reformers who dominated it). Eli Cook, Sarah Igo, Dan Bouk, and other gifted young historians have begun to explore sorting and categorizing institutions that branched out from their nineteenth-century predecessors, which had focused mainly on criminals and deviants. The new sorters were more catholic in their scope—life insurance actuaries quantifying the risks of insuring individual policyholders, pollsters using survey data in an attempt to construct a ‘majority man’ or ‘average American’—with their efforts culminating in the most ambitious tabulating scheme of all, the Social Security system, in 1935 …. The difference between Progressive Era biopolitics and contemporary biopolitics involves the intensification and acceleration of tendencies underway for more than a century—more powerful technology, but similar strategies for management and surveillance of the population.” This essay appeared in the latest issue of the Hedgehog Review, given over to questioning the quantified life.
* “The Atlas of Surveillance is a database of the surveillance technologies deployed by law enforcement in communities across the United States. This includes drones, body-worn cameras, automated license plate readers, facial recognition, and more.”
* July 16th was the 75th anniversary of the Trinity test, otherwise known as the first successful detonation of a nuclear weapon. Here are two pieces on the subject: “What If the Trinity Test Had Failed?” / “A Bomb In the Desert.”
* Nick Paumgarten writes a compelling essay (2008) about elevators, framed by the story of Nicholas White, who in 1999 was trapped in one for nearly two days: “Two things make tall buildings possible: the steel frame and the safety elevator. The elevator, underrated and overlooked, is to the city what paper is to reading and gunpowder is to war. Without the elevator, there would be no verticality, no density, and, without these, none of the urban advantages of energy efficiency, economic productivity, and cultural ferment.”
* Zito Madu, drawing on film and literature, reflects on the question of justice and race: “Each time I engage in these recurring protests, I think about how absurd they are. Not the protests in themselves, but the fact that they have to exist. The demand seems so simple, like Souleiman asking for his wages, that needing to make it is degrading. It is begging for something that already belongs to you.”
* Swiss police automated crime predictions but have little to show for it.
* “I Am a Model and I Know That Artificial Intelligence Will Eventually Take My Job”
* Call for papers from the International Journal of Illich Studies for their next issue: Conviviality for the Day After “Normal.”
* Shortlist for best astronomy photographs of the year.
* Cambridge University has digitized its archive relating to the excavation of the ancient city of Mycenae.
* “Reading Station” by Charles Hindley & Co., circa 1890:
Re-framings
— “We live in a world where there is more and more information, and less and less meaning,” writes Jean Baudrillard in Simulacra and Simulation (1981). “Information devours its own content,” he adds, “It devours communication and the social.” More:
Rather than creating communication, [information] exhausts itself in the act of staging communication. Rather than producing meaning, it exhausts itself in the staging of meaning. A gigantic process of simulation that is very familiar. The nondirective interview, speech, listeners who call in, participation at every level, blackmail through speech: ‘You are concerned, you are the event, etc.’ More and more information is invaded by this kind of phantom content, this homeopathic grafting, this awakening dream of communication. A circular arrangement through which one stages the desire of the audience, the antitheater of communication, which, as one knows, is never anything but the recycling in the negative of the traditional institution, the integrated circuit of the negative. Immense energies are deployed to hold this simulacrum at bay, to avoid the brutal desimulation that would confront us in the face of the obvious reality of a radical loss of meaning.
— Mark Boyle writes about the “not so simple life,” that is, a life without most modern technologies. “As Kirkpatrick Sale wrote in Human Scale,” Boyle explains, “my wish became ‘to complexify, not simplify.’” He’s made choices the majority of us will not and probably cannot make. But we may still learn something from his experience. There were several passages I could have excerpted. Here is one of them:
As I have no clock, my relationship with time has changed dramatically. Things do take longer. There is no electric kettle to make my tea in three minutes, no supermarket to pop into for bread and pizza. But here’s the odd bit: I find myself with more time. Writing with a pencil, I can’t get distracted by clickbait or advertising. Life has a more relaxed pace, with less stress. I feel in tune not only with seasonal rhythms but also with my own body’s rhythm. Instead of an alarm clock, I wake up to the sounds of birds, and I’ve never slept better. If I want to drop everything and go hiking, I can. I am finally learning to “be here now.” There’s more diversity, less repetition. Mindfulness is no longer a spiritual luxury, but an economic necessity. While this may not be the most profitable career path, it’s good for my own bottom line: happiness.
The Conversation
Folks, this was long and it took me a bit longer to compose. I hope you found the effort and delay worthwhile. As always, your feedback is welcome. Feel free to reply to this email. And, naturally, feel free and encouraged to share this newsletter as you see fit.
I hope you and yours are well.
Take care,
Michael
We are presently in the midst of another wave of free speech/cancellation discourse, this one prompted by an open letter published in Harper’s warning against a rising tide of illiberal constraints on free expression.
While debates about free speech are as old as the idea of free speech, a case could be made that they have taken on a different character in recent years. This may be a matter of frequency and intensity, but I suspect that the nature of the debate has shifted substantively as well. It seems that more recent clashes have less to do with specific applications of the principle than with the relative merits of the principle itself.
If so, it should not come as a surprise. When the technological infrastructure sustaining public speech is radically altered, so too is the experience and meaning of speech. Because this debate is framed by the conditions of the Database—the superabundant, practically infinite assemblage of data in our externalized collective memory, otherwise known as the internet—it is nearly impossible to navigate through every continuously unfolding aspect of even a seemingly narrow and contained instance like the Harper’s letter. So I won’t even make the attempt. Instead, I’m going to take a path that has been less frequently trod by examining a handful of underlying dynamics driving the controversy.
To be clear, I’m not suggesting that I can explain the causes of the debate; they are many and complex. Much less do I aim to settle the debate one way or the other. In fact, if I’m right, the debate can’t properly be settled at all. Rather, I aim to understand the deeper material conditions that generate the context for the debate.
My overarching thesis regarding free speech crisis discourse, including debates about “cancel culture,” can be put this way: this is what you get when the word is re-animated under the conditions of digital re-enchantment.
That’s a pretty jargon-heavy claim, so it obviously needs to be unpacked. I’ll start the process by distinguishing the two key theoretical components: the re-animated word, on the one hand, and digital re-enchantment on the other. These two distinct but related developments together generate the conditions driving our seemingly intractable and increasingly acrimonious free speech skirmishes.
By speaking of the re-animated word, I’m thinking in terms of media ecology and the basic premise is that we experience speech differently depending on the medium that bears it. Speech grounded in the face-to-face encounter is one thing. Speech inscribed in writing is another. And, more to the point for our purposes, print produces an experience of speech distinct from the experience of speech generated by digital media. It is in this difference that we find the root of our present re-litigation of the nature and value of free speech. Our previously regnant ideals regarding freedom of speech arose in the context of print culture and they are now, for better and for worse, floundering in the context of digital media.
In short, writing, and especially print, renders the word seemingly inert and thing-like. It tames the word in a very specific sense: by removing it from the potentially volatile and emotionally laden context of the face-to-face encounter. The difference has less to do with the content of the printed word than with its phenomenology, or how we experience it. It is absolutely true that you can find all manner of vitriolic and combative speech in print, as is evidenced, for example, in the political pamphleteering of the early republic. But, experientially, it is one thing to encounter this content in written form at a temporal and spatial remove from the author, whose very significance becomes dubious, and it is another to encounter these words directly and immediately from the mouth of the speaker, whose personal significance is unavoidable. In other words, even the plausibility of the claim that you should challenge the ideas and not the person, for example, is sustained by the conditions of print.
Let’s reflect on this a bit further. What is a word? This is not a trick question or a sophomoric dorm room provocation. When you think of a word, what do you think of? I’d be willing to bet that if you were asked to think of the word “cat,” for example, you would almost certainly think of the three letters C-A-T (or whatever the equivalent might be in your native tongue). Now ask yourself: what would the answer to that question have been in the era before writing was invented? Clearly not a set of symbols.
When you think of a word as a set of letters, you’re thinking of the word as an inert, lifeless thing. Before the introduction of writing, the word was not a thing but an event. It was powerful and effected irreversible change. Nothing better illustrates these different attitudes to the word than when modern readers encounter the biblical narrative of Isaac and his two sons, Jacob and Esau. When Jacob deceives his father into conferring his spoken blessing on him rather than Esau, the eldest son, a modern reader is likely to ask, well, why not just take it back, they’re just words. But when the word is an event rather than a thing, you can’t simply take it back, just as you can’t undo an event.
Not surprisingly, then, modern free speech ideals are historically correlated with the emergence and internalization of print culture. Print encourages the notion that the content of ideas can easily be and, indeed, ought to be distinguished from the one who presents them. Print abstracts the act of communication from the lived experience of communicators and thus fosters the sense that words alone can do no harm. The well-known proverb about sticks and stones, for example, is most plausible in the context of print culture (and, in fact, seems to arise precisely in this context). The time and space necessary to the labor of communicating in print itself has a diminishing effect on the felt intensity of communication.
Digital media changes all of this. It places the word back into the heated context of relative immediacy. It is true, of course, that most digital communication still happens at a physical remove, but the temporal remove is collapsed, renewing a measure of immediacy to the act of speaking in public. Moreover, the word is reanimated in the sense that it becomes newly active: active and ephemeral on the screen, enlivened by image and audio, and active in its intensified emotional consequences. To speak into the digital public sphere is to potentially invite an immediate and intense assault not simply upon your ideas but upon you and your livelihood and well-being because, after all, you and your ideas and your words are now more tightly bound together.
There is a paradox here, though, that is worth noting. In one respect, one might just as easily say that digital speech does nothing, transpiring as it does in a hyperreal context. If the word once again takes on the aspect of an event, it may be more like a pseudo-event. If so, then this dynamic, too, threatens to escalate and intensify the character of online speech. The more powerless it appears as speech, the greater the temptation to ratchet up its intensity and escalate its hostile character.
So, then, the reanimated word is a different beast than the printed word. Consequently, when we internalize its dynamics, we’re likely to begin with a different set of assumptions about freedom of speech than those fostered by print culture.
The matter of digital re-enchantment is a bit more complex, but I’ll try to keep this relatively brief.
In the sociological tradition, modernity is characterized by the disenchantment of the world. This is a matter of serious debate, which I’ll sidestep here, but, needless to say, I think there is a good case that can still be made for the theory. The general idea is that in the modern world, we’re less likely to think the forest is populated by fairies, that magical amulets can ward off disease, that a relic can protect us on a journey, or that evil spirits can bring harm upon us. The enchanted world was also a locus of meaning and significance as opposed to the disenchanted modern world, which appears chiefly as raw material for our technological projects.
For my purposes, I’m especially interested in the way that philosopher Charles Taylor incorporates disenchantment theory into his account of modern selfhood. The enchanted world, in Taylor’s view, yielded the experience of a porous, and thus vulnerable, self. The disenchanted world yielded an experience of a buffered self, which was sealed off, as the term implies, from beneficent and malignant forces beyond its ken. The porous self depended upon the liturgical and ritual health of the social body for protection against such forces. Heresy was not merely an intellectual problem, but a ritual problem that compromised what we might think of, in these times, as herd immunity to magical and spiritual forces by introducing a dangerous contagion into the social body. The answer to this was not simply reasoned debate but expulsion or perhaps a fiery purgation.
Just as digital media reanimates the word, so too does it re-enchant the world, although in a very different specific sense. Taking Taylor’s model as a template, it reverses the conditions that sustained the plausibility of the buffered self. In the digitally re-enchanted world, as I wrote in a recent essay for The New Atlantis,
“we are newly aware of operating within a field of inscrutable forces over which we have little to no control. Though these forces may be benevolent, they are just as often malevolent, undermining our efforts and derailing our projects. We often experience digital technologies as determining our weal and woe, acting upon us independently of our control and without our understanding …
We are troubled not by spirits but by bots and opaque algorithmic processes, which alternately and capriciously curse or bless us. In the Digital City, individuals may be refused credit, passed over for job interviews, or denied welfare on the basis of systems built on digital data against which they have little to no recourse.
We are, in other words, vulnerable, and our autonomy is compromised by the lines of technologically distributed agency that intersect our will and desires.”
This means, then, that the experience of the self that emerges out of this technologically enchanted milieu more resembles the porous self of the previously enchanted world than the buffered self that corresponded to disenchanted modernity. And the newly porous self is more closely correlated to the virtues of communally regulated speech while the buffered self was more neatly aligned with the spirit of individualized free speech idealism.
There’s obviously a great deal more that could be said about free speech in digital contexts. All of the following deserve careful consideration: the scale and immediacy of consequences in terms of both actions elicited and retributions exacted, the shifting power differentials occasioned by digital media, the precarity of employment, the form of digital platforms designed to elicit passionate engagement and discourage thoughtful conversation, the presumption of bad faith engendered by the overtly performative character of communication on digital platforms, the collapsing of different communities with their distinctive codes for speech and conduct into one digital space, as well as the relative permanence of memory and the lack of obscurity generated by searchable databases.
That said, I think the deeper undercurrents shaping how we experience the word in relation to the self that I’ve outlined here play an important role in setting the stage for our free speech travails.
I write The Convivial Society on a patronage model. The main offerings remain public, but I welcome the support of those who find value in the work. Paid subscribers also get access to our newly launched reading groups. In any case, I’m toying with the platform’s features, so here’s a discount offer for any of you who care to kick in for a one-year subscription.
If you’re reading this and you’re not already on the mailing list, by all means please do sign up for the free plan and you’ll still get pretty much everything I write on here.
In addition to my scribblings here and elsewhere, I occasionally give talks about the role technology plays in our private and social lives. If there’s a Q&A time afterwards, one of the questions I’m most likely to get will be about how parents should regulate their kids’ use of digital devices. Sometimes the underlying anxiety and frustration is palpable.
For a long time, I was hesitant to address these sorts of questions because I wasn’t a parent myself, and I had enough good sense to know that it was best not to opine on how to raise children if you didn’t have some firsthand experience. Having now been a parent for nearly five years, I feel a bit less sheepish about addressing some of these questions, and, of course, the questions have also taken on a more personal and urgent quality.
I don’t think I’ve got this business figured out, of course. Far from it. But I have a few thoughts on the matter that might be helpful. And, honestly, while these will be framed by the question of children and technology, I think you’ll find the underlying principles more broadly applicable.
First, let’s get a few preliminary observations out of the way. Raising children can be a challenging, thankless, anxiety-ridden affair. Most of us are doing our best, often with limited resources and support. The last thing any parent needs is to be made to feel badly about one more ostensible failure or shortcoming on their part. This is especially true during a pandemic, which has radically restructured household arrangements and routines for parents and children both. So, please, do not hear any of what follows as anything more than one parent, given his own circumstances and aspirations, thinking out loud about these questions in the event that it proves helpful to other parents thinking through these same issues.
The following considerations are generally ordered from those claims that I think are pretty solid and broadly useful to those that stem a bit more idiosyncratically from my own perspective. And, no, in case you’re wondering, I don’t live up to these in my own practice as a parent, but I still aspire to their fuller realization as the vagaries of life allow. Finally, these are, for me, certainly not rules to be followed, but ideals to be daily negotiated in the trenches. Comments are open to all, and I’d be happy to read your own thoughts on these matters.
* Resist technocratic models of what it means to raise a child. In my experience, parents are almost always looking for concrete and practical advice to follow, which is the kind I’m least likely to offer. Not because I like to be facetious, but because I think it’s important to recognize how questions about how much screen time is too much, for example, actually hint at a more subtle consequence of the technological framing of the task of raising children. In other words, while we focus on specific devices in our children’s lives, we sometimes miss the technocratic spirit we are tempted to bring to the task of raising children. This spirit was captured rather well a few years back by Alison Gopnik, who distinguished between two kinds of parents: carpenters and gardeners. Gopnik has a rather specific set of anxious middle-class parents in view, but the distinction she offers is useful nonetheless. In the carpenter model, parents tend to view raising children as an engineering problem in which the trick is to apply the right techniques in order to achieve the optimal results. In this view, “parenting” is something you do. It is work. And the point of the work is to manufacture a child to certain specifications as if the child herself were simply a bit of raw, unformed material. In the gardening model, parents do not conceive of their children as a lump of clay to be fashioned at will. The focus isn’t on “parenting” as an activity, but on being a parent as a relationship structured by love. While the carpenter by their skill achieves a level of mastery and control over the materials, the gardener recognizes that they cannot ultimately control what the seed will become; that much is given. They can only provide the conditions that will be most conducive to a plant’s flourishing. Of course, any discussion that starts with “There are two kinds of x” will undoubtedly have its limitations, but I think it’s useful to remember that we do not make our children, we receive them as gifts. Naturally, this does not alleviate us of our responsibilities toward them. Far from it. But it does change how we experience those responsibilities, and it does relieve us of a particular set of anxieties that inevitably accompany any project aimed at the mastery of recalcitrant reality. Parents have enough to worry about without also accepting the anxieties that stem from the assumption that we can perfectly control who our children will become by the proper application of various techniques.
* Resist a reactionary approach to technology. In this arena, but maybe as a general rule, it’s better to let your choices flow from what you are for rather than what you are against. In other words, when thinking about something like children and smartphones, say, it’s better to imagine yourself working toward particular goods you would like to see materialize in your child’s life than simply proscribing the use of smartphones out of some justifiable but murky apprehension. Don’t get me wrong, it’s not that there’s no such thing as “too much time on a smartphone,” it’s just that figuring out what that means can’t happen in abstraction from a larger vision of what is good. “Too much” implies a relative standard. Relative to what, then? Is there also “too little”? What would “just right” look like? I don’t believe it’s possible to answer those questions, or questions like them, in the abstract. The point is to ask yourself what are the goods you desire for your children and your family. With those clearly in view, you can then think more deliberately about how certain tools and devices move you toward the good or undermine its realization. Of course, implicit in this is the assumption that we will have some fairly clear sense of what we’re for, as well as a decent grasp of how our tools can become morally and intellectually formative (or de-formative). Infants and toddlers won’t be able to deliberate about such matters with you, but my sense is that the sooner you’re able to bring children into some meaningful conversation about this kind of thing the better. Invite them to pursue the good and teach them by example to subject their use of any tool or device to that higher end. In this way we can inoculate them against one of the most pervasive disorders of a technological society, the temptation to make technology an end in itself.
* Resist technologies that erode the space for childhood. I’m a fan of Neil Postman, but I tend to have a few more quibbles than usual with his 1982 book, The Disappearance of Childhood, in which Postman argues that childhood (and adulthood) as it had been imagined in modern western societies was tied to print culture. Consequently, as print culture fades in the face of electronic media, Postman argued, its attendant models of childhood (and adulthood) fade, too. Quibbles aside, I think there’s something to the claim that certain techno-social configurations generate different experiences of childhood. It also seems that the experiences of and boundaries separating childhood, adolescence, and adulthood have been in flux. I sometimes talk about this in terms of what I’ve called the professionalization of childhood and the infantilization of adulthood. The professionalization of childhood is related to the technocratic modes of parenting for safety and optimization discussed above. It’s evident in the amount of resources, time, and expertise that, in certain segments of society, are often brought to bear upon every aspect of a child’s life. So we do well to think about the qualities and experiences that constitute a desirable childhood, one that neither rushes children toward the responsibilities, pressures, and anxieties of adulthood nor fails to adequately prepare them for such. It’s a delicate balance to be sure, but it seems to me that children must be allowed to be children if they are then to grow into a reasonably mature and stable adulthood. Along these lines, let me quote at length from Robert Pogue Harrison:
“It may appear as if the world now belongs mostly to the younger generations, with their idiosyncratic mindsets and technological gadgetry, yet in truth, the age as a whole, whether wittingly or not, deprives the young of what youth needs most if it hopes to flourish. It deprives them of idleness, shelter, and solitude, which are the generative sources of identity formation, not to mention the creative imagination. It deprives them of spontaneity, wonder, and the freedom to fail. It deprives them of the ability to form images with their eyes closed, hence to think beyond the sorcery of the movie, television, or computer screen. It deprives them of an expansive and embodied relation to nature, without which a sense of connection to the universe is impossible and life remains essentially meaningless. It deprives them of continuity with the past, whose future they will soon be called on to forge.” I realize Harrison makes a number of sweeping claims in those few lines. I’m not suggesting we accept them at face value, but I am suggesting that they’re worth contemplating, especially with a view to the role of technology in these dynamics.
* Resist technologically mediated liturgies of consumption. I probably take inordinate umbrage at the little carts with a “Shopper in Training” flag at Trader Joe’s, but, honestly, that “Shopper in Training” thing really irks me. Many of the challenges presented by digital technologies stem from their participation in already existing socio-economic patterns of endless consumption and the effort to initiate children into these same patterns. To whatever degree the use of a certain technology amounts to participation in a liturgy of endless consumption, I would think twice about adopting it. This one is tough, I admit. But at the very least, a balance of sorts ought to be struck between such liturgies of consumption and activities and technologies of preservation and production, however simple or rudimentary.
* Be skeptical of running unprecedented social experiments on children. While the social scientific data is still being gathered, analyzed, and debated, it is evident that we are running a society-wide experiment on our children by immersing them in a world of digital devices without any clear sense of the long-term consequences. Whether we’re talking about ubiquitous visual stimulation, unrelenting documentation, networks of monitoring and surveillance from infancy to adolescence, or offloading our care of children to AI assistants, for example, I’m not keen on thoughtlessly submitting children to this experiment. There’s no need to be alarmist here, although sometimes alarm may not be altogether unreasonable, but we should be judiciously skeptical and cautious. In practice this means being a bit suspicious about the panoply of devices and tools we introduce into our children’s experience, even from the earliest days of their life. And don’t forget to consider not only the tools that mediate your child’s experience, but also those that mediate your experience of being a parent.
* Embrace limits. If you’ve been reading my stuff for any length of time, you know this is a principle that is near and dear to my own understanding of human flourishing. In short, I think we do well to respect certain limits implicit in our embodied status as creatures in a material world. I tend to think it is good for our minds and our bodies when we don’t flagrantly disregard foundational rhythms associated with our earth-bound existence. Chiefly, this amounts to finding ways to better order our experience of time and place and human relationships. In the modern world, of course, we tend to experience limits as taunts inviting their own transgression. This is, in my view, a destructive dead end. Better to see things as Wendell Berry puts it: “[O]ur human and earthly limits, properly understood, are not confinements but rather inducements to formal elaboration and elegance, to fullness of relationship and meaning. Perhaps our most serious cultural loss in recent centuries is the knowledge that some things, though limited, are inexhaustible. For example, an ecosystem, even that of a working forest or farm, so long as it remains ecologically intact, is inexhaustible. A small place, as I know from my own experience, can provide opportunities of work and learning, and a fund of beauty, solace, and pleasure — in addition to its difficulties — that cannot be exhausted in a lifetime or in generations.” Consequently, I hope both to demonstrate and to convey to my own children a way of being with technology that resists the temptations of a self-defeating pursuit of limitlessness and a willingness to receive time as a gift rather than an enemy to be defeated.
* Embrace convivial tools. Needless to say, none of this is about being anti-technology. Rather, it’s about being judicious in our introduction of technology to our children. So if we are thinking about what tools or technologies to invite into the life of a family, Ivan Illich’s concept of convivial tools gives us a good guide. “I choose the term ‘conviviality,’” Illich wrote, “to designate the opposite of industrial productivity. I intend it to mean autonomous and creative intercourse among persons, and the intercourse of persons with their environment; and this in contrast with the conditioned response of persons to the demands made upon them by others, and by a man-made environment. I consider conviviality to be individual freedom realized in personal interdependence and, as such, an intrinsic ethical value.” Elsewhere, Illich writes, “Convivial tools are those which give each person who uses them the greatest opportunity to enrich the environment with the fruits of his or her vision. Industrial tools deny this possibility to those who use them and they allow their designers to determine the meaning and expectations of others.” Albert Borgmann’s focal things and focal practices would work just as well here. The point is to embrace tools that generate a deep, skillful, and satisfying engagement with the world, tools which also sustain a substantive experience of community, belonging, and membership.
* Cultivate wonder. Wonder at the world that is remains an indispensable feature of childhood that adults should fight to preserve. The best way I know to do this is simply to attend lovingly to the world on the assumption that it has something of value to disclose to us and a reservoir of beauty to enrich our lives. As I’ve mentioned recently, attention is one of our most precious resources and we should do what we can to help our children become good stewards of this resource. So, I encourage myself and my children to look, to listen, to smell, to taste, to touch. I want them, just as I want myself, to cultivate a capacity for literally care-ful attention, an attentiveness that stems from a deep care for the world and those we share it with.
* Tell stories, read poetry. Take this one as a kind of added bonus. Good stories and poems do more than convey “content.” By their form, they embody, sustain, elicit, and encourage the very habits and virtues discussed above. To go a step further, I’d add: memorize poetry.
Fin. I hope you found this useful. Again, I welcome your own thoughts, critical or otherwise, on these matters. I’m ready to learn from what you’ve discovered in your own experience.
“Never has the individual been so completely delivered up to a blind collectivity, and never have men been less capable, not only of subordinating their actions to their thoughts, but even of thinking. Such terms as oppressors and oppressed, the idea of classes—all that sort of thing is near to losing all meaning, so obvious are the impotence and distress of all men in face of the social machine, which has become a machine for breaking hearts and crushing spirits, a machine for manufacturing irresponsibility, stupidity, corruption, slackness and, above all, dizziness. The reason for this painful state of affairs is perfectly clear. We are living in a world in which nothing is made to man’s measure; there exists a monstrous discrepancy between man’s body, man’s mind and the things which at the present time constitute the elements of human existence; everything is in disequilibrium.” — Simone Weil, “Oppression and Liberty” (1955)
Programming note: On the recommendation of a friend, I’m experimenting with Substack’s podcast tool. You’ll note that nothing has changed with regards to the content, except that you now have the option to listen to the main essay should that prove more convenient. If you hit play above you’ll be taken to the webpage for the newsletter from which the audio will play. Nothing fancy, just me reading the text. If you have any thoughts on this, feel free to pass them along.
In the wake of the American failure to contain or manage COVID-19, I’ve begun to encounter the recurring refrain, “We’re going to have to learn how to live with this virus.” The tone may be indignant, exasperated, defiant, but the general point is the same: the virus is with us for the foreseeable future and people need to figure out how best to get on with their lives.
Regrettably, this is probably correct. A web of interconnected failures, stemming from the highest levels of government down to individual citizens, has more or less assured this outcome. We can hope for a vaccine to arrive sooner rather than later. We can hope for better treatment options. We can hope the virus unexpectedly fizzles out, “despite ourselves” as Zeynep Tufekci recently put it. But, as she added, hope is not a plan, and we’re more than likely stuck with COVID for at least another year.
But that’s not what I’m going to talk about here. Rather, I want to begin by discussing how this sentiment, “We’re going to have to learn how to live with this virus,” suddenly struck me as a useful way of framing an approach to the personal, social, and global challenges posed by the present configuration of digital society—challenges to the conduct of our everyday lives, to the fabric of our communities, and to the political and economic order.
So here’s the thing, we’re going to have to learn how to live with digital technology. We can hope for legislative action and regulation. We can hope for a radical transformation of the industry stemming from a labor insurgency at tech companies. We can hope that a renewed focus on humane technology may bear fruit in the long run. We can hope that digital technology, despite ourselves, doesn’t (further) accelerate the corruption of the political and social order. But hope is not a plan, and we’re more than likely stuck with the existing techno-social configuration of digital technology for the foreseeable future.
Don’t get me wrong. Just as I support efforts to develop a vaccine, discover therapeutic options, or restore governmental leadership to manage COVID-19, so too do I find merit in the various efforts I mentioned above to better navigate the social consequences of digital technology. But in the same way that I cannot simply hope and do nothing with regards to COVID-19, so I cannot simply hope for these various measures regarding digital technology to materialize and do nothing myself in the meantime.
For one thing, I am personally ill-positioned to do very much of consequence with regards to efforts either to develop anti-viral therapies, for example, or to draft legislation to regulate the tech industry. It’s not that I can’t do anything at all, of course. I can donate to organizations supporting vaccine research and I can contact my representative. But these actions will not help me today or tomorrow or next month.
Over the last few years, I found myself occasionally writing in defense of a multi-faceted response to the challenges of digital technology. Chiefly, this amounted to a defense of individualized efforts to address such challenges from those who insisted that such efforts were unnecessary, on the one hand, and, on the other, from those who believed them to be inadequate and perhaps even counter-productive. I readily granted that individualized action alone was insufficient to the full range and scope of the challenges in view, and I granted, too, that we should resist a consumerist framing of the problem in which better informed, ethical consumption would be the answer to our problems. But I was baffled by those who in their defense of collective and political action seemed bent on discrediting individualized or even localized action.
It now seems to me that COVID-19 presents an opportunity to make an instructive variation of the case I sought to make in these instances. The health threat is collective and it requires all manner of responses in order to be met, and some of those responses materialize at the level of individual or household choices. Precisely because of the interdependent nature of human society, not despite it, we are urged to act responsibly with a view not only to our own health but to the health of our neighbors and our community. Our membership in a community of mutual inter-dependencies does not diminish the need for personal responsibility; it heightens it.
Consider, too, how the same veiled distribution of consequences plagues our response to the virus and to the various manifestations of digital infrastructure. I must think of the virus not only as a threat to me, which I may be free to discount, but as a threat to others through me. Likewise, I must think of certain digital technologies in light of the unequally distributed consequences to which my personal choices may contribute. Perhaps I have no reason to fear any adverse effects from my adoption of a front porch Ring camera, but I must be able to imagine how the widespread adoption of this technology will have adverse effects for already marginalized members of the community and how it further depletes the fund of communal trust.
So here is the paradox: Certain digital technologies should be resisted not merely for their personal consequences, which may be negligible for certain individuals, but for their collective consequences. But for this reason, I should not simply wait for collective action, I should personally resist these tools in order to mitigate their deleterious consequences locally. Will my resistance alone solve the challenge posed by these tools? Obviously not. Should that keep me from doing what I can to confront the problem? Again, in my view, obviously not. Similarly, will my wearing a mask make the coronavirus disappear? No. But should that keep me from wearing one? No, again.
I’m reminded of Solzhenitsyn’s rule for the common citizen seeking to live with integrity in a repressive regime: “Let the lie come into the world, let it even triumph. But not through me.” He thought the artist could do more, but this much at least the average person could pledge to do.
Ivan Illich’s discussion of the question of public versus private ownership of industrial technology is also instructive. “It is equally distracting,” Illich wrote in Tools for Conviviality,
to suggest that the present frustration is primarily due to the private ownership of the means of production, and that the public ownership of these same factories under the tutelage of a planning board could protect the interest of the majority and lead society to an equally shared abundance. As long as Ford Motor Company can be condemned simply because it makes Ford rich, the illusion is bolstered that the same factory could make the public rich. As long as people believe that the public can profit from cars, they will not condemn Ford for making cars. The issue at hand is not the juridical ownership of tools, but rather the discovery of the characteristic of some tools which make it impossible for anybody to ‘own’ them. The concept of ownership cannot be applied to a tool that cannot be controlled.
Now substitute Mark Zuckerberg or Jack Dorsey or Jeff Bezos for Henry Ford. The illusion to be combatted is that the tool itself is not at least part of the problem and that if it were only managed more ethically or regulated more effectively we could retain the benefits it confers while sidestepping its ills. What may be harder to countenance is the possibility that the tool itself may be destructive and corrosive of society, that its ills are essential to its nature rather than accidental.
So where does this leave us? It leaves us in the position of having to figure out how we are going to live with digital technologies rather than simply waiting for resolutions we cannot effect and which may or may not materialize.
To put this another way, yes, the most important problems we face are far greater than you or me. Yes, they require ambitious collective action. But when that action is not forthcoming, then our response cannot be to do nothing at all.
So where to begin? There are any number of possible answers, and they will vary greatly depending on your own circumstances. But allow me to make a modest suggestion: begin with your attention, because it may be that everything else will flow from this.
Attention is something I’ve written about on numerous occasions, so I’m hesitant about taking up the theme again here. But it’s been a while, maybe two years, since last I wrote about it at any length, and I remain convinced that it’s a critical and urgent issue. So, as they say, hear me out.
I won’t comment on digital distractedness or social media platforms designed for compulsive engagement or the inability to get through a block of text without checking your smartphone 16 times or endless doomscrolling, as it’s now fashionable to call it (really just a new form of the old vice acedia), or our self-loathing tweets about the same. These matter only to the degree that we believe our attention ought to be directed toward something else, that in these instances it is somehow being misdirected or squandered. Attention, like freedom, is an instrumental and penultimate good, valuable to the degree that it unites us to a higher and substantive good. Perfect attention in the abstract, just as perfect freedom in the abstract, is at best mere potentiality. They are the conditions of human flourishing rather than its realization.
David Foster Wallace, who I realize has become a polarizing figure, was nonetheless right, in my view, to understand attention as constituting a form of freedom. “The really important kind of freedom,” Wallace claimed, “involves attention and awareness and discipline, and being able truly to care about other people and to sacrifice for them over and over in myriad petty, unsexy ways every day.” “That is real freedom,” he added, and I’m inclined to agree.
Freedom that’s worth a damn is the freedom to attend with care to what matters. “Effort is the currency of care,” as Evan Selinger so eloquently put it some time ago, and, I would add, the preeminent form such effort takes is attention. And, yes, of course, I’m going to quote Simone Weil again, “Attention is the rarest and purest form of generosity.” What is jeopardized when our capacity for attention is compromised and hijacked is not our ability to read through War and Peace but rather our ability to care for ourselves, our neighbors, and our world as we should.
This, then, at least gives us a useful heuristic by which we might think about attention. Does it feel to you as if you are free in the deployment of your attention throughout any given day? Allow me here to speak out of my own experience: I know that it often doesn’t feel that way to me. I frequently find myself attending to what I know I shouldn’t or unable to attend to what I should. This is not a function of external coercion, strictly speaking. I experience it chiefly as a failure of will, as a form of unfreedom stemming from a regime of conditioning to which I’ve submitted myself more or less willingly.
And I feel the loss. The loss of focus, yes. The loss of productivity, yes. But also the loss of the world and the loss of some version of myself to which I aspire.
I find myself needing constantly to ask, “What is worthy of my attention?” or, better, “What is worthy of my attention given what I claim to love, what I aim to accomplish, and who I hope to become?” If by our attention we grant the object of our attention some non-trivial power over the shape of our thoughts, feelings, and actions, then this may be one of the most important questions we can ask ourselves.
Several years ago, reflecting on this very matter, I wrote about the need for what I then called attentional austerity. Austerity is not a warm or appealing concept, of course. But once again, Illich can help us better frame the matter. “‘Austerity,’” he writes, “has also been degraded and has acquired a bitter taste, while for Aristotle or Aquinas it marked the foundation of friendship.” “In the Summa Theologica,” Illich continued, “Thomas deals with disciplined and creative playfulness … [defining] ‘austerity’ as a virtue which does not exclude all enjoyments, but only those which are distracting from or destructive of personal relatedness.” “For Thomas,” Illich concluded, “‘austerity’ is a complementary part of a more embracing virtue, which he calls friendship or joyfulness. It is the fruit of an apprehension that things or tools could destroy rather than enhance [graceful playfulness] in personal relations.”
From this perspective, then, austerity becomes not a deprivation but a virtue in service of a greater good and a higher joy, a virtue we do well to recover.
As we draw to a close, I want to add that it is not only a matter of consciously and austerely ordering my attention toward some greater good, of wresting it back from an environment that has become an elaborate Skinner box; it is also good for me to cultivate a form of expectant attentiveness to what is, a form of attention that commits itself to seeing the world before me.
The Polish poet Czeslaw Milosz once observed that “In ancient China and Japan subject and object were understood not as categories of opposition but of identification.” “This is probably the source,” he speculated, “of the profoundly respectful descriptions of what surrounds us, of flowers, trees, landscapes, for the things we can see are somehow a part of ourselves, but only by virtue of being themselves and preserving their suchness, to use a Zen Buddhist term.”
Further on in the same essay he wrote about the wonder that arises when, as he put it, “contemplating a tree or a rock or a man, we suddenly comprehend that it is, even though it might not have been.” This kind of wonder, a wonder at the givenness of things, the sheer gratuity of existence, is perhaps its own reward as well as the gateway to the love of wisdom, as the ancient philosophers believed.
I hear in Milosz’s words an invitation, an invitation to step away as I am able from the patterns of digitally mediated reality, which, while not without its modest if diminishing satisfactions, can overwhelm other crucial modes of perception and being.
The question of attention in the age of digital media may ultimately come down to the question of limits, the acceptance of which may be the condition of a more enduring joy and satisfying life. What digital media promises on the other hand is an experience of limitlessness exemplified by the infinite scroll. It tempts us to become gluttons of the hyperreal. There is always more, and much of it may even seem urgent and critical. But we cannot attend to it all, nor should we. I know this, of course, but I need to remind myself more frequently than I’d care to admit.
Pine Trees, Hasegawa Tōhaku, c. 1595
News and Resources
* Ten years after publishing The Shallows, Nicholas Carr talks to Ezra Klein about the book and its enduring relevance.
* In “How We Lost Our Attention,” Matthew Crawford explored how our understanding of attention was shaped by early modern philosophical polemics in epistemology and political theory.
* A few years back, Alan Jacobs presented 79 theses with commentary on the subject of attention and digital technology. They will repay whatever time you can give them; there is much here to spur thought and reflection. I was delighted to be part of a colloquium in which Jacobs presented these reflections and to offer a response, which you may find here.
* This essay probably could’ve been about half as long, but it includes some interesting reflections on the rise of closed group chats: “As Facebook, Twitter and Instagram become increasingly theatrical – every gesture geared to impress an audience or deflect criticism – WhatsApp has become a sanctuary from a confusing and untrustworthy world, where users can speak more frankly. As trust in groups grows, so it is withdrawn from public institutions and officials. A new common sense develops, founded on instinctive suspicion towards the world beyond the group.”
* I recently stumbled on two essays by Bruno Maçães, who was Portugal’s secretary of state for European affairs from 2013 to 2015. The first, “The Attack of the Civilization State,” appeared in Noēma, a recently launched journal from the Berggruen Institute. The second, “The Great Pause Was an Economic Revolution,” appeared in Foreign Policy. I found both to be stimulating and I may have more to say about both of them in the future, although I confess they are outside my own areas of relative expertise. This passage in the former essay caught my attention: “Western civilization was to be a civilization like no other. Properly speaking, it was not to be a civilization at all but something closer to an operating system … Its principles were meant to be broad and formal, no more than an abstract framework within which different cultural possibilities could be explored … Tolerance and democracy do not tell you how to live — they establish procedures, according to which those big questions may later be decided.” This recalled some of what I attempted to articulate in a 2017 post: “I will put it this way: liberal democracy is a ‘machine’ for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good. It was machine-like in its promised objectivity and efficiency.”
* A look back at the advent of the Walkman: “The Walkman instantly entrenched itself in daily life as a convenient personal music-delivery device; within a few years of its global launch, it emerged as a status symbol and fashion statement in and of itself. ‘We just got back from Paris and everybody’s wearing them,’ Andy Warhol enthused to the Post. Boutiques like Bloomingdale’s had months-long waiting lists of eager customers. Paul Simon ostentatiously wore his onstage at the 1981 Grammys; by Christmas, they were de-rigueur celebrity gifts, with leading lights like Donna Summer dispensing them by the dozens. There had been popular electronic gadgets before, such as the pocket-sized transistor radios of the fifties, sixties, and seventies. But the Walkman was in another league.”
* Not the usual sort of link here, but I appreciated this essay about the enduring insights of the ancient Roman historian Tacitus, “Thrones Wreathed in Shadow: Tacitus and the Psychology of Authoritarianism”: “Shame, guilt, a lingering sense of powerlessness, and self-loathing: These are all emotions common to individuals living under tyranny. And, for all his literary brilliance and psychological acumen, Tacitus is no exception to this rule. In The Annals, when the historian describes the soul of a tyrant such as Tiberius, which he poetically envisions as crisscrossed with deep ‘lacerations’ and ‘wounds,’ he projects this state of invisible scarification onto Roman society as a whole. Indeed, the historian’s genius lies in his demonstration of how authoritarianism is, first and foremost, a collective malady — one that infects almost everyone, from the maniacal tyrant to the stolid local official, anonymous informer, or jeering spectator at the local theater. As Tacitus notes in a moving passage of The Annals, ‘the ties of our common humanity had been dissolved by the force of terror; and the rising surge of brutality drove compassion away.’”
Re-framings
— From a review of two recent books about walking, In Praise of Walking and In Praise of Paths:
I am a city walker, which is to say I walk to root myself. I define my neighborhood by walking, both its boundaries and my place within them, my connection to community. Even in the middle of a lockdown, I am out most mornings, to get exercise, yes, but also to remind myself of where I am. This is the hard part — to pay attention, to remain in the present, to look outward as well as inward, now from behind the forbidding filter of my face mask, while recognizing, as Torbjorn Ekelund reflects in “In Praise of Paths: Walking Through Time and Nature,” that “the path is order in chaos.”
— Talk of paths recalled an old post in which I reflected on the way of the tourist and the way of the pilgrim as paradigmatic modes of experience:
The way of the tourist is to consume; the way of the pilgrim is to be consumed.* To the tourist, the journey is a means. The pilgrim understands that it is both a means and an end in itself. The tourist and the pilgrim experience time differently. For the former, time is the foe that gives consumption its urgency. For the latter, time is a gift in which the possibility of the journey is actualized. Or better, for the pilgrim time is already surrendered to the journey that, sooner or later, will come to its end. The tourist bends the place to the shape of the self. The pilgrim is bent to the shape of the journey.
— From Richard Thomas’s “From Porch to Patio” (1975). As you read this, consider what an interesting coda the advent of Ring makes to this argument:
“In this transition from porch to patio there is an irony. Nineteenth-century families were expected to be public and fought to achieve their privacy. Part of the sense of community that often characterized the nineteenth-century village resulted from the forms of social interaction that the porch facilitated. Twentieth-century man has achieved the sense of privacy in his patio, but in doing so he has lost part of his public nature which is essential to strong attachments and a deep sense of belonging or feelings of community.”
The Conversation
We’re halfway through the year now. What a year. I could not have imagined, for one thing, that I was going to spend so much time commenting on a virus. But I also did not foresee this newsletter growing quite the way it has, or that so many of you would support the work. So thanks. Thanks for reading. Thanks for letting others know about the newsletter.
Take care,
Michael
P.S. Recent subscribers may be interested in a collection of essays I put together at the end of last year. It’s free to download, though you can pay something for it if you like.
Hello everyone,
First off, that was a great first thread on Tools for Conviviality.
Secondly, for session two, we’ll have our synchronous discussion tomorrow night (Monday, June 29th) at 8 PM Eastern. I’ll post another thread that should arrive in your inbox around 7:30 PM.
Thirdly, here is the audio of a conversation I was honored to have with Carl Mitcham, a philosopher of technology and a close associate of Ivan Illich’s going back to the 1970s. I should note that this is my first venture into podcast-style interview territory, which I readily admit is not exactly something I’m naturally gifted at. That said, Carl relates some great anecdotes about Illich and sheds a little light on some questions that arose in the first thread (e.g., Illich was a bit naive and ill-informed in his praise of China in Tools). Carl closes with some reflections on his own experience teaching a semester a year in China.
Anyway, enjoy that. Also, I’m thinking of offering the main newsletter essay in this same audio format (with accompanying text, of course). If you have any thoughts on that, let me know!
Looking forward to chatting with all who are able tomorrow evening,
Michael