Resident Theologian


Brad East

A.I., TikTok, and saying “I would prefer not to”

Finding wisdom in Bartleby for a tech-addled age.

Two technology pieces from last week have stuck with me.

Both were at The New York Times. The first was titled “How TikTok Changed America,” a sort of image/video essay about the platform’s popularity and influence in the U.S. The second was a podcast with Ezra Klein called “How Should I Be Using A.I. Right Now?,” an interview with Ethan Mollick.

To be clear, I skimmed the first and did not listen to the second; I only read Klein’s framing description for the pod (my emphases):

There’s something of a paradox that has defined my experience with artificial intelligence in this particular moment. It’s clear we’re witnessing the advent of a wildly powerful technology, one that could transform the economy and the way we think about art and creativity and the value of human work itself. At the same time, I can’t for the life of me figure out how to use it in my own day-to-day job.

So I wanted to understand what I’m missing and get some tips for how I could incorporate A.I. better into my life right now. And Ethan Mollick is the perfect guide…

This conversation covers the basics, including which chatbot to choose and techniques for how to get the most useful results. But the conversation goes far beyond that, too — to some of the strange, delightful and slightly unnerving ways that A.I. responds to us, and how you’ll get more out of any chatbot if you think of it as a relationship rather than a tool.

These two pieces brought to mind two things I’ve written recently about social media and digital technology more broadly. The first comes from my New Atlantis essay, published two years ago, reviewing Andy Crouch’s book The Life We’re Looking For (my emphases again):

What we need is a recommitment to public argument about purpose, both ours and that of our tools. What we need, further, is a recoupling of our beliefs about the one to our beliefs about the other. What we need, finally, is the resolve to make hard decisions about our technologies. If an invention does not serve the human good, then we should neither sell it nor use it, and we should make a public case against it. If we can’t do that — if we lack the will or fortitude to say, with Bartleby, We would prefer not to — then it is clear that we are no longer makers or users. We are being used and remade.

The other comes late in my Commonweal review, published last summer, of Tara Isabella Burton’s book Self Made:

It may feel to some of us that “everyone,” for example, is on Instagram. Only about 15 percent of the world is on the platform, however. That’s a lot of people. Yet the truth is that most of the world is not on it. The same goes for other social media. Influencer culture may be ubiquitous in the sense that most people between the ages of fifteen and thirty-five are affected by it in some way. But that’s a far cry from digitally mediated self-creation being a universal mandate.

Even for those of us on these apps, moreover, it’s possible to opt out. You don’t have to sell yourself on the internet. You really don’t. I would have liked Burton to show us why the dismal story she tells isn’t deterministic—why, for example, not every young woman is fated to sell her image on OnlyFans sooner or later.

The two relevant phrases from these essay reviews: You really don’t and Bartleby’s I would prefer not to. They are quite simply all you need in your toolkit for responding to new technologies like TikTok and generative A.I.

For example, the TikTok piece states that half of Americans are on the app. That’s a lot! Plenty to justify the NYT treatment. I don’t deny it. But do you know what that claim also means? That half of us aren’t on it. Fifty percent. One out of every two souls. Which is the more relevant statistic, then? Can I get a follow-up NYT essay about the half of us who not only aren’t tempted to download TikTok but actively reject it, can’t stand it, renounce it and all its pomp?

The piece goes further: “Even if you’ve never opened the app, you’ve lived in a culture that exists downstream of what happens there.” Again, I don’t deny it or doubt it. It’s true, to my chagrin. And yet, the power of such a claim is not quite what it seems at first glance.

The downstream influence of TikTok works primarily if and as one is also or instead an active user of other social media platforms (as well as, perhaps, cable news programs focused on politics and entertainment). I’m told you can’t get on YouTube or Instagram or Twitter or Facebook without encountering “imported” content from TikTok, or “local” content that’s just Meta or Google cribbing on TikTok. But what if, like me, you don’t have an account on any of these platforms? What if you abstain completely from all social media? And what if you don’t watch Fox News or MSNBC or CNN or entertainment shows or reality TV?

I was prepared, reading the NYT piece, to discover all the ways TikTok had invaded my life without my even realizing it. It turns out, though, that I don’t get my news from TikTok, or my movie recommendations, or my cooking recipes, or my fashion advice(!), or my politics, or my Swiftie hits, or my mental health self-diagnoses, or my water bottle, or my nightly entertainment before bed—or anything else. Nothing. Nada. Apparently I have been immune to the fifteen “hottest trends” on TikTok, the way it invaded “all of our lives.”

How? Not because I made it a daily goal to avoid TikTok. Not because I’m a digital ascetic living on a compound free of wireless internet, smart phones, streaming TV, and (most important) Gen Z kiddos. No, it’s because, and more or less only because, I’m not on social media. Turns out it isn’t hard to get away from this stuff. You just don’t download it. You just don’t create an account. If you don’t, you can live as if it doesn’t exist, because for all intents and purposes, for your actual life, it doesn’t.

As I said: You really don’t have to, because you can just say I would prefer not to. All told, that’s enough. It’s adequate all on its own. No one is forcing you to do anything.

Which brings us to Ezra Klein.

Sometimes Klein seems like he genuinely “gets” the scale of the threat, the nature of the digital monstrosity, the power of these devices to shape and rewire our brains and habits and hearts. Yet other times he sounds like just another tech bro who wants to maximize his digital efficiencies, to get ahead of the masses, to get a silicon leg up on the competition, to be as early an adopter as possible. I honestly don’t get it. Does he really believe the hype? Or does he not? At least someone like Tyler Cowen picks a lane. Come join the alarmist train, Ezra! There’s plenty of room! All aboard!

Seriously though, I’m trying to understand the mindset of a person who asks aloud with complete sincerity, “How should I incorporate A.I. into my life ‘better’?” It’s the “should” that gets me. Somehow this is simultaneously a social obligation and a moral duty. Whence the ought? Can someone draw a line for me from this particular “is” to Klein’s technological ought?

In any case, the question presumes at least two things. First, that prior to A.I. my life was somehow lacking. Second, that just because A.I. exists, I need to “find a place for it” in my daily habits.

But why? Why would we ever grant either of these premises?

My life wasn’t lacking anything before ChatGPT made its big splash. I wasn’t feeling an absence that Sam Altman could step in to fill. There is no Google-shaped hole in my heart. As a matter of fact, my life is already full enough: both in the happy sense that I have a fulfilling life and in the stressful sense that I have too much going on in my life. As John Mark Comer has rightly pointed out, the only way to have more of the former is through having less of the latter. Have more by having less; increase happiness by jettisoning junk, filler, hurry, hoarding, much-ness.

Am I really supposed to believe that A.I.—not to mention an A.I. duplicate of myself in order (hold gag reflex) to know myself more deeply (I said hold it!) in ways I couldn’t before—is not just one more damn thing to add to my already too-full life? That it holds the secrets of self-knowledge, maximal efficiency, work flow, work–life balance, relational intimacy, personal creativity, and labor productivity? Like, I’m supposed to type these words one after another and not snort laugh with derision but instead take them seriously, very seriously, pondering how my life was falling short until literally moments ago, when A.I. entered my life?

It goes without saying that, just because the technology exists, I don’t “need” to adopt or incorporate it into my life. There is no technological imperative, and if there were it wouldn’t be categorical. The mere existence of technology is neither self-justifying nor self-recommending. And must I add that devoting endless hours of time, energy, and attention to learning this latest invention, besides stealing those hours from other, infinitely more meaningful pursuits, will almost immediately be made redundant, given that the invention itself is nowhere near completion? Even if A.I. were going to improve daily individual human flourishing by a hundredfold, the best thing to do, right now, would be absolutely nothing. Give it another year or ten or fifty and they’ll iron out the kinks, I’m sure of it.

What this way of approaching A.I. has brought home to me is the unalterably religious dimension of technological innovation, and this in two respects. On one side, tech adepts and true believers approach innovation not only as one more glorious step in the march of progress but also as a kind of transcendent or spiritual moment in human growth. Hence the imperative. How should I incorporate this newfangled thing into my already tech-addled life? becomes not just a meaningful question but an urgent, obvious, and existential one.

On the other side, those of us who are members of actual religious traditions approach new technology with, at a minimum, an essentially skeptical eye. More to the point, we do not approach it expecting it to do anything for our actual well-being, in the sense of deep happiness or lasting satisfaction or final fulfillment or ultimate salvation. Technology can and does contribute to human flourishing but only in its earthly, temporal, or penultimate aspects. It has nothing to do with, cannot touch, never can and never will intersect with eternity, with the soul, with the Source and End of all things. Technology is not, in short, a means of communion with God. And for those of us (not all religious people, but many) who believe that God has himself already reached out to us, extending the promise and perhaps a partial taste of final beatitude, then it would never occur to us—it would present as laughably naive, foolish, silly, self-deceived, idolatrous—to suppose that some brand new man-made tool might fix what ails us; might right our wrongs; might make us happy, once and for all.

It’s this that’s at issue in the technological “ought”: the “religion of technology.” It’s why I can’t make heads or tails of stories or interviews like the ones I cited above. We belong to different religions. It may be that there are critical questions one can ask about mine. But at least I admit to belonging to one. And, if I’m being honest, mine has a defensible morality and metaphysics. If I weren’t a Christian, I’d rather be just about anything than a true-believing techno-optimist. Of all religions on offer today, it is surely the most self-evidently false.


Screentopia

A rant about the concern trolls who think the rest of us are too alarmist about children, screens, social media, and smartphones.

I’m grateful to Alan for writing this post so I didn’t have to. A few additional thoughts, though. (And by “a few thoughts” I mean rant imminent.)

Let me begin by giving a term to describe, not just smartphones or social media, but the entire ecosystem of the internet, ubiquitous screens, smartphones, and social media. We could call it Technopoly or the Matrix or just Digital. I’ll call it Screentopia. A place-that-is-no-place in which just about everything in our lives—friendship, education, finance, sex, news, entertainment, work, communication, worship—is mediated by omnipresent interlinked personal and public devices as well as screens of every size and type, through which we access the “all” of the aforementioned aspects of our common life.

Screentopia is an ecosystem, a habitat, an environment; it’s not one thing, and it didn’t arrive fully formed at a single point in time. It achieved a kind of comprehensive reach and maturity sometime in the last dozen years.

Like Alan, I’m utterly mystified by people who aren’t worried about this new social reality. Or who need the rest of us to calm down. Or who think the kids are all right. Or who think the kids aren’t all right, but nevertheless insist that the kids’ dis-ease has little to nothing to do with being born and raised in Screentopia. Or who must needs concern-troll those of us who are alarmed for being too alarmed; for ascribing monocausal agency to screens and smartphones when what we’re dealing with is complex, multicausal, inscrutable, and therefore impossible to fix. (The speed with which the writer adverts to “can’t roll back the clock” or “the toothpaste ain’t going back in the tube” is inversely proportional to how seriously you have to take him.)

After all, our concern troll asks insouciantly, aren’t we—shouldn’t we be—worried about other things, too? About low birth rates? And low marriage rates? And kids not playing outside? And kids presided over by low-flying helicopter parents? And kids not reading? And kids not dating or driving or experimenting with risky behaviors? And kids so sunk in lethargy that they can’t be bothered to do anything for themselves?

Well—yes! We should be worried about all that; we are worried about it. These aren’t independent phenomena about which we must parcel out percentages of our worry. It’s all interrelated! Nor is anyone—not one person—claiming a totality of causal explanatory power for the invention of the iPhone followed immediately by mass immiseration. Nor still is anyone denying that parents and teachers and schools and churches are the problem here. It’s not a “gotcha” to counter that kids don’t have an issue with phones, parents do. Yes! Duh! Exactly! We all do! Bonnie Kristian is absolutely right: parents want their elementary and middle school–aged kids to have smartphones; it’s them you have to convince, not the kids. We are the problem. We have to change. That’s literally what Haidt et al are saying. No one’s “blaming the kids.” We’re blaming what should have been the adults in the room—whether the board room, the PTA meeting, the faculty lounge, or the household. Having made a mistake in imposing this dystopia of screens on an unsuspecting generation, we would like, kindly and thank you please, to fix the problem we ourselves made (or, at least, woke up to, some of us, having not been given a vote at the time).

Here’s what I want to ask the tech concern trolls.

How many hours per day of private scrolling on a small glowing rectangle would concern you? How many hours per day indoors? How many hours per day on social media? How many hours per day on video games? How many pills to get to sleep? How many hours per night not sleeping? How many books per year not read? How many friends not made, how many driver’s licenses not acquired, how many dates and hangouts not held in person would finally raise a red flag?

Christopher Hitchens once wrote, “The North Korean state was born at about the same time that Nineteen Eighty-Four was published, and one could almost believe that the holy father of the state, Kim Il Sung, was given a copy of the novel and asked if he could make it work in practice.” A friend of mine says the same about our society and Brave New World. I expect people have read their Orwell. Have they read their Huxley, too? (And their Bradbury? And Walter M. Miller Jr.? And…?) Drugs and mindless entertainment to numb the emotions, babies engineered and produced in factories, sex and procreation absolutely severed, male and female locked in perpetual sedated combat, books either censored or an anachronistic bore, screens on every wall of one’s home featuring a kind of continuous interactive reality TV (as if Real Housewives, TikTok, and Zoom were combined into a single VR platform)—it’s all there. Is that the society we want? On purpose? It seems we’re bound for it like our lives depended on it. Indeed, we’re partway there already. “Alarmists” and “Luddites” are merely the ones who see the cliff’s edge ahead and are frantically pointing at it, trying to catch everyone’s attention.

But apparently everyone else is having too much fun. Who invited these killjoys along anyway?


All together now: social media is bad for reading

A brief screed about what we all know to be true: social media is bad for reading.

We don’t have to mince words. We don’t have to pretend. We don’t have to qualify our claims. We don’t have to worry about insulting the youths. We don’t have to keep mum until the latest data comes in.

Social media, in all its forms, is bad for reading.

It’s bad for reading habits, meaning when you’re on social media you’re not reading a book. It’s bad for reading attention, meaning it shrinks your ability to focus for sustained periods of time while reading. It’s bad for reading desires, meaning it makes the idea of sitting down with a book, away from screens and images and videos and sounds, seem dreadfully boring. It’s bad for reading style, meaning what literacy you retain while living on social media is trained to like all the wrong things and to seek more of the same. It’s bad for reading ends, meaning you’re less likely to read for pleasure and more likely to read for strictly utilitarian reasons (including, for example, promotional deals and influencer prizes and so on). It’s bad for reading reinforcement, meaning like begets like, and inserting social media into the feedback loop of reading means ever more of the former and ever less of the latter. It’s bad for reading learning, meaning your inability to focus on dense, lengthy reading is an educational handicap: you quite literally will know less as a result. It’s bad for reading horizons, meaning the scope of what you do read, if you read at all, will not stretch across continents, cultures, and centuries but will be limited to the here and now, (at most) the latest faux highbrow novel or self-help bilge promoted by the newest hip influencers; social media–inflected “reading” is definitionally myopic: anti-“diverse” on principle. Finally, social media is bad for reading imitation, meaning it is bad for writing, because reading good writing is the only sure path to learning to write well oneself. Every single writing tic learned from social media is bad, and you can spot all of them a mile away.

None of this is new. None of it is groundbreaking. None of it is rocket science. We all know it. Educators do. Academics do. Parents do. As do members of Gen Z. My students don’t defend themselves to me; they don’t stick up for digital nativity and the wisdom and character produced by TikTok or Instagram over reading books. I’ve had students who tell me, approaching graduation, that they have never read a single book for pleasure in their lives. Others have confessed that they found a way to avoid reading a book cover to cover entirely, even as they got B’s in high school and college. They’re not proud of this. Neither are they embarrassed. It just is what it is.

Those of us who see this and are concerned by it do not have to apologize for it. We don’t have to worry about being, or being accused of being, Luddites. We’re not making this up. We’re not shaking our canes at the kids on the lawn. We’re not ageist or classist or generation-ist or any other nonsensical application of actual prejudices.

The problem is real. It’s not the only one, but it’s pressing. Social media is bad in general, it’s certainly bad for young people, and it’s unquestionably, demonstrably, and devastatingly bad for reading.

The question is not whether it’s a problem. The question is what to do about it.


A tech-attitude taxonomy

A taxonomy of eleven different dispositions to technological development, especially in a digital age.

I’ve been reading Albert Borgmann lately, and in one essay he describes a set of thinkers he calls “optimistic pessimists” about technology. It got me thinking about how to delineate different positions and postures on technology, particularly digital technology, over the last century. I came up with eleven terms, the sixth one serving as a middle, “neutral” point with five on each side—growing in intensity as they get further from the center. Here they are:

  1. Hucksters: i.e., people who stand to profit from new technologies, or who work to spin and market them regardless of their detrimental effects on human flourishing.

  2. Apostles: i.e., true believers who announce the gospel of new technology to the unconvinced; they win converts by their true faith and honest enthusiasm; they sincerely believe that any and all developments in technology are good and to be welcomed as benefiting the human race in the short-, medium-, and long-term.

  3. Boosters: i.e., writers and journalists in media and academia who toe the line of the hucksters and apostles; they accuse critics and dissenters from the true faith of heresy or, worse, of being on the wrong side of history; they exist as cogs in the tech-evangelistic machine, though it’s never clear why they are so uncritical, since they are rarely either apostles or hucksters themselves.

  4. Optimists: i.e., ordinary people who understand and are sympathetic with thoughtful criticisms of new technologies but who, at the end of the day, passively trust in progress, in history’s forward march, and in the power of human can-do spirit to make things turn out right, including the challenges of technology; they adopt new technology as soon as it’s popular or affordable.

  5. Optimistic pessimists: i.e., trenchant and insightful critics of technopoly, or the culture wrought by technology, who nonetheless argue for and have confidence in the possibility of righting the ship (even of the ship righting itself); another term for this group is tech reformers.

  6. Naive neutrals: i.e., people who have never given a second thought to the challenges or perils of technology, are fundamentally incurious about them, and have no “position” to speak of regarding the topic; in practice they function like optimists or boosters, but lack the presence of considered beliefs on the subject.

  7. Pessimistic optimists: i.e., inevitabilists—this or that new technology may on net be worse for humanity, but there’s simply nothing to do about it; pushing back or writing criticism is for this group akin to a single individual blowing on a forest fire; technological change on this view is materialist and/or deterministic; at most, you try to see it for what it is and manage your own individual life as best you can; at the same time, there’s no reason to be Chicken Little, since this has always been humanity’s lot, and we always find a way to adapt and adjust.

  8. Pessimists: i.e., deep skeptics who see technological development in broadly negative terms, granting that not all of it is always bad in all its effects (e.g., medicine’s improvement of health, extension of life spans, and protection from disease); these folks are the last to adopt a new technology, usually with resentment or exasperation; they hate hucksters and boosters; they are not determinists—they think human society really can make social and political choices about technology ordered toward the common good—but know that determinism almost always wins in practice; their pessimism pushes them to see the downsides or tradeoffs even in the “best” technological developments.

  9. Doomsdayers: i.e., it’s all bad, all the time, and it’s clear as day to anyone with eyes to see and ears to hear; the internet is a bona fide harbinger of the apocalypse and A.I. is no-joke leading us to Skynet and the Matrix; the answer to new technology is always, therefore, a leonine Barthian Nein!; and any and all dissents and evidence to the contrary are only so much captivity to the Zeitgeist, heads stuck in the sand, paid-for shilling, or delusional “back to the land” Heideggerian nostalgia that is impossible to live out with integrity in a digital age.

  10. Opt-outers: i.e., agrarians and urban monastics in the spirit of Wendell Berry, Ivan Illich, and others who pursue life “off the grid” or at least off the internet; they may or may not be politically active, but more than anything they put their money where their mouth is: no TV or wireless internet in the home, no smart phone, no social media, and a life centered on hearth, earth, family, children, the local neighborhood, a farm or community garden, so on and so forth; they may be as critical as pessimists and doomsdayers, but they want to walk the walk, not just talk the talk, and most of all they don’t want the technopoly to dictate whether or not, in this, their one life, it can be a good one.

  11. Resisters: i.e., leaders and foot soldiers in the Butlerian Jihad, whether this be only in spirit or in actual social, material, and political terms (IRL, as they say).

Cards on the table: I’m dispositionally somewhere between #7 and #8, with occasional emotional outbursts of #9, but aspirationally and even once in a while actually a #10.


Quit social porn

Samuel James is right: the social internet is a form of pornography. That means Christians, at least, should get off—now.

In the introduction to his new book, Digital Liturgies: Rediscovering Christian Wisdom in an Online Age, Samuel James makes a startling claim: “The internet is a lot like pornography.” He makes sure the reader has read him right: “No, that’s not a typo. I did not mean to say that the internet contains a lot of pornography. I mean to say that the internet itself—i.e., its very nature—is like pornography. There’s something about it that is pornographic in its essence.”

Bingo. This is exactly right. But let’s take it one step further.

A few pages earlier, James distinguishes the internet in general from “the social internet.” That’s a broader term for what we usually refer to as “social media.” Think not only Facebook, Twitter, Instagram, TikTok, et al, but also YouTube, Slack, Pinterest, Snapchat, Tumblr, perhaps even LinkedIn or Reddit and similar sites. In effect, any online platform that (a) “connects” strangers through (b) public or semi-public personal profiles via (c) proprietary algorithms using (d) slot-machine reward mechanisms that reliably alter one’s (e) habits of attention and (f) fame, status, wealth, influence, or “brand.” Almost always such a platform also entails (g) the curation, upkeep, reiteration, and perpetual transformation of one’s visual image.

This is the social internet. James is right to compare it to pornography. But he doesn’t go far enough. It isn’t like pornography. It’s a mode of pornography.

The social internet is social porn.

By the end of the introduction, James pulls his punch. He doesn’t want his readers off the internet. Okay, fine. I’m on the internet too, obviously—though every second I’m not on it is a second of victory I’ve snatched from defeat. But yes, it’s hard to avoid the internet in 2023. We’ll let that stand for now.

There is no good reason, however, to be on the social internet. It’s porn, after all, as we just established. Christians, at least, have no excuse for using porn. So if James and I are right that the social internet isn’t just akin to pornography but is a species of it, then he and I and every other Christian we know who cares about these things should get off the social internet right now.

That means, as we saw above, any app, program, or platform that meets the definition I laid out. It means, at a minimum, deactivating and then deleting one’s accounts with Facebook, Twitter, Instagram, and TikTok—immediately. It then means thinking long and hard about whether one should be on any para-social platforms like YouTube or Pinterest or Slack. Some people use YouTube rarely and passively, to watch the occasional movie trailer or live band performance, say, or how-to videos to help fix things around the house. Granted, we shouldn’t be too worried about that. But what about people who use it the way my students use it—as an app on their phone with an auto-populated feed they scroll just like IG or TT? Or what about active users and influencers with their own channels?

Get off! That’s the answer. It’s porn, remember? And porn is bad.

I confess I have grown tired of all the excuses for staying on the social internet. Let me put that differently: I know plenty of people who do not share my judgment that the social internet is bad, much less a type of porn. In that case, we lack a shared premise. But many people accept the premise; they might even go so far as to affirm with me that the social internet is indeed a kind of porn: just as addictive, just as powerful, just as malformative, just as spiritually depleting, just as attentionally sapping. (Such claims are empirical, by the way; I don’t consider them arguable. But that’s for another day.) And yet most of the people I have in mind, who are some of the most well-read and up-to-date on the dangers and damages of digital media, continue not only to maintain their social internet accounts but use them actively and daily. Why?

I’m at a point where I think there simply are no more good excuses. Alan Jacobs remarked to me a few years back, when I was wavering on my Twitter usage, that the hellsite in question was the new Playboy. “I subscribe for the articles,” you say. I’m sure you do. That might play with folks unconcerned by the surrounding pictures. For Christians, though, the jig is up. You’re wading through waist-high toxic sludge for the occasional possible potential good. Quit it. Quit the social internet. Be done with it. For good.

Unlike Lot’s wife, you won’t look back. The flight from the Sodom of the social internet isn’t littered with pillars of salt. The path is free and clear, because everyone who leaves is so happy, so grateful, the only question they ask themselves is what took them so long to get out.


A decision tree for dealing with digital tech

Is the digital status quo good? If not, our actions (both personal and institutional) should show it.

Start with this question:

Do you believe that our, and especially young people’s, relationship to digital technology (=smartphones, screens, the internet, streaming, social media) is healthy, functional, and therefore good as is? Or unhealthy, dysfunctional, and therefore in need of immediate and drastic help?

If your answer is “healthy, functional, and good as is,” then worry yourself no more; the status quo is A-OK. If you answered otherwise, read on.

Now ask yourself this question:

Do the practices, policies, norms, and official statements of my institution—whether a family, a business, a university, or a church—(a) contribute to the technological problem, (b) maintain the digital status quo, or (c) interrupt, subvert, and cut against the dysfunctional relationship of the members of my institution to their devices and screens?

If your answer is (a) or (b) and yet you answered earlier that you believe our relationship to digital technology is in serious need of help, then you’ve got a problem on your hands. If your answer is (c), then well done.

Finally, ask yourself this:

How does my own life—the whole suite of my daily habits when no one’s looking, or rather, when everyone is looking (my spouse, my roommate, my children, my coworkers, my neighbors, my pastors, and so on)—reflect, model, and/or communicate my most basic beliefs about the digital status quo? Does the way I live show others that (a) I am aware of the problem, (b) chiefly within myself, and (c) am tirelessly laboring to respond to it, to amend my ways and solve the problem? Or does it evince the very opposite, so that my life and my words are unaligned and even contradictory?

At both the institutional and the personal level, it seems to me that answering these questions honestly and following them to their logical conclusions—not just in our minds or with our words but in concrete actions—would clarify much about the nature of our duties, demands, and decisions in this area of life.

Read More
Brad East Brad East

The tech-church show

A reflection on two issues raised by the recent viral clip of a prominent pastor lecturing his listeners not to treat public worship as a “show.”

A week or two ago a clip went viral of a prominent pastor lecturing his listeners, during his sermon, about treating Sunday morning worship like a show. I didn’t watch it, and I’m not going to comment about the pastor in question, whom I know nothing about. Here’s one write-up about it. The clip launched a thousand online Christian thinkpieces. A lot of hand-wringing about churches that put on worship as a show simultaneously wanting congregants not to see worship as a show.

Any reader of my work knows I couldn’t agree more. But I don’t want to pile on. I want to use the occasion to think more deeply about two issues it raises for the larger landscape of churches, public worship, and digital technology.

First: Should churches understand themselves to be sites of resistance against the digital status quo? That is, given their context, are churches in America called by God to be a “force for good” in relation to digital technology? And thus are they called to be a “force opposed” to the dominance of our lives—which means the lives of congregants as well as their nonbelieving neighbors—by digital devices, screens, and social media?

It seems to me that churches and church leaders are not clear about their answer to this question. In practice, their answer appears to be No. The digital status quo obtains outside the walls of the church and inside them. There is no “digital difference” when you walk inside a church—at least a standard, run-of-the-mill low-church, evangelical, or Protestant congregation. (The Orthodox have not yet been colonized by Digital, so far as I can tell. For Catholics it depends on the parish.)

In and of itself, this isn’t a problem, certainly not of consistency. If a church doesn’t think Digital’s dominion is a problem, then it’s only natural for Digital to reign within the church and not only without. You’d never expect such a church to be different on this score.

The problem arises when churches say they want to oppose believers’ digital habits, dysfunctions, and addictions while reproducing those very habits within the life of the church, above all in the liturgy. That’s a case of extreme cognitive dissonance. How could church leaders ever expect ordinary believers to learn from the church how to amend their digital lives when church leaders themselves, and the church’s public worship itself, merely model for believers their own bad habits? When, in other words, church members’ digital lives, disordered as they are, are simply mirrored back to them by the church and her pastors?

To be clear, I know more than a few Christians, including ministers, who don’t share my alarm at the reign of Digital in our common life. They wouldn’t exactly endorse spending four to eight hours (or more) per day staring at screens; they don’t deny the ills and errors of pornography and loss of attention span via social media and other platforms. But they see bigger fish to fry. And besides (as they are wont to say), “It’s here to stay. It’s a part of life. We can live in denial or incorporate its conveniences into church life. It’s inevitable either way.”

Personally, I think that’s a steaming pile of you-know-what. But at least it’s consistent. For anyone, however, who shares my alarm at the role of Digital in our common life—our own, our neighbors’, our children’s, our students’—then the inconsistency of the church on this topic is not only ludicrous but dangerous. It’s actively aiding and abetting the most significant problem facing us today while pretending otherwise. And you can’t have it both ways. Either it’s a problem and you face it head on; or it’s not, and you don’t.

Second: Here’s an exercise that’s useful in the classroom. It helps to get students thinking about the role of technology in the liturgy.

Ask yourself this question: Which forms and types of technology, and how much of them, could I remove from Sunday morning worship before it would become unworkable?

Another way to think about it would be to ask: What makes my church’s liturgy different, technologically speaking, from an instance of the church’s liturgy five hundred years ago?

Certain kinds of technology become evident immediately: electricity and HVAC, for starters. In my area, many church buildings would be impossible to worship in during a west Texas summer: no air and no light. They’d be little more than pitch-black ovens on the inside.

Start on the other end, though. Compare Sunday morning worship in your church today to just a few decades ago. Here are some concrete questions.

  • Could you go (could it “work”) without the use of smartphones?

  • What about video cameras?

  • What about spotlights and/or dimmers?

  • What about the internet?

  • What about screens?

  • What about computers?

  • What about a sound board?

  • What about electric amplification for musical instruments?

  • What about wireless mics?

  • What about microphones as such?

This list isn’t meant to prejudge whether any or all of these are “bad” or to be avoided in the liturgy. I’m happy to worship inside a building (technology) with A/C (technology) and electricity (technology)—not to mention with indoor plumbing available (also technology). Microphones make preaching audible to everyone, including those hard of hearing. And I’ve not even mentioned the most consequential technological invention for the church’s practice of worship: the automobile! Over the last century cars revolutionized the who and where and how and why of church membership and attendance. (In this Luddite’s opinion, clearly for the worse. Come at me.)

In any case, whatever one makes of these and similar developments, the foregoing exercise is meant to force us to reckon with technology’s presence in worship as both contingent and chosen. It is contingent because worship is possible without any/all of them. I’ve worshiped on a Sunday morning beneath a tree in rural east Africa. The people walked to get there. No A/C. No mics. No screens. No internet. Certainly no plumbing. Not that long ago in this very country, most of the technology taken for granted today in many churches did not even exist. So contingency is crucial to recognize here.

And because it is contingent, it is also chosen. No one imposed digital technology, or any other kind, on American churches. Their leaders implemented it. It does not matter whether they understood themselves to be making a decision or exercising authority. They were, whether they knew it or not and whether they liked it or not. It does not matter whether they even had a conversation about it. The choice was theirs, and they made it. The choice remains theirs. What has been done can be undone. No church has to stream, for example. Some never started. Others have stopped. It’s a choice, as I’ve written elsewhere. Church leaders should own it and take responsibility for it rather than assume it’s “out of their hands.”

Because the use and presence of digital technology in the church’s liturgy is neither necessary nor imposed—it is contingent and chosen—then the logical upshot is this: Church leaders who believe that digital technology is a clear and present danger to the well-being and faithfulness of disciples of Christ should act like it. They should identify, recognize, and articulate the threats and temptations of digital dysfunction in their lives and ours; they should formulate a vision for how the church can oppose this dysfunction, forcefully and explicitly; and they should find ways to enact this opposition, both negatively (by removing said dysfunction from within the church) and positively (by proposing and modeling alternative forms of life available to believers who want relief from their digital addictions).

What they should not do is say it’s a problem while avoiding dealing with it. What they should not do is leave the status quo as it is. What they should not do is accept Digital’s domination as inevitable—as somehow lying outside the sphere of the reign and power of Christ.

What they should not do is look the other way.

Read More
Brad East Brad East

Quitting the Big Five

Could you quit all the companies that make up Silicon Valley’s Big Five? How hard would it be to reduce your footprint to just one of them?

In a course I teach on digital tech and Christian practice, I walk through an exercise with students. I ask them to name the Big Five (or more) Silicon Valley companies that so powerfully define and delimit our digital lives. They can also name additional apps and platforms that take up time and space in their daily habits. I then ask them:

Supposing you continued to use digital technology—supposing, that is, you did not move onto a tech-free country ranch, unplugged from the internet and every kind of screen—how many of these Big Tech companies could you extract yourself from without serious loss? Put another way, what is the smallest number of such companies you need to live your life?

In my own life, I try to implement a modest version of this. I like to daydream, however, about a more radical version. Let me start with the former then turn to the latter.

In my own life, here’s my current entanglement with the Big Tech firms:

Meta: None whatsoever (I don’t have a Facebook or Instagram account), with the exception of WhatsApp, which is useful for international and other types of communication. Recently, though, I’ve been nudging those I talk to on WhatsApp to move to another app, so I could quit Zuckerberg altogether.

Microsoft: I use Word (a lot) and PowerPoint (some) and Excel (a bit). Though I’m used to all three, I could live without them—though I’d have two decades’ worth of Word files I’d need to archive and/or convert.

Google: I’ve had the same Gmail account for fifteen years, so it would be a real loss to give it up. I don’t use Google Maps or any of Google’s other smartphone apps. I use Google Docs (etc.) a bit, mostly when others want to collaborate; I avoid it, though, and would not miss it.

Amazon: I’m an Amazon originalist: I use it for books. We pay for Prime. We also use it to buy necessities and gifts for our kids and others. For years I threw my body in front of purchasing an Alexa until my household outvoted me just this summer. Alas.

Apple: Here’s where they get me. I have an iPhone and a MacBook, and I finally gave in and started backing up with a paid account on iCloud. I use iPhoto and Messages and FaceTime and the rest. I’m sure my household will acquire an iPad at some point. In a word, I’m Apple-integrated.

Others: I don’t have TikTok or any other social media accounts. My household has a family Spotify account. I personally use Instapaper, Freedom, and Marco Polo. I got Venmo this summer, but I lived without it for a decade, and could delete it tomorrow. I use Dropbox as well as another online storage business. We have various streaming platforms, but they’ve been dwindling of late; we could live with one or two.

Caveat: I’m aware that digital entanglement takes more than one form, i.e., whether or not I have an Amazon or Gmail or Microsoft (or IBM!) “account,” I’m invariably interacting with, using, and possibly paying for their servers and services in a variety of ways without my even knowing it. Again, that sort of entanglement is unavoidable absent the (Butlerian/Benedictine) move to the wireless ranch compound. But I wanted to acknowledge my awareness of this predicament at least.

Okay. So what would it look like to minimize my formal Big Five “footprint”?

So far as I can see, the answer is simple: Commit exclusively to one company for as many services as possible.

Now, this may be seriously unwise. Like a portfolio, one’s digital assets and services may be safest and best utilized when highly diversified. Moreover, it amounts to putting all one’s eggs in a single basket: what if that basket breaks? What if the one company you trust goes bust, or has its security compromised, or finds itself more loyal to another country’s interests than to one’s own, and so on and so forth?

All granted. This may be a foolish endeavor. That’s why I’m thinking out loud.

But supposing it’s not foolish, it seems to me that the simplest thing to do, in my case, would be to double down on Apple. Apple does hardware and software. They do online storage. They do TV and movies. They do music and podcasts. They’re interoperable. They have Maps and email and word processors and slideshows and the rest—or, if I preferred, I could always use third-party software for such needs (for example, I already use Firefox, not Safari or Chrome).

So what would it take, in my situation, to reduce my Big Tech footprint from five toes to three or two or even just one?

First, delete WhatsApp. Farewell, Meta!

Second, switch to Keynote and TextEdit (or Pages or Scrivener) and some unknown spreadsheet alternative, or whatever other programs folks prefer. Adios, Microsoft!

Third, download my Gmail archive and create a new, private, encrypted account with a trusted service. Turn to DuckDuckGo with questions. Turn to Apple for directions. Avoid YouTube like the plague. Adieu, Google!

Fourth, cancel Prime, ditch the Alexa, use local outlets for shopping, and order books from Bookshop.org or IndieBound.org or directly from publishers and authors. Get thee behind me, Bezos!

Fifth and finally, pray to the ghost of Steve Jobs for mercy and beneficence as I enter his kingdom, a humble and obedient subject—bound for life…

Whether or not it would be wise, could I seriously do this? I’m sort of amazed at how not implausible it sounds. The hardest thing would be leaving Microsoft Word behind, just because I’ve never used anything else, and I write a lot. The second hardest would be losing the speed, cheapness, and convenience of Amazon Prime for ordering books—but then, that’s the decision that would be best for my soul, and for authors, and for the publishing industry in general. As for life without Gmail, that would be good all around, which is why it’s the step I’m most likely to follow in the next few years.

In any case, it’s a useful exercise. “We” may “need” these corporations, at least if we want to keep living digital lives. But we don’t need all of them. We may even not need more than one.

Read More
Brad East Brad East

Tech bubble

From what I read online, I appear to live in a tech bubble: everyone’s addicted to it while knowing it’s bad. Are there really people who aren’t addicted? Are there really others who are addicted, but think it’s good?

Lately it’s occurred to me that I must live in an odd sort of tech bubble. It has two components.

On one hand, no one in my context (a medium-sized city in west Texas) lives in any way “free” from digital technology. Everyone has smartphones, laptops, tablets, and televisions with streaming apps. Most little kids have Kindles or iPads; most 10-to-12-year-olds have phones; nearly every middle schooler has a smartphone. Women are on Instagram and TikTok; men are on Twitter and YouTube. Boys of every age play video games (Switch, Xbox, PS5), including plenty of dads. Adults and kids alike are on their phones during church, during sporting events, during choir performances. Kids watch Disney+ and PBS Kids; parents watch Max and Netflix. Screens and apps, Amazon and Spotify, phones and tablets galore: this is just daily ordinary life. There are no radicals among us. No slices of life carved out. I don’t know anyone without a TV, much less without wireless internet. I don’t know anyone without a smartphone! Life in west Texas—and everywhere else I’m aware of, at least in the Bible Belt—is just like this. No dissenting communes. No screen-free spaces. I’m the campus weirdo for not permitting devices in my classroom, and doubly so for not using a Learning Management System. Nor am I some hard-edged radical. I’m currently typing on a MacBook, and when I leave my office, I’ll listen to an audiobook via my iPhone.

In other words, whenever anyone tells me that the world I’ve just described isn’t normal, isn’t typical, isn’t entrenched and established and nigh unavoidable—I think, “Okay, we simply live in different worlds. I’d like to come tour yours. I’ve not seen it with my own eyes before.” I’m open to being wrong. But I admit to some measure of skepticism. In a nation of 330 million souls, is it meaningful to point to this or that solitary digital experimenter as a site of resistance? And won’t they capitulate eventually anyway?

But maybe not. What do I know?

Here’s the other hand, though. Everyone I know, tech-addled and tech-saturated though they be, everyone agrees that digital technology and social media are a major problem, perhaps the most significant social challenge, facing all of us and especially young people today. No one thinks it’s “no big deal.” No one argues that their kids vegging out on video games all day does nothing to their brains. No one pretends that Instagram and TikTok and Twitter are good for developing adolescents. No one supposes that more screen time is better for anyone. They—we—all know it’s a problem. They—we—just aren’t sure what to do about it. And since it seems such an enormously complex and massive overarching matrix, by definition a systemic problem calling for systemic solutions, most everyone just keeps on with life as it is. A few of us try to do a little better: quantifying our kids’ screen time; deleting certain apps; resisting the siren song of smartphones for 12-year-olds. But those are drops in the bucket. No one disputes the nature or extent of the problem. It’s just that no one knows how to fix it; or at least no one has the resolve to be the one person, the one household, in a city of 120,000 to say No! to the whole shebang. And even if there were such a person or household, they’d be one of one. An extraordinary exception to the normative and unthreatened rule.

And yet. When I read online, I discover that there are people—apparently not insignificant in number?—who do not take for granted that the ubiquity and widespread use of social media, screens, and personal devices (by everyone, but certainly by young people) is a bad thing. In fact, these people rise in defense of Silicon Valley’s holy products, so much so that they accuse those of us worried about them of fostering a moral panic. Any and all evidence of the detrimental effects of teenagers being online four, six, eight hours per day is discounted in advance. It’s either bad data or bad politics. Until very recently I didn’t even realize, naive simpleton that I am, that worrying about these things was politicized. That apparently you out yourself as a reactionary if … checks notes … you aren’t perfectly aligned with the interests of trillion-dollar multinational corporations. That it’s somehow right-wing, rather than common-sense, to want children and young people to move their bodies, to be outdoors, to talk to one another face to face, to go on dates, to get driver’s licenses, to take road trips, to see concerts, to star gaze, to sneak out at night(!), to go to restaurants, to go to parks, to go on walks, to read novels they hold in their hands, to look people in the eye, to play the guitar, to go camping, to visit national parks, to play pick-up basketball, to mow the yard, to join a protest march, to tend a garden, to cook a meal, to paint, to leave the confines of their bedrooms and game rooms, to go to church, to go on a picnic, to have a first kiss—must I go on? No, because everyone knows these are reasonable things to want young people to do, and to learn to do, and even (because there is no other way) to make mistakes and take real risks in trying to learn to do. 
I know plenty of conservatives and plenty of progressives and all of them, not an exception among them, want their kids off social media, off streaming, off smartphones—on them, at a minimum, much less—and want them instead to do something, anything, out there in the bright blue real world we all share and live in together.

I must allow the possibility, however, that I inhabit a tech bubble. There appear to be other worlds out there. The internet says so. In some of them, I’m told, there are tech-free persons, households, and whole communities enjoying life without the tyrannous glare of the Big Five Big Brother staring back at them through their devices. And in other worlds, running parallel to these perhaps, tech is as omnipresent as it is in my neck of the woods, yet it is utterly benign, liberating, life-giving, and above all enhancing of young people’s mental health. The more screens the better, in that world. To know this is to be right-thinking, which is to say, left-thinking: enlightened and progressive and educated. To deny it is right-thinking in the wrong sense: conservative and benighted and backwards.

Oh, well. Perhaps I’ll visit one of these other worlds someday. For the time being, I’m stuck in mine.

Read More
Brad East Brad East

Local church bans smartphones

What if churches showed Jonathan Haidt proof of concept for his clarion call to K–12 schools to ban smartphones? Let’s start now.

Just kidding. But why not? The headline of the latest Atlantic piece by Jonathan Haidt reads: “Ban Phones From All Schools.” The updated version now says: “Get Phones Out of School Now.” (Another one, from earlier: “Phones at School Are a Disaster.” Indeed they are. But why all these different titles for the same piece?)

My question: If smartphones are so bad for school-aged kids, K–12, isn’t it likely they’re just as bad, if not worse, for kids in churches? And not only for 18-year-olds and younger, but for everyone?

What if churches took the lead here, instead of serving once again as a lagging indicator for the wider culture? What if the one place in America where screens and devices, smartphones and social media were not ubiquitous—were not even present at all—was your neighborhood congregation? Humble and out of fashion and perhaps deplorable, that congregation, but not, adamantly and openly and unapologetically not, part of the technological crisis afflicting our society?

Granted, no church is going to ask for your phone at the door. No church is going to frisk you for an iPhone. No church is going to require handing over your Android as a condition of entering the building.

Short of that, churches could do a lot to discourage parishioners from using phones in their buildings or even bringing them inside.

They could begin by not making phones a requirement. For parents of young children, having a phone has become a nonnegotiable; you’re expected to be reachable at any moment, given your child’s behavior or needs during worship or Sunday school.

They could begin by not making smartphones an assumption. For example, by placing physical Bibles in (ahem) Bible classes as well as the sanctuary. By not using QR codes. By not inviting people to “get your phones and open your Bible app” in order to read along with the passage from Scripture.

They could begin by not featuring smartphones within worship. For example, by reading from physical books or programs or print-outs rather than from one’s personal device. By not texting during worship—ever, at all, for any reason. (If you’re someone who is on call, a physician or police officer or what have you, you’re an exception here; at the same time, if you get a call, then step out and take it!) By not, God help me, letting your child play games on your phone during the liturgy. By not, God grant me strength, playing them yourself.

They could begin by communicating, clearly, gently, but directly, that the church has a vision for the role of digital technology within the life of Christian discipleship and that it is the job of the church to form and educate the faithful in accordance with that vision. Not in the service of scrupulosity or works righteousness. In the service, rather, of equipping followers of Jesus to be strong and resilient believers in the face of the greatest challenge facing this generation—especially its young people. And given that vision and formation, it follows that within this community digital technology in general, and screens and smartphones in particular, are not “anything goes.” Not “no holds barred” or “live and let live.” That would be irresponsible. Instead, the church is to be on the vanguard of resisting billion- and trillion-dollar corporations’ bald-faced attempts to suck our souls, our wallets, and our attentions dry. How, after all, can we disciples be wise and patient and alert and unanxious women and men of prayer, who dwell in the word of God, who know how to be still, who listen for the voice of Christ’s Spirit—how can we be any of these things if every second of our lives is fixated on our screens, eyes scrolling indefinitely and infinitely for the latest image, the latest scandal, the latest outrage? How can we be different from anybody else if here, in the midst of God’s people, on the Lord’s Day, gathered to worship in the Spirit, we can’t let go of our digital addictions for even one hour?

Ban devices, I say, from all churches. Beat the schools to it. Show the world we see the problem. Show the world we want to fix it in ourselves before fixing it in others. Show the world we mean business. Get smartphones out of churches now. Show Prof. Haidt proof of concept. Leave Apple and Google and Meta in the car. Be blessedly free for ninety minutes (or more!). Give God your all. Model it for your kids. Demonstrate that it’s possible.

Is it? Could it happen? In your church and mine?

All I can say is, the Lord has done stranger things before…

Read More
Brad East Brad East

A.I. fallacies, academic edition

A dialogue with an imaginary interlocutor regarding A.I., ChatGPT, and the classroom.

ChatGPT is here to stay. We should get used to it.

Why? I’m not used to it, and I don’t plan on getting used to it.

ChatGPT is a tool. The only thing to do with a tool is learn how to use it well.

False. There are all kinds of tools I don’t know how to use, never plan on using, and never plan to learn to use.

But this is an academic tool. We—

No, it isn’t. It’s no more an academic tool than a smartphone is. It’s utterly open-ended in its potential uses.

Our students are using it. We should too.

No, we shouldn’t. My students do all kinds of things I don’t do and would never do.

But we should know what they’re up to.

I do know what they’re up to. They’re using ChatGPT to write their papers.

Perhaps it’s useful!

I’m sure it is. To plagiarize.

Not just to plagiarize. To iterate. To bounce ideas off of. To outline.

As I said.

That’s not plagiarism! The same thing happens with a roommate, or a writing center, or a tutor—or a professor.

False.

Because it’s an algorithm?

Correct.

What makes an algorithm different from a person?

You said it. Do I have to dignify it with an answer?

Humor me.

Among other things: Because a human person—friend, teacher, tutor—does not instantaneously provide paragraphs of script to copy and paste into a paper. Because a human person asks questions in reply. Because a human person prompts further thought, which takes time. ChatGPT doesn’t take time. It’s the negation of temporality in human inquiry.

I’d call that efficiency.

Efficiency is not the end-all, be-all.

It’s good, though.

That depends. I’d say efficiency is a neutral description. Like “innovation” and “creativity.” Sometimes what it describes is good; sometimes what it describes is bad. Sometimes it’s hard to tell which, at least at first.

Give me a break. When is efficiency a bad thing?

Are you serious?

Yes.

Okay. A nuclear weapon is efficient at killing, as is nerve gas.

Give me another break. We’re not talking about murder!

I am. You asked me about cases when efficiency isn’t desirable.

Fine. Non-killing examples, please.

Okay. Driving 100 miles per hour in a school zone. Gets you where you want to go faster.

That’s breaking the law, though.

So? It’s more efficient.

I can see this isn’t going anywhere.

I don’t see why it’s so hard to understand. Efficiency is not good in itself. Cheating on an exam is an “efficient” use of time, if studying would have taken fifteen hours you’d rather have spent doing something else. Fast food is more efficient than cooking your own food, if you have the money. Using Google Translate is more efficient than becoming fluent in a foreign language. Listening to an author on a podcast is more efficient than reading her book cover to cover. Listening to it on 2X is even more efficient.

And?

And: In none of these cases is it self-evident that greater efficiency is actually good or preferable. Even when ethics is not involved—as it is in killing or breaking the law—efficiency is merely one among many factors to consider in a given action, undertaking, or (in this case) technological invention. The mere fact that X is efficient tells us nothing whatsoever about its goodness, and thus nothing whatsoever about whether we should endorse it, bless it, or incorporate it into our lives.

Your solution, then, is ignorance.

I don’t take your meaning.

You want to be ignorant about ChatGPT, language models, and artificial intelligence.

Not at all. What would make you think that?

Because you refuse to use it.

I don’t own or use guns. But I’m not ignorant about them.

Back to killing.

Sure. But your arguments keep failing. I’m not ignorant about A.I. I just don’t spend my time submitting questions to it or having “conversations” with it. I have better things to do.

Like what?

Like pretty much anything.

But you’re an academic! We academics should be knowledgeable about such things!

There you go again. I am knowledgeable. My not wasting time on ChatGPT has nothing to do with knowledge or lack thereof.

But shouldn’t your knowledge be more than theoretical? Shouldn’t you learn to use it well?

What does “well” mean? I’m unpersuaded that modifier applies.

How could you know?

By thinking! By reading and thinking. Try it sometime.

That’s uncalled for.

You’re right. I take it back.

What if there are in fact ways to use AI well?

I guess we’ll find out, won’t we?

You’re being glib again.

This time I’m not. You’re acting like the aim of life, including academic life, is to be on the cutting edge. But it’s not. Besides, the cutting edge is always changing. It’s a moving target. I’m an academic because I’m a dinosaur. My days are spent doing things Plato and Saint Augustine and Saint Thomas and John Calvin spent their days doing. Reading, writing, teaching. I don’t use digital technology in the first or the third. I use it in the second for typing. That’s it. I don’t live life on the edge. I live life moving backwards. The older, the better. If, by some miracle, the latest greatest tech gadgetry not only makes itself ubiquitous and unavoidable in scholarly life but also materially and undeniably improves it, without serious tradeoffs—well, then I’ll find out eventually. But I’m not holding my breath.

Whether or not you stick your head in the sand, your students are using ChatGPT and its competitors. Along with your colleagues, your friends, your pastors, your children.

That may well be true. I don’t deny it. If it is true, it’s cause for lament, not capitulation.

What?

I mean: Just because others are using it doesn’t mean I should join them. (If all your friends jumped off a bridge…)

But you’re an educator! How am I not getting through to you?

I’m as clueless as you are.

If everyone’s using it anyway, and it’s already being incorporated into the way writers compose their essays and professors create their assignments and students compose their papers and pastors compose their sermons and—

I. Don’t. Care. You have yet to show me why I should.

Okay. Let me be practical. Your students’ papers are already using ChatGPT.

Yes, I’m aware.

So how are you going to show them how to use it well in future papers?

I’m not.

What about their papers?

They won’t be writing them.

Come again?

No more computer-drafted papers written from home in my classes. I’m reverting to in-class handwritten essay exams. No prompts in advance. Come prepared, having done the reading. Those, plus the usual weekly reading quizzes.

You can’t be serious.

Why not?

Because that’s backwards.

Exactly! Now you’re getting it.

No, I mean: You’re moving backwards. That’s not the way of the future.

What is this “future” you speak of? I’m not acquainted.

That’s not the way society is heading. Not the way the academy is heading.

So?

So … you’ll be left behind.

No doubt!

Shouldn’t you care about that?

Why would I?

It makes you redundant.

I fail to see how.

Your teaching isn’t best practices!

Best practices? What does that mean? If my pedagogy, ancient and unsexy though it may be, results in greater learning for my students, then by definition it is the best practice possible. Or at least better practice by comparison.

But we’re past all that. That’s the way we used to do things.

Some things we used to do were better than the way we do them now.

That’s what reactionaries say.

That’s what progressives say.

Exactly.

Come on. You’re the one resorting to slogans. I’m the one joking. Quality pedagogy isn’t political in this sense. Are you really wanting to align yourself with Silicon Valley trillionaires? With money-grubbing corporations? With ed-tech snake-oil salesmen? Join the rebels! Join the dissidents! Join the Butlerian Jihad!

Who’s resorting to rhetoric now?

Mine’s in earnest though. I mean it. And I’m putting my money where my mouth is. By not going with the flow. By not doing what I’m told. By resisting every inch the tech overlords want to colonize in my classroom.

Okay. But seriously. You think you can win this fight?

Not at all.

Wait. What? You don’t think you can win?

Of course not. Who said anything about winning?

Why fight then?

Likelihood of winning is not the deciding factor. This is the long defeat, remember. The measure of action is not success but goodness. The question for my classroom is therefore quite simple. Does it enrich teaching and learning, or does it not? Will my students’ ability to read, think, and speak with wisdom, insight, and intellectual depth increase as a result, or not? I have not seen a single argument that suggests using, incorporating, or otherwise introducing my students to ChatGPT will accomplish any of these pedagogical goals. So long as that is the case, I will not let propaganda, money, paralysis, confusion, or pressure of any kind—cultural, social, moral, administrative—persuade me to do what I believe to be a detriment to my students.

You must realize it’s inevitable.

What’s “it”?

You know.

I do. But I reject the premise. As I already said, I’m not going to win. But my classroom is not the world. It’s a microcosm of a different world. That’s the vision of the university I’m willing to defend, to go to the mat for. Screens rule in the world, but not in my little world. We open physical books. I write real words on a physical board. We speak to one another face to face, about what matters most. No laptops open. No smartphones out. No PowerPoint slides. Just words, words, words; texts, texts, texts; minds, minds, minds. I admit that’s not the only good way to teach. But it is a good way. And I protect it with all my might. I’m going to keep protecting it, as long as I’m able.

So you’re not a reactionary. You’re a fanatic.

Names again!

This time I’m the one kidding. I get it. But you’re something of a Luddite.

I don’t reject technology. I reject the assumption that technology created this morning should ipso facto be adopted this evening as self-evidently essential to human flourishing, without question or interrogation or skepticism or sheer time. Give me a hundred years, or better yet, five hundred. By then I’ll get back to you on whether A.I. is good for us. Not to mention good for education and scholarship.

You don’t have that kind of time.

Precisely. That’s why Silicon Valley boosterism is so foolish and anti-intellectual. It’s a cause for know-nothings. It presumes what it cannot know. It endorses what it cannot perceive. It disseminates what it cannot take sufficient time to test. It simply hands out digital grenades at random, hoping no one pulls the pin. No wonder it always blows up in their face.

We’ve gotten off track, and you’ve started sermonizing.

I’m known to do that.

Should we stop?

I think so. You don’t want to see me when I really get going. You wouldn’t like me when I’m angry.


The take temptation

There is an ongoing series of essays being slowly published in successive issues of The New Atlantis I want to commend to you. They’re by Jon Askonas, a friend who teaches politics at Catholic University of America. The title for the series as a whole is “Reality: A Post-Mortem.” The essays are a bit hard to describe, but they make for essential reading. They are an attempt to diagnose the root causes of, and the essential character of, the new state of unreality we find ourselves inhabiting today. The first, brief essay lays out the vision for the series. The second treats the gamified nature of our common life, in particular its analogues in novels, role-playing games, and alternate reality games (ARGs). The latest essay, which just arrived in my mailbox, is called “How Stewart Made Tucker.” Go read them all! (And subscribe to TNA, naturally. I’ve got an essay in the latest issue too.)

For now, I want to make one observation, drawing on something found in essay #2.

Jon writes (in one of a sequence of interludes that interrupt the main flow of the argument):

Several weeks have gone by since you picked your rabbit hole [that is, a specific topic about which there is much chatter but also much nonsense in public discourse and social media]. You have done the research, found a newsletter dedicated to unraveling the story, subscribed to a terrific outlet or podcast, and have learned to recognize widespread falsehoods on the subject. If your uncle happens to mention the subject next Thanksgiving, there is so much you could tell him that he wasn’t aware of.

You check your feed and see that a prominent influencer has posted something that seems revealingly dishonest about your subject of choice. You have, at the tip of your fingers, the hottest and funniest take you have ever taken.

1. What do you do?

a. Post with such fervor that your followers shower you with shares before calling Internet 911 to report an online murder.

b. Draft your post, decide to “check” the “facts,” realize the controversy is more complex than you thought, and lose track of real work while trying to shoehorn your original take into the realm of objectivity.

c. Private-message your take, without checking its veracity, to close friends for the laughs or catharsis.

d. Consign your glorious take to the post trash can.

2. How many seconds did it take you to decide?

3. In however small a way, did your action nudge the world toward or away from a shared reality?

Let’s call this gamified reinforcement mechanism “the take temptation.” It amounts to the meme-ification of our common life and, therefore, of the common good itself. Jon writes earlier in the essay, redescribing the problem behind the problem:

We hear that online life has fragmented our “information ecosystem,” that this breakup has been accelerated by social division, and vice versa. We hear that alienation drives young men to become radicalized on Gab and 4chan. We hear that people who feel that society has left them behind find consolation in QAnon or in anti-vax Facebook groups. We hear about the alone-togetherness of this all.

What we haven’t figured out how to make sense of yet is the fun that many Americans act like they’re having with the national fracture.

Take a moment to reflect on the feeling you get when you see a headline, factoid, or meme that is so perfect, that so neatly addresses some burning controversy or narrative, that you feel compelled to share it. If it seems too good to be true, maybe you’ll pull up Snopes and check it first. But you probably won’t. And even if you do, how much will it really help? Everyone else will spread it anyway. Whether you retweet it or just email it to a friend, the end effect on your network of like-minded contacts — on who believes what — will be the same.

“Confirmation bias” names the idea that people are more likely to believe things that confirm what they already believe. But it does not explain the emotional relish we feel, the sheer delight when something in line with our deepest feelings about the state of the world, something so perfect, comes before us. Those feelings have a lot in common with how we feel when our sports team scores a point or when a dice roll goes our way in a board game.

It’s the relish of the meme, the fun of the hot take—all while the world burns—that Jon wants us to see so that he, in turn, can explain it. I leave the explanation to him. For my part, I’m going to do a bit of moralizing, aimed at myself first but offered here as a bit of stern encouragement to anyone who’s apt to listen.

The moral is simple: The take temptation is to be resisted at all costs, full stop. The take-industrial complex is not a bit of fun at the expense of others. It’s not a victimless joke. It is nothing less than your or my small but willing participation in unraveling the social fabric. It is the false catharsis that comes from treating the goods in common we hope to share as a game, to be won or lost by cheap jokes and glib asides. Nor does it matter if you reserve the take or meme for like-minded friends. In a sense that’s worse. The tribe is thereby reinforced and the Other thereby rendered further, stranger, more alien than before. You’re still perpetuating the habit to which we’re all addicted and from which we all need deliverance. You’re still feeding the beast. You’re still heeding the sly voice of the tempter, whose every word is a lie.

The only alternative to the take temptation is the absolutely uncool, unrewarding, and unremunerative practice of charity for enemies, generosity of spirit, plainness of prose, and perfect earnestness in argument. The lack of irony is painful, I know; the lack of sarcasm, boring; the lack of grievance, pitiful. So be it. Begin to heal the earth by refusing to litter; don’t wish the world rid of litter while tossing a Coke can out the window.

This means not reveling in the losses of your enemies, which is to say, those friends and neighbors for whom Christ died with whom you disagree. It means not joking about that denomination’s woes. It means not exaggerating or misrepresenting the views of another person, no matter what they believe, no matter their character, no matter who they are. It means not pretending that anyone is beyond the pale. It means not ridiculing anyone, ever, for any reason. It means, practically speaking, not posting a single word to Twitter, Instagram, Facebook, or any other instrument of our digital commons’ escalating fracture. It means practicing what you already know to be true, which is that ninety-nine times out of one hundred, the world doesn’t need to know what you think, when you think it, by online means.

The task feels nigh impossible. But resistance isn’t futile in this case. Every minor success counts. Start today. You won’t be sorry. Nor will the world.


Deflating tech catastrophism

There’s no better way to deflate my proclivities for catastrophism—a lifelong project of my long-suffering wife—than writers I respect appealing to authorities like St. Augustine and Wendell Berry. And that’s just what my friends Jeff Bilbro and Alan Jacobs have done in two pieces this week responding to my despairing reflections on digital technology, prompted by Andy Crouch’s wonderful new book, The Life We’re Looking For.

I’m honored by their lovely, invigorating, and stimulating correctives. I think both of them are largely right, and what anyone reading Crouch-on-tech, East-on-Crouch, Bilbro-on-East-on-Crouch, East-on-tech, Jacobs-on-East-on-tech, etc., will see quickly is how much this conversation is a matter of minor disagreements rendered intelligible in light of shared first principles. How rare it is to have more light than heat in online (“bloggy”) disputations!

So thanks to them both. I don’t want to add another meandering torrent of words, as I’m wont to do, so let me aim for clarity (I would say concision, but then we all know that’s not in play): first in what we agree about, second in what we perhaps don’t.

Agreements:

  1. Andy’s book is fantastic! Everyone should buy it and do their utmost to implement its wisdom in their lives and the lives of their households.

  2. The measure of a vision of the good life or even its enactment is not found in its likelihood either (a) to effect massive political transformation or (b) to elicit agreement and adoption in a high percentage of people’s lives.

  3. Digital is not the problem per se; Mammon is. (Both Jeff and Alan make this point, but I’ll quote Alan here: “the Digital is a wholly-owned subsidiary of Mammon.” I’ll be pocketing that line for later use, thank you very much.)

  4. We cannot expect anything like perfection or wholesale “health”—of the technological or any other kind—in this life. Our attempts at flourishing will always be imperfect, fallible, and riddled with sin.

  5. Christians are called to live in a manner distinct from the world, so the task of resisting Mammon’s uses of Digital falls to us as a matter of discipleship to Christ regardless of the prospects of our success.

  6. Actual non-metaphorical revolutionary political change, whether bottom-up or top-down, is not in the cards, and (almost?) certainly would bring about an equally unjust or even worse state of affairs. Swapping one politico-technological regime for another turns out to mean little more than: meet the new boss, same as the old boss. A difference in degree, not in kind.

  7. What we need is hope, and Christians have good grounds for hope—though not for optimism, short of the Kingdom.

  8. What is possible, in faith and hope, here and now, is a reorientation (even a revolution) of the heart, following Augustine. That is possible in this life, because Christ makes it possible. Jeff, Alan, and Andy are therefore asking: Which way are we facing? And what would it take to start putting one foot in front of the other in the right direction? Yes. Those are the correct questions, and they can be answered. And though I (I think defensibly) use the language of principalities and powers with respect to Digital, I do not disagree that it is not impossible—check out those negatives piling up one on another—for our digital technologies to be bent in the direction of the good, the true, and the beautiful. Which is to say, toward Christ’s Kingdom.

Now to disagreements, which may not amount to disagreements; so let us call them lingering queries for further pondering:

  1. For whom is this vision—the one outlined above and found in Andy’s book—meant? That is, is it meant for Christians or for society as a whole? I can buy that it is meant for the church, for some of the theological premises and commitments I’ve already mentioned. I’m less persuaded, or perhaps need persuading, that it is one that “fits” a globalized secular liberal democracy, or at least ours, as it stands at the moment.

  2. Stipulate that it is not impossible for this vision to be implemented by certain ordinary folks (granting, with Jeff, that Christians are called not to be normies but to be saints: touché!). I raised questions of class in my review and my blog posts, but I didn’t see class come up in Jeff or Alan’s responses. My worry, plainly stated, is that middle-to-upper-middle-class Americans with college degrees, together with all the familial and social and financial capital that comes along with that status, are indeed capable of exercising prudence and discipline in their use of digital technology—and that everyone else is not. This is what I meant in the last post when I drew attention to the material conditions of Digital. It seems to me that the digital architecture of our lives, which in turn generates the social scripts in which and by which we understand and “author” our lives, has proven most disastrous for poor and working class folks, especially families. They aren’t the only people I mean by “normies,” but they certainly fall into that category. It isn’t Andy et al’s job to have a fix for this problem. But I do wonder whether they agree with me here, that it is not inaccurate to describe one’s ability to extricate oneself even somewhat from Digital’s reach as being a function of a certain class and/or educational privilege.

  3. In which case, I want to ask the practical question: How might we expand our vision of the good life under Mammon’s Digital reign to include poor and working class families?—a vision, in other words, that such people would find both attractive and achievable.

  4. If pursuing the good life is not impossible, and if it begins with a reorientation of the heart to the God we find revealed in Christ, then it seems to me that—as I believe Jeff, Alan, and Andy all agree—we cannot do this alone. On one hand, as we’ve already seen, we require certain material conditions. On the other hand, we need a community. But that word is too weak. What we need is the church. This is where my despairing mood comes in the back door. As I’ve written elsewhere, the church is in tatters. I do not look around and see a church capable of producing or sustaining, much less leading, prudent wisdom in managing the temptations of Digital. I see, or at least I feel, abject capitulation. Churches might be the last place I’d look for leadership or help here. Not because they’re especially bad, but because they’re the same as everyone else. I mean this question sincerely: Is your local church different, in terms of its use of and reliance on and presumptions about technology, than your local public schools, your local gyms, your local coffee shops? Likewise, are your church’s leaders or its members different, in terms of their relationship to Digital, than your non-Christian neighbors? If so, blessings upon you. That’s not my experience. And in any case, I don’t mean this as some sort of trump card. If our churches are failing (and they are), then it’s up to us to care for them, to love them, and to do what we can to fix what’s ailing them, under God. Moreover, the promise of Christ stands, whatever the disrepair of the church in America: the gates of hell shall not prevail against his people. That is as true now as it ever was, and it will remain true till the end of time. Which means, I imagine my friendly interlocutors would agree, that we not only may have hope, but may trust that God’s grace will be sufficient to the tasks he’s given us—in this case, the task of being faithful in a digital age. Yes and amen to all of that. 
The point I want to close with is more practical, more a matter of lived experience. If we need (a) the spiritual precondition of a reasonably healthy church community on top of (b) the material precondition of affluence-plus-college in order (c) to adopt modest, though real, habits of resistance to Mammon-cum-Digital … that’s a tall order! I hereby drop my claim that it is not doable, along with my wistful musings about a Butlerian Jihad from above. Nevertheless. It is profoundly dispiriting to face the full height of this particular mountain. Yes, we must climb it. Yes, it’s good to know I’ve got brothers in arms ready to do it together; we don’t have to go it alone. But man, right now, if I’m honest, all I see is how high the summit reaches. So high you can’t see to the top of it.


Tech for normies

On Monday The New Atlantis published my review essay of Andy Crouch’s new book, The Life We’re Looking For. The next day I wrote up a longish blog post responding to my friend Jeff Bilbro’s comment about the review, which saw a discrepancy between some of the critical questions I closed the essay with and an essay I wrote last year on Wendell Berry. Yesterday I wrote a seemingly unrelated post about the difference between radical churches (urban monastics, intentional communities, house churches, all to varying degrees partaking of the Hauerwasian or Yoderian style of ecclesiology) and what I called “church for normies.”

That last post was of a piece with the first two, however, and provides some deep background to where I was coming from in answering some of Jeff and Andy’s questions. For readers who haven’t been keeping up with this torrent of words, my review of Andy’s book was extremely positive. The primary question it left me with, though, was (a) whether his beautiful vision of humane life in a technological world is possible, (b) whether, if it is possible, it is possible for any but the few, and (c) whether, however many it turns out to be possible for, it is liable to make a difference to any but those who take up the costly but life-giving challenge of enacting said vision—that is, whether it is likely or even possible to be an agent of change (slow or fast) in our common social and political and ecclesial life.

I admit that my stance evinces a despairing tone or even perspective. But let’s call it pessimistic for now. I’m pessimistic about the chances, here and now, for many or even any to embody the vision Andy lays out in his book—a vision I find heartening, inspiring, and apt to our needs and desires if we are to flourish as human beings in community.

Given my comments about church for normies yesterday, I thought I would write up one final post (“Ha! Final!” his readers, numbered in the dozens, exclaimed) summing up my thoughts on the topic and putting a pinch of nuance on some of my claims—not to say the rhetoric or metaphors will be any less feisty.

Here’s a stab at that summing up, in fourteen theses.

*

1. Digital technology is misunderstood if it is categorized as merely one more species of the larger genus “technology,” to which belong categories or terms like “tool,” “fire,” “wheel,” “writing,” “language,” “boat,” “airplane,” etc. It is a beast of its own, a whole new animal.

2. Digital technology is absolutely and almost ineffably pervasive in our lives. It is omnipresent. It has found its way into every nook and cranny of our homes and workplaces and spheres of leisure.

3. The ubiquity of Digital (hereafter capitalized as a power unto itself) is not limited to this or that sort of person, much less this or that class. It’s everywhere and pertains to everyone, certainly in our society but, now or very soon, in all societies.

4. Digital’s hegemony is neither neutral nor a matter of choice. It constitutes the warp and woof of the material conditions that make our lives possible. Daycares deploy it. Public schools feature it. Colleges make it essential. Rare is the job that does not depend on it. One does not choose to belong to the Domain of Digital. One belongs to it, today, by being born.

5. Digital is best understood, for Christians, as a principality and power. It is a seductive and agential force that lures and attracts, subdues and coopts the will. It makes us want what it wants. It addicts us. It redirects our desires. It captures and controls our attention. It wants, in a word, to eat us alive.

6. If the foregoing description is even partially true, then finding our way through the Age of Digital, as Christians or just as decent human beings, is not only an epochal and heretofore unfaced challenge. It entails the transformation of the very material conditions in which our lives consist. It is a matter, to repeat the word I use in my TNA review, of revolution. Anything short of that, so far as I can tell, is not rising to the level of the problem we face.

7. At least three implications follow. First, technological health is not and cannot be merely an individual choice. The individual, plainly put, is not strong enough. She will be overwhelmed. She will be defeated. (And even if she is not—if we imagine the proverbial saint moving to the desert with a few other hermits—then the exception proves the rule.)

8. Second, modest changes aren’t going to cut it. Sure, you can put your phone in grayscale; you can limit your “screen time” as you’re able; you can ask Freedom to block certain websites; you can discipline your social media usage or even deactivate your accounts. But we’re talking world-historical dominance here. Nor should we kid ourselves. Digital is still ruling my life whether or not I subscribe to two instead of six streaming platforms, whether or not I’m on my phone two hours instead of four every day, whether or not my kids play Nintendo Switch on the TV but not in handheld mode. Whenever we feel a measure of pride in these minor decisions, we should think of this scene:

Do we feel in charge? We are not.

9. Third, our households are not the world, and we live in the world, even if we hope not to be of it. Even if my household manages some kind of truce with the Prince of this Age—I refer to the titans of Silicon Valley—every member of my household departs daily from it and enters the world. We know who’s in charge there. In fact, if you don’t count time sleeping, the members of my own house live, week to week, more outside the home than they do inside it. Digital awaits them. It’s patient. It’ll do its work. Its bleak liturgies have all the time in the world. We just have to submit. And submit we do, every day.

10. But the truth is that the line between household and world runs through every home. We bring the world in with us through the front door. How could it be otherwise? Amazon’s listening ears and Netflix’s latest streamer and Google’s newest unread email and Spotify’s perfect algorithm—they’re all there, at home, in your pocket or on the mantle or in the living room, staring you down, calling your name, summoning and inquiring and inviting, even teaching. Their formative power is not out there. It’s in here. Every home I’ve ever entered, It was there, whose name is Legion, the household gods duly honored and made welcome.

11. Jeff rightly pushed back on this “everyone” and “everywhere” line in my earlier post. I should be clear that I’m not exaggerating: while I have read of folks who don’t have TVs or video games or tablets or smartphones or wireless internet, I haven’t personally met any. But I allow that some exist. This means that, to some extent, tech-wise living is possible. But for whom? For how many? That’s the question.

12. The fundamental issue, then, is tech for normies. By which I mean: Is tech-wise living possible for ordinary people? People who don’t belong to intentional communities? People without college or graduate degrees? People who aren’t married or aren’t in healthy marriages, or who are parents but unmarried? Is it possible for working-class families? For families whose parents work double shifts, or households with a single parent who works? For kids who go to daycare or public school? For folks who attend churches that themselves encourage and even require constant active smartphone use? (“Please read along on your Bible app”; “Please register your child at this kiosk, we’ll send a text if we need you to come pick her up.”) From the bottom of my heart, with unfeigned sincerity, I do not believe that it is. And if it is not, what are we left with?

13. This is what I mean when I refer to matching the scale of the problem. Ordinary people live according to antecedent material conditions and social scripts, both of which precede and set the terms for what individuals and families tacitly perceive to characterize “a normal life.” But the material conditions and the social scripts that define our life today are funded, overwritten, and determined by Digital. That is why, for example, the child of friends of mine here in little ol’ Abilene, Texas, was one of exactly two high school freshmen in our local public high school who did not have a smartphone—and why, before fall semester was done, they bought him one. Not because the peer pressure was too intense. Because the pressure from teachers and administrators and coaches was insurmountable. Assignments weren’t being turned in, grades were falling, rehearsals and practices were being missed, all because the educational ecosystem had begun, sometime in the previous decade, to presuppose the presence of a smartphone in the hand, pocket, purse, or backpack of every single student and adult in the school. It is now the center around which all else orbits. The pull, the need, to buy a smartphone proves, in the end, irresistible. It doesn’t matter what you, the individual, or y’all, the household, want. Resistance is futile.

14. Now. Must this lead to despair? Does this imply that resistance to evil is impossible? That there is nothing to be done? That we are at the end of history? No. Those conclusions need not follow necessarily. I don’t think that digital technology as such or in every respect is pure evil. This isn’t the triumph of darkness over light. My children watching Encanto or playing Mario Kart is not the abomination of desolation, nor is my writing these words on a laptop. My point concerns the role and influence and ubiquity of Digital as a power and force in our lives and, more broadly, in our common life. It is that that is diabolical. And it is that that is a wicked problem. Which means it is not a problem that individuals or families have the resources or wherewithal to address on their own—any more than, if the water supply in the state of Texas dried up, this or that person or household could “choose” to resolve the issue on their own. This is why I insisted in my original review that there is something inescapably political, even top-down, about a comprehensive or potentially successful response to Digital’s reign over us. Yes, by all means we should begin trying to rewrite some of the social scripts, so far as our time and ability permit. (I’m less sanguine even here, but I grant that it’s possible in small though important ways.) Nevertheless the material conditions must change for any such minor measures to take hold, not just at wider scale but in the lives of ordinary people. If you’re willing to accept the metaphor of addiction—and I think it’s more than a metaphor in this case—then what we need is for the authorities to turn off the supply, to clamp down on the free flow of the drug we all woke up one day to realize we were hooked on. The thing about a drug is that it feels good. We’re all jonesing for one more hit, click by click, swipe by swipe, like by like. What we need is rehab. But few people check themselves in voluntarily. 
What most addicts need, most of the time, is what most of us, today, need above all.

An intervention.


Tech-wise BenOp

My friend Jeff Bilbro has raised a question about my review essay in The New Atlantis of Andy Crouch’s new book, The Life We’re Looking For: Reclaiming Relationship in a Technological World. He sees a real tension between the critical questions I pose for Crouch at the end of the review and my essay last year for The Point, in which I defend Wendell Berry against the charge of quietism or apolitical inaction (lodged, in this case, by critic George Scialabba). If, that is, I argue that Berry is right to insist that living well is worth it even when losing is likely—in other words, when the causes in which one believes and for which one advocates are unlikely to win the day—am I not being inconsistent in criticizing Crouch’s proposal for failing to match the scale of the problem facing us in digital technology? Am I not taking up the role of Scialabba and saying, in so many words, “Lovely prose; bad advice”?

I don’t believe I am, but the question is a sharp one, and I’m on the hook for it. Let me see if I can explain myself.

First, note well that my review is overwhelmingly positive and that I say repeatedly in the closing sections of the essay that Crouch’s proposal is a sensible one; that it may, in fact, be the best on offer; and that it is worth attempting to implement whether or not there is a more scalable alternative to be preferred.

Second, my initial criticism concerns audience. In effect I am asking: Who can put this vision into practice? Who is capable of doing it? Whom is it for? With respect to Berry/Scialabba, that question is immaterial. Scialabba isn’t frustrated or confused by Berry’s intended audience; he actively does not want Berry to be successful in persuading others to adopt his views, because doing so would drain the resources necessary for mass political activism to be effective. Put differently, the Berryan vision is possible, though strenuous. Whereas it isn’t clear to me that Crouch’s vision is possible at all—or at least the question of for whom it may be possible is unclear to me.

Third, then, I want to up the ante on the Crouchian project by comparing its scale to the scale of the problem facing us, on one hand, and by asking after its purpose, on the other. It seems to me that The Life We’re Looking For does believe, or presuppose, that the Tech-Wise BenOp (or, if we want to uncouple Crouch from Dreher, the Pauline Option) has the power to effect, or is ordered to, the transformation of our common life, our culture, etc. Granted that such transformation may take decades or centuries, transformation is clearly in view. But this, too, is distinct from Berry’s stance. Berry does not believe his vision of the good life is a recipe for transformation. He does believe that large-scale transformation is impossible apart from local and even personal transformation. That, however, is a different matter than proposing a means for change. In sum, Berry believes that (1) the good life is worth living whatever the future may hold, (2) the good life is not a plan for change, and (3) the possibility of change requires the integration of national and local, cultural and personal, theoretical and practical. I affirm all this. But these points are distinct from (though not opposed to) Crouch’s proposal.

Returning to scale helps to clarify the difference. I admit in the review that it may genuinely be impossible to match the scale of the problem of digital technology without grave injustice. Nonetheless I hold that, given that scale, I cannot see how Crouch’s Pauline Option is a live possibility for any but saints. And as I say there, salvation from the tyranny of tech “must be for normies, not heroes.”

Let me make this more personal. Across my entire life I have not known a single household or family that fits the vision of being “tech wise” as laid out in either this book or Crouch’s previous book. Whether the folks in question were single, married, or parents, whether they were Christians or not, whether they were affluent or not, whether they were Texan or not, whether they were suburban or not, whether they were educated or not—the inside of the home and the habits of the household were all more or less the same, granting minor differences. Everyone has multiple TVs. Everyone has laptops and tablets. Everyone has video games. Everyone has smartphones. Everyone subscribes to streaming services. Everyone watches sports. Everyone is on social media. Everyone, everyone, everyone. No exceptions. The only differences concern which poison one prefers and how much time one gives to it.

I’m not throwing stones. This description includes me. I assume it includes you, too. The hegemony of the screen is ubiquitous, an octopus whose tentacles encircle and invade every one of our homes. No one, not one is excluded.

Some folks are more intentional than others; some of them even succeed in certain practices of moderation. But does it really make a difference? Is it really anything to write home about? Does it mark these homes off from their neighbors? Not at all. I repeat: Not once have I entered a single home that even somewhat resembles the (already non-extreme!) vision of tech-wisdom on offer in the pages of Crouch’s books.

This is what I mean by scale. It’s like we’re all on the bottom of the ocean, but some of us are a few yards above the rest. Are such persons technically closer to the surface? Sure. Are they still going to drown like the rest of us? Absolutely.

*

I hope all this makes clear that I’m not contesting the wisdom or goodness or beauty of Crouch’s vision of households nurturing a technological revolution in nuce. I want to join such a resistance movement. But does it exist? More to the point, is it possible?

What I’ve come to believe is that, more or less full stop, it is not possible—so long, that is, as our households remain occupied territory. The flag of Silicon Valley waves publicly and proudly in all of our homes. I see it everywhere I go. It’s like the face of Big Brother. It just keeps on flapping and waving, waving and smiling, world without end, amen.

Perhaps “scale” is a misleading term. More than scale the challenge is how deep the roots of the problem lie. Truly to get a handle on it, truly to begin the revolution, an EMP would have to be detonated in my neighborhood. We’d have to throw our screens in a great glorious bonfire, turn off our wi-fi, and rid our homes of every “smart” device (falsely so called) and every member in that dubious, diabolical category: “the internet of things.” We’d have to delete Twitter, Instagram, Facebook, Snapchat, and TikTok from our phones. We’d have to cancel our subscriptions to Netflix, Disney, Apple, HBO, Hulu, and Amazon. We’d have to say goodbye to it all, and start over.

I don’t mean we have to live in a post-digital world to live sane lives. (Though some days I do wonder whether that may be true: viva la Butlerian Jihad!) I mean our lives are already so integrated with digital as to qualify as transhuman. We must face that fact squarely: If we are already cyborgs in practice, then disconnecting a few of the tubes while remaining otherwise hooked up to the Collective isn’t going to cut it.

Nor—and this is a buried lede—is any of this possible, if it is possible, for any but the hyper-educated or hyper-affluent. Most people, as I comment in the review, are just trying to survive:

We are too beholden to the economic and digital realities of modern life — too dependent on credit, too anxious about paying the rent, too distracted by Twitter, too reliant on Amazon, too deadened by Pornhub — to be in a position to opt for an alternative vision, much less to realize that one exists. We’ve got ends to meet. And at the end of the day, binging Netflix numbs the stress with far fewer consequences than opioids.

Yet all the hyper-educated and hyper-affluent people I know are just as plugged in as those with fewer degrees and less money. Put most starkly, I read Crouch’s book as if it were a sermon preached by an ex-Borg to the Borg Hive. But individual Borg aren’t capable of disconnecting themselves. That’s what makes them the Borg.

As they say, resistance is futile.

*

My metaphors and rhetoric are outstripping themselves here, so let me pull it back a bit, not least because the point of this post isn’t to criticize Crouch’s book but to show that my (modestly!) critical questions aren’t at odds with my defense of Berry.

Let me summarize my main points, before I add one final word about scale, that word I keep using but not quite defining or addressing.

  1. Crouch’s book is an excellent and beautiful vision of what it means to be human, at all times and especially today, in a world beset by digital technology.

  2. I don’t know whether Crouch takes that vision to be achievable by just anyone at all; and, if not, then by whom in particular.

  3. I don’t know whether Crouch’s vision is possible in principle, at least for normal people with normal jobs and normal lives.

  4. Even if it were possible in principle for the few saints and heroes among us, I don’t know whether it would make a difference except to themselves.

  5. This last observation is not a criticism in itself, but it becomes a criticism if Crouch believes that cultural transformation occurs from the ground up through the patient faithfulness of a tiny minority of persons leavening society by their witness, eventuating in radical social transformation.

  6. Points two through five are not in tension with my defense of Berry against Scialabba, because (a) Berry’s vision is livable, (b) it is livable by normies, (c) it is not designed or proposed in order to effectuate mass change, and (d) he knows this and believes it is worth doing anyway.

Clearly, I have set myself up here to be disproved: If Crouch’s vision is not only livable in general but is being lived right now, as we speak, by normies, then he’s off the hook and I’ve got pie on my face. More, if he doesn’t believe that—or is not invested in the likelihood that—this vision, put into practice by normal folks, will or should lead to social, cultural, economic, and political transformation, then that’s a second pie on top of the first, and I hereby pledge to repent in digital dust and ashes.

Nothing would make me happier than being shown to be wrong here. I want Crouch to be right, because I want nothing more than for my life and the lives of my friends and neighbors (and, above all, those of my children) to be free of the derelictions and defacements of digital. Not only that, but there’s no one I trust more on this issue than Crouch. I assign The Tech-Wise Family to my students every year, and practices he commends there have made their way into my home. I owe him many debts.

But I just can’t shake the feeling that the problem is even bigger, even nastier, even deeper and more threatening than he or any of us can find it within us to admit. That’s what I mean when I refer to “scale.” Permit me to advert to one last overwrought analogy. Berry wants us (among other things) to live within limits, on a plot of land that we work by our own hands to bring forth some allotment of food for us, our household, our neighbors, our animals. He doesn’t ask us to breathe unpolluted oxygen, to live on a planet without air pollution. That’s now, regrettably, a fact of life; it encompasses us all. By contrast, reading The Life We’re Looking For I get one of two feelings: either that unpolluted oxygen is available, you just have to know where to find it; or that the pollution isn’t so bad after all. Maybe there really are folks who’ve fashioned or found oxygen masks here and there around the globe. Maybe I’ve just been unfortunate not to have spotted any. But I fear there are none, or there aren’t nearly enough to go around.

In brief, the Hive isn’t somewhere else or other than us; we are the Hive, and the Hive is us. It’s just this once-blue planet spinning in space, now overtaken by the tunnels and tubes, the darkness and silence of the Cube. If there’s a way out of this digital labyrinth, I’m all ears. All day long I’m looking for that crimson thread, showing the way out. If someone—Bilbro, Berry, Crouch, whoever—can lead the way, I’ll follow. The worry that keeps me up nights, though, is that there is no exit, and we’re deceiving ourselves imagining there is.


Six months without podcasts

Last September I wrote a half-serious, half-tongue-in-cheek post called “Quit Podcasts.” There I followed my friend Matt Anderson’s recommendation to “Quit Netflix” with the even more unpopular suggestion to quit listening to podcasts. As I say in the post, the suggestion was two-thirds troll, one-third sincere. That is, I was doing some public teasing, poking the bear of everyone’s absolutely earnest obsession with listening to The Best Podcasts all day every day. Ten years ago, in a group of twentysomethings, the conversation would eventually turn to what everyone was watching. These days, in a group of thirtysomethings, the conversation inexorably turns to podcasts. So yes, I was having a bit of fun.

But not only fun. After 14 years of listening to podcasts on a more or less daily basis, I was ready for something new. Earlier in the year I’d begun listening to audiobooks in earnest, and in September I decided to give up podcasts for audiobooks for good—or at least, for a while, to see how I liked it. Going back and forth between audiobooks and podcasts had been fine, but when the decision is between a healthy meal and a candy bar, you’re usually going to opt for the candy bar. So I cut out the treats and opted for some real food.

That was six months ago. How’s the experiment gone? As well as I could have hoped for. Better, in fact. I haven’t missed podcasts once, and it’s been nothing but a pleasure making time for more books in my life.

Now, before I say why, I suppose the disclaimer is necessary: Am I pronouncing from on high that no one should listen to podcasts, or that all podcasts are merely candy bars, or some such thing? No. But: If you relate to my experience with podcasts, and you’re wondering whether you might like a change, then I do commend giving them up. To paraphrase Don Draper, it will shock you how much you won’t miss them, almost like you never listened to them in the first place.

So why has it been so lovely, life sans pods? Let me count the ways.

1. More books. In the last 12 months I listened to two dozen works of fiction and nonfiction by C. S. Lewis and G. K. Chesterton alone. Apart from the delight of reading such wonderful classics again, what do you think is more enriching for my ears and mind? Literally any podcast produced today? Or Lewis/Chesterton? The question answers itself.

2. Not just “more” books, but books I wouldn’t otherwise have made the time to read. I listened to Fahrenheit 451, for example. I hadn’t read it since middle school. I find that I can’t do lengthy, complex, new fiction on audio, but if it’s a simple story, or on the shorter side, or one whose basic thrust I already understand, it goes down well. I’ve been in a dystopian mood lately, and felt like revisiting Bradbury, Orwell, Huxley, et al. But with a busy semester, sick kids, long evenings, finding snatches of time in which to get a novel in can be difficult. But I always have to clean the house and do the dishes. Hey presto! Done and done. Many birds with one stone.

3. Though I do subscribe to Audible (for a number of reasons), I also use Libby, which is a nice way to read/listen to new books without buying them. That’s what I did with Oliver Burkeman’s Four Thousand Weeks—another book that works well on audio. I’ve never been much of a local library patron, except for using university libraries for academic books. This is one way to patronize my town’s library system while avoiding spending money I don’t have on books I may not read anytime soon.

4. I relate to Tyler Cowen’s self-description as an “infovore.” Ever since I was young I have wanted to be “in the know.” I want to be up to date. I want to have read and seen and heard all the things. I want to be able to remark intelligently on that op-ed or that Twitter thread or that streaming show or that podcast. Or, as it happens, that unprovoked war in eastern Europe. But it turns out that Rolf Dobelli is right. I don’t need to know any of that. I don’t need to be “in the know” at all. Seven-tenths is evanescent. Two-tenths is immaterial to my life. One-tenth I’ll get around to knowing at some point, though even then I will, like everyone else, overestimate its urgency.

That’s what podcasts represent to me: either junk food entertainment or substantive commentary on current events. To the extent that that is what podcasts are, I am a better person—less anxious, more contemplative, more thoughtful, less showy—for having given them up.

Now, does this description apply to every podcast? No. And yet: Do even the “serious” podcasts function in this way more often than we might want to admit? Yes.

In any case, becoming “news-resilient,” to use Burkeman’s phrase, has been one of the best decisions I’ve made in a long time. My daily life is not determined by headlines—print, digital, or aural. Nor do I know what the editors at The Ringer thought of The Batman, or what Ezra Klein thinks of Ukraine, or what the editors at National Review think of Ukraine. The truth is, I don’t need to know. Justin E. H. Smith and Paul Kingsnorth are right: the number of people who couldn’t locate Ukraine on a map six weeks ago who are now Ukraine-ophiles with strong opinions about no-fly zones and oil sanctions would be funny, if the phenomenon of which they are a part weren’t so dangerous.

I don’t have an opinion about Ukraine, except that Putin was wrong to invade, is unjust for having done so, and should stop immediately. Besides praying for the victims and refugees and for an immediate cessation to hostilities, there is nothing else I can do—and I shouldn’t pretend otherwise. That isn’t a catchall prohibition, as though others should not take the time, slowly, to learn about the people of Ukraine, Soviet and Russian history, etc., etc. Anyone who does that is spending their time wisely.

But podcasts ain’t gonna cut it. Even the most sober ones amount to little more than propaganda. And we should all avoid that like the plague, doubly so in wartime.

The same goes for Twitter. But then, I quit that last week, too. Are you sensing a theme? Podcasts aren’t social media, but they aren’t not social media, either. And the best thing to do with all of it is simple.

Sign off.


Lent: no Twitter + new piece in FT

As promised a few weeks back, I deactivated my Twitter profile this morning. (I thought to link to it and then realized … there’s nothing to link to!) It’s the first time I’ve done so since creating an account nine years ago. It was reading Thomas à Kempis that spurred me seriously to contemplate it; then it was reading Bonhoeffer for class this morning that prompted me just to get it over with already. Mortify the flesh, doubly so when that mortification is digital.

So you won’t see me on Twitter between now and Easter. We’ll see if I reappear after that. TBD.

While I’m at it, though, I’ve got a little piece over in First Things today. It’s about Ash Wednesday, naturally, and it’s called “Marked by Death.” While you’re there, read Peter Leithart’s lovely, moving reflection on the silence of Jesus in St. Matthew’s Gospel.

A blessed Lent to all of you.


The meta mafia

Here’s something Mark Zuckerberg said during his indescribably weird announcement video hawking Facebook’s shift into the so-called metaverse:

There are going to be new ways of interacting with devices that are much more natural. Instead of typing or tapping, you’re going to be able to gesture with your hands, say a few words, or even just make things happen by thinking about them. Your devices won’t be the focal point of your attention anymore. Instead of getting in the way, they’re going to give you a sense of presence in the new experiences that you’re having and the people who you’re with. And these are some of the basic concepts for the metaverse.

There are many things to say about this little snippet—not to mention the rest of his address. (I’m eagerly awaiting the 10,000-word blog-disquisition on the Zuck’s repeated reference to “presence” by contradistinction to real presence.) But here’s the main thing I want to home in on.

In our present practice, per the CEO of Facebook-cum-Meta, devices have become the “focal point” of our “attention.” Whereas in the future being fashioned by Meta, our devices will no longer “get in the way” of our “presence” to one another. Instead, those pesky devices now out of the way, the obstacles will thereby be cleared for the metaverse to facilitate, mediate, and enhance the “sense of presence” we’ve lost in the device-and-platform obsessions of yesteryear.

Now suppose, as a thought experiment, that an anti–Silicon Valley Luddite wanted to craft a statement and put it in the mouth of someone who stands for all that’s wrong with the digital age and our new digital overlords—a statement rife with subtle meanings legible only to the initiated, crying out for subtextual Straussian interpretation. Would one word be different in Zuckerberg’s script?

To wit: a dialogue.

*

Question: Who introduced those ubiquitous devices, studded with social media apps, that now suck up all our attention, robbing us of our sense of presence?

Answer: Steve Jobs, in cahoots with Mark Zuckerberg, circa 15 years ago.

Question: And what is the solution to the problem posed by those devices—a problem ranging from partisan polarization to teen addiction to screens to social anomie to body image issues to political unrest to reduced attention spans to diffuse persistent low-grade anxiety to massive efforts at both disinformation and censorship—a problem, recall, introduced by (among others) Mr. Zuckerberg, which absorbs and deprives us all, especially young people, of the presence and focus necessary to inhabit and navigate the real world?

Answer: The metaverse.

Question: And what is that?

Answer: An all-encompassing digital “world” available via virtual reality headsets, in which people, including young people, may “hang out” and “socialize” (and “teleport” and, the Zuck’s go-to weasel word, “connect”) for hours on end with “avatars” of strangers that look like nothing so much as gooey Minecraft simulacra of human beings.

Question: And who, pray tell, is the creator of the metaverse?

Answer: Why, the creator of Facebook: Mark Zuckerberg, CEO of Meta.

Question: So the problem with our devices is not that they absorb our attention, but that they fall short of absolute absorption?

Answer: Exactly!

Question: So the way the metaverse resolves the problem of our devices’ being the focal point of our attention is by pulling us into the devices in order to live inside them? So that they are no longer “between” or “before” us but around us?

Answer: Yes! You’re getting closer.

Question: Which means the only solution to the problem of our devices is more and better devices?

Answer: Yes! Yes! You’re almost there!

Question: Which means the author of the problem is the author of the solution?

Answer: You’ve got it. You’re there.

Question: In sum, although the short-sighted and foolish-minded among us might suppose that the “obvious solution” to the problem of Facebook “is to get rid of Facebook,” the wise masters of the Valley propose the opposite: “Get rid of the world”?

Answer: This is the way. You’ve arrived. Now you love Big Zuckerberg.

*

Lest there be any doubt, there is a name for this hustle. It’s a racket. When a tough guy smashes the windows of your shop, after which one of his friends comes by proposing to protect you from local rowdy types, only (it goes without saying) he requires a weekly payment under the table, you know what’s happening. You’re not grateful; you don’t welcome the proposal. It’s the mafia. And mob protection is fake protection. It’s not the real thing. The threat and the fix wear the same face. Maybe you aren’t in a position to decline, but either way you’re made to be a fool. Because the humiliation is the point.

Mark Zuckerberg is looking us in the eyes and offering to sell us heroin to wean us off the cocaine he sold us last year.

It’s a sham; it’s a racket; it’s the mob; it’s a dealer.

Just say no, y’all.

Just. Say. No.


Quit podcasts

“Quit Netflix.”

Matt Anderson has made that phrase something of a personal slogan. Among the many controversial things he’s written—he’s a controversialist, after all—it may be the most controversial. We like our shows in the age of Peak TV. And what he wants to do is take them away.

Okay, not exactly. But he thinks you—all of us—would be better off if we canceled our Netflix subscriptions. For the obvious reasons: Netflix is a candy bar, a sedative, a numbing device by which we pass the time by doing nothing especially worthwhile at all. It turns out that’s true not only of Netflix but of the so-called Golden Age of TV as a whole. It’s not that there are no good TV shows. It’s that, in almost every circumstance, there are dozens of better things you could be doing with your time. (You might be inclined to resist this claim, but in your heart you know it’s inarguable.) Add to that the fact that watching Prestige TV has become, in the last dozen years, a vaguely moralized social obligation for a certain subset of upper-middle-class white-collar professionals, and perhaps Matt is right that Christians not only may but should quit Netflix.

Point taken. Now allow me to swap out a different activity for your quitting consideration.

Podcasts.

Podcasts, as you well know, are the new Prestige TV. They’re ubiquitous. Not only does everyone have a favorite podcast. Everyone has a podcast, i.e., hosts one of their own. Or is starting one. Or is planning to. Or has an idea for one. They just need to get the equipment and line up some guests . . .

I live and work right next to a college campus. If you see someone walking on campus and that person is under 40 and alone, almost certainly she has AirPods in her ears, and chances are those AirPods are playing a podcast. (Maybe music. Maybe.) What is the podcast? Who knows? There are literally millions today, on every topic under the sun. “Have you listened to [X] podcast?” is the new “Have you seen the latest episode of [X]?” Just last month our pediatrician, knowing my job, asked me in the middle of a check-up for one of our kids whether I was listening to the Mars Hill podcast. Alas, I had to say no.

Now, this post is two-thirds troll, one-third sincere. I’ve been listening to podcasts for nearly 15 years. My first was Bill Simmons’ old pod for ESPN, whose first episode dropped while I was living in an apartment in Tomsk, Russia, early in the summer of 2007. I’ve been listening to Simmons off and on ever since. My podroll has increased as I’ve aged, and some of my typical listens are among the usual suspects: Ezra Klein, The Argument, Zach Lowe, The Watch, The Rewatchables, Know Your Enemy, LR&C, Tyler Cowen, Mere Fidelity, The Editors, a few others here and there. Washing the dishes or cleaning the house, it’s a pleasure to listen to these folks talk about sports and entertainment and news and politics and theology. It’s background noise, their voices become like those of friends, and occasionally you even learn things.

So unlike the scourge of Prestige TV—which is little more than a Trojan horse for reinforcing the single greatest collective habitual addiction besetting our society for nearly a century—podcasts aren’t All Bad, nor are their benefits mainly a function of rationalization and self-justification. I’m not worried about them in the same way.

Having said that. Let me suggest a few reasons why you ought to be a little more skeptical of them. So as to decrease your podcasting, and maybe even to quit it.

First, podcasts are filler. They’re aural wallpaper. They’re something to have on in the background while you do something else, something that requires your actual focus and attention. If that’s true, can they really be that substantial? Aren’t they, as often as not, little more than snack food for the ears?

Second, if you really want to listen to something (say, on a road trip or a long walk or while working out), why not listen to an audiobook? Ninety-nine times out of a hundred, the book will be worth your time—a four-course meal—in a way the Cheetos-posing-as-Michelin-starred-podcast will not. You could buy audiobooks or sign up for Audible, as a way to patronize authors, or you could use OverDrive or Libby. You have more or less every great work of literature and prose in the English language available to listen to for free; sometimes the work was composed with the express purpose of being heard read aloud rather than read on a page. Why not give it a try?

Third, those first two points suppose it’s not a problem, us walking around incessantly filling our ears with the voices of others, thus blocking out the noises—and silences—of the real world. Isn’t it a problem, though, this anxious need to fill even the smallest of snatches of time with meaningful noise, lest we be oppressed by the stillness and quietness (or, if you’re in a big city, the parading loudness) of life? Or perhaps we feel anxious to Do Something Productive with our time: If our attention can manage it, surely we Ought To Be Learning/Listening/Thinking? Nonsense. If to cook is to pray, so is every other daily “mindless” habitual activity that doesn’t demand the totality of our attention. Such activity may, in fact, permit our attention to be at ease, or to meditate on other matters, or to examine our days, or to wander as it pleases. Or, as the case may be, to choke on emotions we’d rather not address, indeed would rather numb and sedate and repress through unremitting distraction. Perhaps podcasts are a kind of noise pollution but on an individual level, self-chosen rather than imposed from without. We just have to refuse the urge to put the pods in and press play.

Fourth, podcasts almost invariably trade on the new, the latest, the exclusive-breaking-this-just-in-ness of our forgetful presentist age. In this they're analogous to Twitter: an infinite scroll, not for the eyes, but for the ears. Doubtless some people listen to podcasts while scrolling Twitter. (The horror!) The podcasts play on, world without end, one blending into another, until you forget where one ends and the next begins. Of all the podcasts you've ever listened to—and I've surely listened to thousands—how many discrete episodes can you point to, from memory, and say, "That one, right there, was significant, a meaningful and substantive and life-giving episode"? I'm not saying you couldn't pick out a few. I'm suggesting the batting average will be very low. Again, like remembering individual tweets. That's why podcasts are so disposable. The moment they lose their immediate relevance, they are cast aside into the dustbin of history. It's what makes writers who become podcasters so sad. Books and essays and columns stand the test of time. Pods do not. Bill Simmons, whom I referenced earlier, stopped writing five or six years ago. He likes to say his fingers stopped working. The truth is, a combination of market inefficiency and the convenience of podcasting meant that taking the time to draft and revise, and draft and revise again, under an editor's watchful eye, was slower and more demanding than hopping onto a podcast seconds after a game ended—and advertisers are willing to pay for the latter in a way they aren't for individual columns. So a writer who came onto the scene and made a name for himself because of his writing simply ceased to practice his craft. That's something to lament. Beyond that individual case, though, it's a parable for our time. And Simmons is someone with an audience in the millions. Yet his thousands upon thousands of podcasts from the last decade will never be listened to a second time, now or in the future. They might as well be lit on fire ten days after going online. The same goes for politics podcasts. They're talk radio, only rarefied and highbrow. But they have the same shelf life. And they partake of the selfsame contemporary obsession with The New that all people of good will, but certainly Christians (and Jews and Muslims), should repudiate in all its forms. Go read Rolf Dobelli's Stop Reading the News and you'll realize just how unimportant—in general and to your own daily life—"keeping up with the news" is. That goes for politics and sports and entertainment (but I repeat myself) as much as for anything else. Stop reading the news translates to stop listening to the news, which I will gloss here as stop listening to podcasts devoted to The New.

Fifth, podcasts have increasingly become niche and personalized, as so much in our digital economy has. You, the individual consumer-listener, pay the individual content-maker/podcaster, perhaps become a Patreon supporter, and thereby receive Exclusive Access and Personal Benefits and other just-for-you paid-for goods and services. Am I the only one who finds all of this ever so slightly weird, even gross? I don't begrudge anyone hustling to do their thing or to find an audience, especially outside of the decaying and desiccated institutions that act as gatekeepers today. But there's something icky about it nonetheless. In the same way that news-watchers can exist in entirely different moral and epistemic universes—one presided over and mediated by Fox News, the other by MSNBC—so podcast-listeners curate their own little private aural worlds with nary a glance at, or interruption from, another. It doesn't help that this ecosystem (or ecosphere?) overlaps substantially with the gig-cum-influencer economy, where fame and fortune are always one viral moment away, for anyone and everyone. We're all always already potentially (in)famous and affluent, if only the digital stars will align. We try to nudge those stars by flooding the market with our content, a sort of astrology or spell-conjuring with ones and zeroes, or in this case, "Thanks for following; while you're here, check out my SoundCloud."

In any case, those are at least some of the reasons for increasing your skepticism quotient in this matter. Rather than merely casting a more skeptical eye, though, consider whether you ought to go all the way. For there's a solution lying close to hand.

Quit podcasts.
