My essay on Oliver Sacks and a “humanism of the abyss” is unpaywalled. I don’t know whether this one is any good, but it’s important to me.
the Oppenheimer Principle revisited
Eight years ago, I wrote about a dominant and pernicious ideology that features two components:
Component one: that we are living in an administrative regime built on technocratic rationality whose Prime Directive is, unlike the one in the Star Trek universe, one of empowerment rather than restraint. I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb.”
The topic of that essay was the prosthetic reconstruction of bodies and certain incoherent justifications thereof, so I went on: “We change bodies and restructure child-rearing practices not because all such phenomena are socially constructed but because we can — because it’s ‘technically sweet.’” Then:
My use of the word “we” in that last sentence leads to component two of the ideology under scrutiny here: Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.
In light of current debates about the development of AI – debates that have become more heated in the wake of an open letter pleading with AI researchers to pause their experiments and take some time to think about the implications – the power of the Oppenheimer Principle has become more evident than ever. And it’s important, I think, to understand what in this context is making it so powerful.
Before I go any further, let me note that the term Artificial Intelligence may cover a very broad range of endeavors. Here I am discussing a recently emergent wing of the overall AI enterprise, the wing devoted to imitating or counterfeiting actions that most human beings think of as distinctively human: conversation, image-making (through drawing, painting, or photography), and music-making.
I think what’s happening in the development of these counterfeits – and in the resistance to asking hard questions about them – is the Silicon Valley version of what the great economist Thorstein Veblen called “trained incapacity.” As Robert K. Merton explains in a famous essay on “Bureaucratic Structure and Personality,” Veblen’s phrase describes a phenomenon identified also by John Dewey – though Dewey called it “occupational psychosis” – and by Daniel Warnotte – though Warnotte called it “déformation professionnelle.” That this same phenomenon has been independently named by so many major social scientists suggests that it is a powerful and widespread one indeed.
Peggy Noonan recently wrote in the Wall Street Journal of the leaders of the major Silicon Valley companies,
I am sure that as individuals they have their own private ethical commitments, their own faiths perhaps. Surely as human beings they have consciences, but consciences have to be formed by something, shaped and made mature. It’s never been clear to me from their actions what shaped theirs. I have come to see them the past 40 years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.
I want to make a stronger argument: that the distinctive “occupational psychosis” of Silicon Valley is sociopathy – the kind of sociopathy embedded in the Oppenheimer Principle. The people in charge at Google and Meta and (outside Silicon Valley) Microsoft, and at the less well-known companies that are being used by the mega-companies, have been deformed by their profession in ways that prevent them from perceiving, acknowledging, and acting responsibly in relation to the consequences of their research. They have a trained incapacity to think morally. They are by virtue of their narrowly technical education and the strong incentives of their profession moral idiots.
The ignorance of the technocratic moral idiot is exemplified by Sam Altman of OpenAI – an increasingly typical Silicon Valley type, with a thin veneer of moral self-congratulation imperfectly obscuring a thick layer of obedience to perverse incentives. “If you’re making AI, it is potentially very good, potentially very terrible,” but “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe.” He can’t even imagine that “the way to get it right” might be not to do it at all. (See Scott Alexander on the Safe Uncertainty Fallacy: We have absolutely no idea what will result from this technological development, therefore everything will be fine.) The Oppenheimer Principle trumps all.
These people aren’t going to fix themselves. As Jonathan Haidt (among others) has often pointed out – e.g. here – the big social media companies know just how much damage their platforms are doing, especially to teenage girls, but they do not care. As Justin E. H. Smith has noted, social media platforms are “inhuman by design,” and some of the big companies are tearing off the fig leaf by dissolving their ethics teams. Deepfakes featuring Donald Trump or the Pope are totally cool, but Chairman Xi gets a free pass, because … well, just follow the money.
Decisions about these matters have to be taken out of the hands of avaricious, professionally deformed sociopaths. And that’s why lawsuits like this one matter.
Molly White: “I … don’t think that a company that creates harmful technology should be excused simply because they’re bad at it.”
Damon Krukowski: “Two years ago this month, I disconnected my recording studio from the internet entirely. This wasn’t an analog rebellion – I didn’t trash my studio computer and replace it with vintage tape machines. On the contrary, I did it to preserve the digital audio tools I have come to rely on. I wanted my tools to continue working the way I know.”
“Standing on the bare ground, — my head bathed by the blithe air, and uplifted into infinite spaces, — all mean egotism vanishes. I become a transparent eye-ball; I am nothing; I see all; the currents of the Universal Being circulate through me; I am part or particle of God.”
As my son says, Angus is like Michael Jordan: he never takes a play off. Relax for a moment and he will exploit your weakness. Leave a pocket on your backpack unzipped and he will extract what’s in it. Drop anything — anything — on the floor and he will instantly appear to eat it. He’s relentless.
Currently reading: Charles Ives: A Life with Music by Jan Swafford 📚
Finished reading: Murray Talks Music: Albert Murray on Jazz and Blues by Albert Murray. I read this over a period of months, pausing to listen to the music Murray talks about. It’s a feast. 📚
adjustments
As many of my readers will know, I am continually fiddling around with my online presence, to such a degree that I try my own patience. The one element that’s fixed is my newsletter, which (IMHO) has a clear identity and purpose. I always know when something I’ve come across will be a fit for the newsletter.
Deciding how to use my micro.blog page has been a bit more of a challenge, but in recent months I have settled on what strikes me as a good approach: It’s a kind of journal, with photos and links to what I’m reading and listening to. And that’s all.
Everything else goes here — but what should that “everything else” be? As I’ve been mulling this over, I’ve come to two conclusions:
- I share too much nasty stuff. I’ve become like those Geico raccoons: “This is terrible, you gotta try it.” No more of that. You can find plenty to alarm and disgust you elsewhere. I need to remember my own tagline for this blog. That doesn’t mean that I won’t write about unpleasant topics, but …
- Whether pleasant or unpleasant, the stuff I share — if it’s worth sharing at all — needs more commentary than I typically give it. So I’m going to try to post less often but in more detail. Maybe only a couple of posts per week, but I want them to be more like essays than offhand comments.
Let’s see how well I keep my resolutions!
UPDATE: A reader has rightly questioned my comment about “nasty stuff.” Not the best phrase for what I mean, which is “current events that call for critique or denunciation.” So many people are already in the critique-and-denunciation game, I don’t need to add to their number. (That said, my next major post will be, um, a critique and denunciation. Oh well.)
Listening to Thelonious Monk - Genius of Modern Music, Vol. 1 ♫
Listening to Complete Mozart Piano Trios ♫
Bernard of Clairvaux: “It is not necessary for you to cross the seas, nor to pierce the clouds, nor to climb mountains to meet your God. It is not a lengthy road that is set before you; you have only to enter into yourself to find him.”
Teens on screens: Life online for children and young adults revealed - Ofcom:
This year also saw the rise of ‘split-screening’. Split-screen social media posts allow children to watch more than one short-form video simultaneously, on a single screen, side-by-side or stacked on top of one another. This appears to be a progression of the ‘multi-screening’ behaviours seen in previous research waves, where children reported difficulties focusing on one screen-based activity at a time.
Distinctions needed here: There is a difference between genuinely watching “more than one short-form video simultaneously” and merely having more than one short-form video on one’s screen at a given time. I seriously doubt that it is possible for any human being to watch two videos at the same time; the best we can do, I suspect, is to switch rapidly between two videos, and a good deal of research indicates that we’re not good at doing even that. Every time our attention switches to one information source, we cease to attend to the other.
Finished reading: The Earthsea Quartet by Ursula K. Le Guin. What a joy to revisit these glorious books. 📚
This is a very basic point, but I find that it’s consistently under-discussed: to close achievement gaps like the racial achievement gap, not only must Black and Hispanic students learn more, white and Asian students must learn less than they do. Closing any gap has to entail the poorly-performing students not just learning but learning at a sufficiently faster pace than the high-performing students that the gap closes. This is not a minor point! American students of all races have been improving over time. But gaps have persisted because… students of all races have been improving over time. As long as white and Asian students learn as much as Black and Hispanic, the gap cannot close. This is so obvious it feels like it should go without saying, but the point is frequently obscured, for a couple of reasons. First, because “every kid can learn” is a more pleasing and simplistic narrative than “kids from disadvantaged subpopulations can not only learn but can learn sufficiently to close large gaps against competitors who are still learning more themselves.” Second, because the problem suggests a solution that is politically untenable, to put it mildly — to close gaps, we need to prevent the students who are ahead from learning at all.
I think there are a great many people on the so-called left who would be glad to accept that deal. Close the gap by any means necessary. There’s no necessary connection between wanting equality of outcomes and wanting better outcomes.
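The point in the quoted passage is pure arithmetic, and it can be made concrete with a toy sketch. The numbers below are made up for illustration, not real test data:

```python
def years_to_close(gap: float, trailing_gain: float, leading_gain: float):
    """Years until a score gap reaches zero, or None if it never closes.

    gap: current point gap between the two groups
    trailing_gain / leading_gain: average points gained per year
    """
    closure_rate = trailing_gain - leading_gain
    if closure_rate <= 0:
        # Both groups may be improving, but the gap is frozen (or growing).
        return None
    return gap / closure_rate

# Both groups gain 5 points a year: everyone learns, the gap never moves.
print(years_to_close(gap=30, trailing_gain=5, leading_gain=5))  # None

# Trailing group gains 7 vs. 5: the 30-point gap closes, but only after 15 years.
print(years_to_close(gap=30, trailing_gain=7, leading_gain=5))  # 15.0
```

The sketch shows why “every kid can learn” doesn’t settle the question: closure depends entirely on the *difference* between the two growth rates, not on whether either rate is positive.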
Our new baby dogwood is looking good.
my proposed law
“Any online platform and/or application that delivers content to users may deliver only content explicitly requested by said users.”
That’s it. No algorithms, no autoplay, no “You may also like,” no “Up next.” Only what human beings (AKA “consumers”) choose. Now you don’t have to ban TikTok, and you will reduce the power that Facebook, Twitter, and all the other social-media platforms have over the minds and emotions of their users. It will even reduce, though not eliminate, the ability of Spotify and other streaming platforms to ruin music.
(I’m sure many other people have made this suggestion.)
