Effective Altruism and Longtermism.
In all of the coverage of Sam Bankman-Fried and Elon Musk, one topic that only recently caught my attention is the pair of ideas known as Effective Altruism (pioneered by William MacAskill) and Longtermism. By recently, I mean last night, after which I tweeted:
“Just now learning about the whole longtermism / effective altruism thing because of Musk and SBF and like — are they trying to Leto II this? How high on spice are they.”
A friend then sent me a link to a New Yorker profile on MacAskill and the E.A. movement from August of this year, which I read this morning and recommend to anyone looking for the full background story here. The school of thought encompasses a lot of disparate ideas, which is why it can be difficult to identify without that kind of context — everything from Bill Gates rationalizing all of his expenses around the cost of saving one person from malaria to Elon Musk’s “mission to extend the light of consciousness.”
Anyway, during and after reading it, I tweeted a lot. One particular line in the New Yorker piece stuck out early on:
“Among other back-of-the-envelope estimates, E.A.s believe that a life in the developing world can be saved for about four thousand dollars.”
It’s hard not to be reminded of the moral scrupulosity I experienced as an OCD kid when I see that kind of thinking. It reads like a pathology, and basing your ethics on that kind of hard quantification of human life has some concerning downstream effects, because you have to focus on specific metrics, like birth rates, that end up creating troubling political alliances. Theoretically you’d be incentivized to refine those metrics, so I don’t want to paint the whole thing with too broad a brush.
But looking upstream, the whole thing relies on having some kind of shared system of account to normalize our moral debts to each other, which is money! And money isn’t neutral; it’s a political product. A global monetary system is the product of empire. So if you want to move away from that, maybe you create some kind of DeFi system (an idea that’s obviously popular with E.A. proponents), but you’re still playing the same game: commodifying human life, ripping individuals out of their unique web of relationships, and assigning them a number.
There are basically two threads in the movement, and it really is all very sci-fi. One is morality arbitrage from the imperial core to the outer rim, and the other is, as I initially called it, Leto II’s golden path from Dune. Both share the same technocratic arrogance: the belief that with enough intelligence and capital (or prescience and spice), you can solve every problem. It’s radically anti-democratic, but it’s also just massively naive.
Still, Effective Altruism and Longtermism are interesting to me as ideas, because I do think it’s good to interrogate the decision-making frameworks that charitable donors and foundations use. I also think it’s fun and exciting to think about AI and life beyond Earth.
Those are important ideas! But actuarial charts and science fiction aren’t a good foundation for moral philosophy. There are easy parallels to draw with other pro-natalist doomsday groups, and with Elon Musk I think that’s fair, but in general I wouldn’t go so far as to draw a full equivalence.
The New Yorker article describes a Peter Singer thought experiment that galvanized MacAskill into developing Effective Altruism:
“if you stroll by a child drowning in a shallow pond, presumably you don’t worry too much about soiling your clothes before you wade in to help; given the irrelevance of the child’s location—in an actual pond nearby or in a metaphorical pond six thousand miles away—devoting resources to superfluous goods is tantamount to allowing a child to drown for the sake of a dry cleaner’s bill”
You can see how it’s gripping, if you buy into the premises of the problem, but it’s like high school physics, where everything sits in a nice Newtonian vacuum. It has no material bearing on reality. Human social responsibilities don’t exist on a Cartesian plane, and attempts to force them onto one have had disastrous effects. We’re relational animals. Society is made up of nontransferable debts and obligations that fade from consequence at the outer limits of our social networks. The allocation of capital is full of important questions that demand consideration, but closer to the heart of morality the questions are more direct: Do people have autonomy? Do people have the freedom to leave their homes and to move to new places? Do people have the freedom to form relationships? Do they have a fair share in decision-making processes? Too often, the answer is no, and what concerns me about the framework of Effective Altruism, and Longtermism in particular, is that these questions are secondary at best. It’s an ethics for the powerful. An ethics without humility.
William MacAskill’s latest book, which Elon Musk highlighted as a “close match” for his own philosophy, is titled What We Owe the Future. I can’t help but think of another tech visionary with a very differently oriented philosophy. When you visit The Steve Jobs Archive, still a very simple website, you’re greeted by an email he wrote to himself about the limits of his own impact and the debt he owes to the past and present:
I grow little of the food I eat, and of the little I do grow I did not breed or perfect the seeds.
I do not make any of my own clothing.
I speak a language I did not invent or refine.
I did not discover the mathematics I use.
I am protected by freedoms and laws I did not conceive of or legislate, and do not enforce or adjudicate.
I am moved by music I did not create myself.
When I needed medical attention, I was helpless to help myself survive.
I did not invent the transistor, the microprocessor, object oriented programming, or most of the technology I work with.
I love and admire my species, living and dead, and am totally dependent on them for my life and well being.