Jeanne Woodbury.

An archive of some of my old blog posts!

Effective Altruism and Longtermism.

In all of the coverage of Sam Bankman-Fried and Elon Musk, one topic that only recently caught my attention is the pair of ideas known as Effective Altruism (pioneered by William MacAskill) and Longtermism. By recently, I mean last night, after which I tweeted:

“Just now learning about the whole longtermism / effective altruism thing because of Musk and SBF and like — are they trying to Leto II this? How high on spice are they.”

A friend then sent me a link to a New Yorker profile on MacAskill and the E.A. movement from August of this year, which I read this morning and recommend to anyone looking for the full background story here. The school of thought encompasses a lot of disparate ideas, which is why it can be difficult to identify without that kind of context — everything from Bill Gates rationalizing all of his expenses around the cost of saving one person from malaria to Elon Musk’s “mission to extend the light of consciousness.”

Anyway, during and after reading it, I tweeted a lot. One particular line in the New Yorker piece stuck out early on:

“Among other back-of-the-envelope estimates, E.A.s believe that a life in the developing world can be saved for about four thousand dollars.”

It’s hard not to be reminded of the moral scrupulosity I experienced as an OCD kid when I see that kind of thinking. It’s like a kind of pathology, and basing your ethics on that kind of hard quantification of human life has some concerning downstream effects, because you have to focus on specific metrics like birth rates that end up creating troubling political alliances. Theoretically you’d be incentivized to refine those metrics, so I don’t want to paint the whole thing with too broad a brush.

But looking upstream, the whole thing relies on having some kind of shared system of account to normalize our moral debts to each other — which is money! And money isn’t neutral, it’s a political product. A global monetary system is the product of empire. So if you want to move away from that, maybe you can create some kind of DeFi system (and this is obviously popular with E.A. proponents), but it’s still the same game, and you’re still essentially doing the same thing, which is commodifying human life: ripping individuals out of their unique web of relationships and assigning them a number.

There are basically two threads in the movement, and it really is all very sci-fi. One is morality arbitrage from the imperial core to the outer rim, and the other is, like I initially called it, Leto II’s golden path from Dune. Both share the same technocratic arrogance: the belief that with enough intelligence and capital (or prescience and spice), you can solve every problem. It’s radically anti-democratic, but it’s also just massively naive.

Still, effective altruism and longtermism are interesting to me as ideas, because I do think it’s good to interrogate the decision-making frameworks that charitable donors and foundations use. I also think it’s fun and exciting to think about AI and life beyond Earth.

Those are important ideas! But actuarial charts and science fiction aren’t a good foundation for moral philosophy. There are easy parallels to draw with other pro-natalist doomsday groups, and with Elon Musk I think that’s fair, but in general I wouldn’t go so far as to make the equivalence.

In the New Yorker article, they describe a Peter Singer thought experiment that galvanized MacAskill into developing Effective Altruism:

“if you stroll by a child drowning in a shallow pond, presumably you don’t worry too much about soiling your clothes before you wade in to help; given the irrelevance of the child’s location—in an actual pond nearby or in a metaphorical pond six thousand miles away—devoting resources to superfluous goods is tantamount to allowing a child to drown for the sake of a dry cleaner’s bill”

You can see how it’s grippy, if you buy into the premises of the problem, but it’s like high school physics, where everything is in a nice Newtonian vacuum. It has no material bearing on reality. Human social responsibilities don’t exist on a Cartesian plane, and attempts to make them so have had disastrous effects. We’re relational animals. Society is made up of nontransferable debts and obligations that fade from consequence at the outer limits of our social networks. The allocation of capital is full of important questions that demand consideration, but closer to the heart of morality the questions are more direct — Do people have autonomy? Do people have the freedom to leave their homes and to move to new places? Do people have the freedom to form relationships? Do they have a fair share in decision-making processes? Too often, the answer is no, and what concerns me about the framework of Effective Altruism, and Longtermism in particular, is that these questions are secondary at best. It’s an ethics for the powerful. An ethics without humility.

William MacAskill’s latest book, which Elon Musk highlighted as a “close match” for his own philosophy, is titled What We Owe the Future. I can’t help but think of another tech visionary, with a very differently-oriented philosophy. When you visit The Steve Jobs Archive, still a very simple website, you’re greeted by an email he wrote to himself about the limits of his own impact and the debt he owes to the past and present:

I grow little of the food I eat, and of the little I do grow I did not breed or perfect the seeds.

I do not make any of my own clothing.

I speak a language I did not invent or refine.

I did not discover the mathematics I use.

I am protected by freedoms and laws I did not conceive of or legislate, and do not enforce or adjudicate.

I am moved by music I did not create myself.

When I needed medical attention, I was helpless to help myself survive.

I did not invent the transistor, the microprocessor, object oriented programming, or most of the technology I work with.

I love and admire my species, living and dead, and am totally dependent on them for my life and well being.

Four Things I Want To See From Apple by 2030.

IP Over Peer-to-Peer Mesh Networking.

With Sidewalk, Amazon has leveraged its base of networked devices to create a version of the kind of internet mesh I’d like to see. But if any company is well positioned to create a wide-scale mesh network for internet use on personal devices, it’s Apple, and they should do it if they care about privacy and protecting consumers from government censorship.

A Foldable M1 iPad Mini.

Really the specifics here aren’t the most important part, but I want an iPad that can fold down to the size of a large iPhone (roughly pocketable, or sized for a small purse) but also support something like Stage Manager on an external display. Modular computing is already part of the promise of the iPad, and laptops have been doing something like it for years, but this would take it to the next step.

Headphones for Video.

This was something Steve Jobs said in allusion to the forthcoming iPhone, but it encapsulates exactly what an AR/VR device needs to be: something lightweight and easy to bring with you, but fully immersive when in use. For headphones, that meant no compromise compared to live music or expensive speakers; in this case, it means no compromise compared to excellent screens or the fidelity of the real world. It needs to be as elegant as a pair of glasses. Key use cases would be as a substitute for screens and to overlay digital information on the real world.

Mobility as a Service.

The promise of autonomous driving is on-demand last-mile transportation, not owning a car without a steering wheel. Rather than selling a car to consumers, Apple should build ride-hailing for autonomous vehicles into Apple Maps, where it could integrate with electric bike networks and public transportation routing.

Buying IBM.

The way I see it, when we talk about money in politics, there are actually two separate crises — money as a corrupting influence (the “millionaires and billionaires”) and money as just a lazy fucking habit — and the second is worse.

Spend any amount of time with someone running any kind of electoral program, and they’ll probably ask you the same exact questions — can you donate money, can you send us your volunteers, can you make calls / send mail / knock doors / buy ads — and what this really boils down to is just throwing money at a numbers game. Sure, every now and then something will crop up with an incredible conversion rate (mass texting in 2020), but it invariably collapses to the same overall level of return as everything else. Truly great mass market sales channels are inherently unstable, whether they’re eroded by scammy exploits, tuned out by audiences, or eventually just priced according to their actual value. It doesn’t matter if a sales channel is good, or if your ad is good. Those things help, but ultimately, it’s about finding the right combination of conversion rates and prices across channels to cover the right demographics, and throwing enough money at the problem to make the numbers work. When record voter turnout looks like 60%, and a midterm election can see rates as low as 30%, this strategy starts to make sense. Shifting an unwinnable district from 90-10 to 80-20 might seem like a waste of resources, but if it shifts a statewide race by .01%, that might be all you need. The really seasoned operatives, with the most insight into the numbers, know that there’s always more water to be wrung from the stone. It’ll just take a few more million dollars. Don’t have that on you? Maybe you’re just not really committed. Fuck you.

This is almost the entirety of campaign work; sure, there’s a real vision somewhere, but the message most voters will actually hear will be focus-grouped and A/B tested to death by the time it shows up in a preroll ad on YouTube. And I don’t mean to say there aren’t real differences between candidates (they’re astronomical!) or even reasons to be enthusiastic, but the lesson to draw from sub-50% voter turnout isn’t that people don’t care, and just need to be sold harder. It’s that no one has given them a real reason to care yet. Issues consistently poll higher than politicians. If you can’t sell your product without running sensationalized ads or sending 20 million unsolicited texts, maybe your product is actually shit. And if it isn’t, it’s clear that you don’t really believe that.

The classic thing to say is, “no one ever got fired for buying IBM.” And when someone in a nice suit shows you the numbers, it’s hard to argue, even when you know there ought to be a better way. Because if you stick to your gut, and you fail, you’ll have nothing to defend yourself with. If the IBM purchase isn’t cutting it, well, you can always spend a little more money. It’s about fear, and shame, and from them, the death of creativity. No one in politics is the carpenter polishing the back of the chest of drawers.

I can think of two politicians charting a slightly different path. But Donald Trump, poster-in-chief and the king of earned media, isn’t selling you anything original; he’s just the IBM sales rep who will do a line of cocaine with you in the boardroom. And Bernie Sanders, for everything he’s ushered in with the innovation of small-dollar-donor-led campaigns, might as well be selling you an IBM PC compatible. It’s still just money.

And that’s the thing. It’s that second problem, the lazy habit of throwing money at the numbers, that makes the first problem possible. You can’t buy something unless it’s for sale.

Context Creation.

A week or so ago, I used the phrase “context creation” in a parenthetical and since then have been telling everyone I know that I’m obsessed with it as a concept. The obvious contrast to draw is against “content creation,” and when I searched the phrase online, I found one example of that comparison, made by the excellent cartoonist Lucy Bellwood, but otherwise very little use of the phrase whatsoever. I think that should change.

Before I get into that, let me give some examples to help illustrate what I mean by context creation. I’ve divided these into online and offline examples, mostly because it was much easier to think of online examples and I wanted to make sure I didn’t represent it as a purely online phenomenon.

Online Examples

  1. Know Your Meme is never the first website you'll visit when you open your browser, but if you're ever baffled by an inscrutable tweet or inexplicable GIF, it’ll probably be the first website you visit after the first website you visit. It’s a wiki for memes: each page documents the origin and evolution of a given meme and explains its uses and variations, often linking to or categorizing it with related memes. What makes Know Your Meme such a useful example of context creation is that it exists just for people seeking context — it’s the whole point.

  2. Wikipedia, from a user’s perspective, is something different. People don’t really go to Wikipedia looking for context; we use it to find things out and learn new information. And yet despite seeming like an endless source of knowledge, what we all have probably had drilled into our heads is that Wikipedia is not a source, and this is by design. The “No Original Research” principle is central to the approach Wikipedia editors take to their work. No one is bringing new knowledge to Wikipedia; instead, the work is to contextualize knowledge — linking and categorizing, sourcing and editing. Somewhere between Know Your Meme and Wikipedia are fan wikis like Wookieepedia, where you’re as likely to turn for information (“just how many Star Wars movies are there anyway?”) as for context for something you saw in a movie (“what is Darth Maul doing with this crime syndicate 22 years after being cut in half by Obi-Wan Kenobi?”).

  3. FOAF and the semantic web. Wikipedia thrives as a center of context creation on the web because it’s perfectly suited to hypertext as a medium. What the semantic web tried to do, with experiments like the Friend of a Friend (FOAF) protocol, was to extend that model to more types of information (see the sketch after this list). It’s not something I’m an expert on, and it’s not something I’m going to try to write a post-mortem for, but it’s a notable failed attempt at context creation that serves well as a foil to my next example.

  4. Facebook. Where FOAF relied on internet users creating and formatting social graph data to host on their own websites, Facebook and earlier social media sites gave users an easy way to get online and made form-filling something that could be fun. In its early days (and even by the time I joined about 5 years later), using Facebook was about maintaining your profile and building your friends list. Status updates were actual status updates, and we wrote them in that context, with our names at the top flowing into the text of the update — “Jeanne Woodbury: is eating lunch.” To interact with someone, you might post on their wall, and using the site involved visiting other users’ profiles to a far greater degree than it does today. The introduction of the News Feed didn’t immediately disrupt this, but from the start it meant controversial decisions had to be made about which updates to feature in the feed. Status updates became more like tweets or blog posts, and as interaction increasingly shifted to the news feed, allowing pages to publish to your feed in an effort to compete with Twitter only accelerated that transition. Answering the question of how to handle the sheer volume of content flowing through each user’s feed eventually meant leaving the chronological timeline behind. EdgeRank gave users the false sense that their feeds were showing them all of the updates they should be seeing, even as the algorithm behind the feed meant, by design, that some connections might not be displayed at all. Eventually what users learned was that if you wanted your friends to see your updates, they needed to be engaging enough to rank. This was the full arc of Facebook’s evolution from context creation to content creation, and after the company had effectively monopolized social graph data, it was an epochal shift.
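To make the FOAF idea in item 3 a little more concrete, here is a minimal sketch in Python of what “hosting your own social graph data” meant in practice. The names and URIs are invented, and I’m using the rdflib library purely for illustration; FOAF itself is just a shared vocabulary for this kind of data, not tied to any particular tool.

# A hypothetical FOAF-style description of one person and who they know.
# Everything here is made up; rdflib is used only to show the shape of the data.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import FOAF, RDF

g = Graph()

alice = URIRef("https://example.org/alice#me")  # a personal URI you host yourself
bob = URIRef("https://example.net/bob#me")      # a friend's URI, hosted on their own site

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice Example")))
g.add((alice, FOAF.knows, bob))                 # the social graph is just linked statements

# Serialize to Turtle, a plain-text file you could publish on your own website
# and that any other site or crawler could read and link against.
print(g.serialize(format="turtle"))

The point isn’t the syntax; it’s that the relationships live in files people control rather than in one company’s database, which is exactly the model Facebook foreclosed.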

Some Quick Offline Examples

  1. Hosting a dinner party or book club. Just by putting everyone in the same space, you’re creating context. All of the guests reading the same book creates context for the group. Making introductions as a host is another form of context creation.

  2. Broadcast television, when there were only a handful of channels, was a large scale exercise in context creation (by giving tens of millions of people something to talk about, on a synchronized schedule).

  3. I can’t turn this into yet another essay about money, but when the government issues money, it’s creating context for transactions and exchanges that otherwise might lack a shared reference point.

When I first used the phrase “context creation,” I was griping about what I see as an annoyingly narrow focus on suggestion algorithms in tech criticism and policy making. Certainly there’s a lot to untangle there, but it’s frustrating to see so much energy directed into questions of how to reform those algorithms when it’s not obvious to me that they’re essential to discovery or social interaction on the web at all.

The bifurcated experience of Twitter, where some users (like me) read a reverse-chronological timeline and many others use an algorithmically ordered and curated feed, leads to a very real degree of annoyance among users in the first group when users in the second group voice the complaint, as they often do, of “why am I seeing this?” Our chronological-purist response is simple — you control who you follow! — but the question reveals the stark lack of context many users have for the content they interact with online. For longtime users of the platform, who rely on manual discovery and eschew the algorithmic timeline, the majority of tweets they’ll read in a day are rich with context: they know which users interact with which other users, they are familiar with the kind of jokes the users they follow make or are likely to retweet, and they’re able to see or at least intuit the shape of the conversation happening on their timeline. This increasingly seems to be anomalous.
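To put the split in concrete terms, here is a toy sketch in Python of the two timelines: the same handful of posts ordered by recency versus ordered by a predicted-engagement score. The posts, names, and numbers are all invented, and the score is just a stand-in for whatever model a platform might actually use.

# A toy comparison of a reverse-chronological feed and a ranked feed.
# All authors, posts, and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int          # newer posts have larger timestamps
    predicted_score: float  # stand-in for a platform's engagement prediction

posts = [
    Post("close_friend", "is eating lunch", 1000, 0.2),
    Post("viral_account", "engagement bait", 400, 0.9),
    Post("acquaintance", "quiet life update", 900, 0.1),
]

chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)
ranked = sorted(posts, key=lambda p: p.predicted_score, reverse=True)

# The chronological feed surfaces the friend's update first; the ranked feed
# leads with the engagement bait, and the quiet update may never be seen at all.
print([p.author for p in chronological])
print([p.author for p in ranked])

In the first ordering, context comes from knowing who you follow and when they posted; in the second, the ordering itself has to be explained, which is where “why am I seeing this?” comes from.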

Even Twitter itself is a relatively minor player among the major social networks, as an ever growing share of attention is captured by TikTok, evidently the preeminent content recommendation engine on the internet, one where a social graph is not even a factor. Instagram, which grew as an image sharing service on the Twitter social graph, is now Meta’s main bet against TikTok, chasing its feature set with Reels and waffling on just how much to deprioritize its once-central feed, at times showing only one or two recent posts by accounts a user actually chose to follow before flowing into algorithmic recommendations. This of course very closely mirrors Facebook’s own development as a product from context creation to content creation, discussed above.

I don’t think this is some kind of apocalyptic trend in social media — at the same time that every major platform seems intent on turning into personalized shopping channels, smaller messaging platforms like Discord are rising up to fill the gap. Individual Discord servers are created with context built in, whether as the backroom chat space for a subreddit or as an organizing hub for a social movement, and within a server, mods can create topical channels for more focused conversations. Communication is impossible without context. The more social networks do to pivot from context creation to content creation, the more impossible it becomes for real conversations to happen there, and the more motivated users will become to create their own spaces for conversation somewhere else. But communication is more than just conversation, and what private and semi-private chatrooms like those on Discord lack is discoverability. Context at a larger scale is key to building and structuring relationships between people, and if the primary ways we have to use the internet consist of services that show us exactly what we want, based on what we already like, and closed networks where we interact only with people we already know, or already know we share an opinion with, we lack that kind of context most of all.

Building new experiences on the web, or new social movements, or even new political realities is going to require a new focus on context creation. This much is clear to me. What I’m seeing right now instead is, in essence, fiddling with the knobs on the recommendation machine while Rome burns.

Web True Believer.

The history of the internet, and particularly of the web, is a major interest of mine, and with a new book coming out from author Ben Tarnoff, there have been some fun interviews and articles about not only how the internet and the web have developed, but how they could have developed differently, and still could. I love Darius Kazemi's idea of federated social networks maintained by local library systems — something I've advocated for in the past is a Wikipedia-inspired approach to context creation on networks like Twitter, and that's a fitting task for librarians — but what frustrates me in Tarnoff's hypothetical vision of a better internet is that it's still algorithmically driven, only in this case the users decide the algorithm. Or I suppose the government does, and if people complain about First Amendment rights now when they don't get enough likes, I can't imagine how a city government would handle the actual First Amendment issues of government moderation of the internet. It seems about as well conceived as Elon Musk's idea of open-sourcing the Twitter algorithm. Structural and technological changes to algorithms don't solve the possibly unsolvable problem of moderation at scale, and I think Tarnoff understands this, but then why fixate on algorithms at all? The algorithmic feed is still relatively new, and I still exclusively use Twitter's chronological feed. That Mastodon — essentially a more complicated and less interesting Twitter experience — remains the primary example of a different path for the social web seems proof positive to me that there's a fundamental unwillingness to reimagine what social media can be outside of griping about the content moderation decisions or user interfaces of Facebook and Twitter (and this extends to Congress). Clearly Tarnoff understands this and is grappling with it from a position of real understanding of the historical, technological, and sociopolitical architecture of the internet, so despite my own gripes I'm excited for the book. But before we get to those links, a great historical what-if about the semantic web.

Two-Bit History: Friend of a Friend: The Facebook That Could Have Been

“Which finally brings me back to FOAF. Much of the world seems to have forgotten about the FOAF standard, but FOAF was an attempt to build a decentralized and open social network before anyone had even heard of Facebook. If any decentralized social network ever had a chance of occupying the redoubt that Facebook now occupies before Facebook got there, it was FOAF. Given that a large fraction of humanity now has a Facebook account, and given that relatively few people know about FOAF, should we conclude that social networking, like subway travel, really does lend itself to centralization and natural monopoly? Or does the FOAF project demonstrate that decentralized social networking was a feasible alternative that never became popular for other reasons?”

Adi Robertson interviewing Ben Tarnoff: Why We Need a Public Internet and How to Get One

“So I see those spaces and those alternatives as really cool and inspiring and creative technical experiments. But technical experimentation, as we’ve learned, isn’t enough to generate a radically different arrangement. It’s important — but we need politics. We need public policy. We need social movements. We need all these other ingredients that we can’t get from a code base.”

Ben Tarnoff: The Internet Is Broken. How Do We Fix It?

“What would a day on the deprivatized internet look like? You wake up, grab coffee, and sit down at your computer. Your first stop is a social-media site run by your local library. The other users are your neighbors, your co-workers, or residents of your county. There’s a news report in your feed about a coming municipal election, published by a local public media center. In fact, much of the content that circulates on the site comes from public media sources.

“The site is a cooperative; you and the other users govern it collectively. You elect the board that designs the filtering algorithms and writes the content moderation policies that determine what you see in your feed. The board’s decisions are carried out by employees of the local library, who act as caretakers of the community, always on hand to help classify, curate and add context to information.”