More EA

Nov 20, 2022 22:12


I wrote a post on Effective Altruism a few months ago, and a little while later I started thinking that I should write an update of sorts.  I'd been reading the EA forum daily, quietly following some of EA Twitter, and I felt that some of what I'd written came from a dated perspective.  I was aware that the movement had changed since 2015, but hadn't appreciated just how fundamental some of the change had been, and so some of my arguments were poorly directed.  Then FTX blew up, so let's talk about that instead.



1) How do I feel about the FTX saga?

I feel entertained.  Crypto crashes are funny!  Every few months there's some new hack, rug pull, or Ponzi collapse, and loads of people lose their tokens.  The whole edifice is backed by the hope that other people will think it has value in the future; there are no fundamentals anywhere; the whole scene is rife with scams and fraud; most existing non-speculation use cases are for crimes.  And now one of the biggest players is someone I almost know, and they lost billions of dollars of customer funds?  Inject it all into my veins.

A lot of the writing from EA people has had seriously intoned words of sorrow for the people who trusted SBF with their money, and I can only 10%-heartedly summon the same feelings.  You get into crypto, you risk getting scammed, no sympathy from me.  This attitude is not particularly virtuous of me, but few people in EA are virtue ethicists.

One aspect that tempers my enjoyment is that FTX advertised heavily.  Presumably this was at least partially successful, bringing in some less-crypto-y, more-normal types who otherwise wouldn't have got scammed in this way.  I feel a little bad for such people.

The other big disappointing aspect is that I liked some of what the FTX Future Fund was doing.  SBF is/was very much not in my EA tribe (my concerns of global poverty and animal welfare being, on that view, merely "emotionally driven"), but there is a reason why the reflexively PR-oriented EA writing on the subject mentions the Future Fund's support for pandemic prevention measures: it was good.  I mean, I can't vouch for the particular programmes they funded.  But supporting the development of new vaccine platforms, drug trials, diagnostic tools, etc. - I would have liked more of it.  If governments and their agencies aren't funding pandemic preparedness enough - and that seems to be a consensus - then it is good if private actors step in to try to fill the gaps.  A philanthropist focused on somewhat low-probability, highly deadly pandemics... losing that is a loss.

2) How would I feel if it was someone on the GiveWell side of EA committing lots of fraud?

Ohhhhhhh, that'd hurt way more.

3) How well did I "almost know" SBF?

I went scouring through the archives of the EA Facebook group last night, trying to find some old thread in which we'd both participated.  But I found nothing.  We must have been reading some of the same posts, but I didn't remember any of his comments.  A search of my Gmail archive turns up his name in a 2013 spreadsheet detailing 80k's operations, but SBF was just a line in the interns section.

But he did intern at 80k, and he was active on the Facebook group.  And that meant that in September 2021, when 80k sent me an email saying that "one of our readers is now the richest person in the world under 30!", I:

  • thought "Wow, the EA ideas from a decade ago are really bearing fruit", and
  • looked him up on Facebook and found that we have 16 mutuals.

So that is something.

4) Did SBF burn billions of dollars of his customers' wealth because of his risk-neutral utilitarian EA principles, thinking in expectation that the harm done would be outweighed by the greater good?

I can't say what was going on inside his head.  But what I reckon is, early advice from EA leaders pushed him towards earning to give.  So in some sense, the whole FTX saga is at least in part a consequence of early EA orthodoxy (in the absence of EA, maybe SBF does something else in life).  I think that the early 80k earning-to-give logic was pretty sound and has been unfairly maligned over the years.  But if someone wants to argue that giving many people a goal of accumulating vast wealth will lead to some of those people behaving dishonestly, fraudulently, etc., well, they would have a point.

Once SBF was in the for-profit world with a goal of earning money, his subsequent actions are most simply explained without referring back to utilitarianism.  A decent guess at the moment seems to be that FTX only started misappropriating large amounts of users' deposits when loans came due and they needed the cash to pay creditors; I think it most likely that at this time, the thinking was "We desperately need to do this to try to keep this company alive", rather than "We desperately need to do this to try to keep this company alive to make profits for the Future Fund".

There's a part of SBF's interview with Kelsey Piper in which he talks about ethics.  There are two common interpretations of SBF's words, whatever they happen to be worth: a) that the whole effective altruism thing was a PR front, or b) that he remains utilitarian to the core, and the front was him earlier saying that there are ethical lines that a utilitarian shouldn't cross.

My read of the interview is that interpretation a) is straightforwardly correct: he holds up CZ as being a winner, and says that that's what matters, ethics be damned.  But Kelsey's own interpretation is "honestly I'm pretty unsure/confused", so I suppose I should doubt my reading at least a little bit.

5) Should EA have promoted the Future Fund and accepted its money?  Should SBF have been promoted by EA?

These are the difficult questions.  The basic case here is:

  • SBF was a crypto billionaire with a base in a tax haven.
  • There were accusations made about unethical behaviour by SBF in the early months of Alameda Research, and apparently these were made known to people in EA leadership at the time.  (See comments to this forum post for details; I didn't know anything about this.)
  • EA had previously courted, with some success, another crypto billionaire, Ben Delo.  Delo was charged with various regulatory offences by American authorities in 2020, and doesn't seem to figure much in EA promotional material anymore.  (I wasn't following EA closely during this time, and on learning all of this in recent days, my reaction was "that name is vaguely familiar".  Even just within EA circles, I don't think he was anywhere near as high profile as SBF became, and his offences seem much milder than SBF's.)

On the other hand, money talks.  If you've known a billionaire since before they were a millionaire, and they're treated with enough respectability to be talking to politicians, and you remember when they were younger and were totally on board with your effective altruism movement, and you thought that millions of philanthropic dollars could be put to excellent use....  If I'd been in Will MacAskill's place, I'd have also taken a seat at the table to help direct some of that money to causes we both thought were important.

In the "is it OK to do harm if it's for the greater good?" debates, I think the relevant hypothetical in this saga is if FTX were running a slightly shady crypto platform.  Encouraging people to get into the speculation game, taking a cut of all the trades, pumping up worthless tokens and profiting from the losers who got in too late, promoting a system that goes through staggeringly large amounts of electricity.  Is that level of harm sufficient to refuse funding for pandemic prevention research?

I am utilitarian enough to say "no, that's OK, take the money for the greater good".  But people could reasonably disagree both on this question, and whether my hypothetical is actually the most relevant for judging the EA leadership's approach to SBF.

6) Do I have any opinions about what this means for non-profit governance in the EA space?

No, this is not my area, and I don't know if this would have helped any.  The most egregious governance failures were at FTX/Alameda, where apparently the CEO wrote a backdoor for himself to transfer billions of dollars without needing to tell anyone else in the company.  There's a forum post which situates the FTX failures as part of broader problems with governance in EA, and those may be real, but fundamentally I see FTX as separate.  I doubt that better accounting processes at the Centre for Effective Altruism would have inculcated in young SBF an attitude of using professional accounting software and not losing, and not losing track of, billions of dollars.

Still, I created this little section because in researching this post, I came across the 2019 discussions about the reduction in GiveWell's board from eight members to five.  Two of the outgoing board members went on the record to say that they disagreed with this move.

Look, I don't even know what a board does.  I don't have any objections to what GiveWell's done before or since 2019.  But it's interesting to me - given that people talk about governance as important, maybe there is wisdom in having more diverse boards, to reduce the probability of things going off the rails.

7) How vulnerable is EA to this kind of scandal?

Power and wealth in our society are unequally distributed.  In trying to win influence, there is therefore an unfortunate logic in targeting the more affluent, the more elite.  If you're trying to generate donations, it is meaningfully useful to convince someone earning $60,000 to donate $6,000 per year instead of $600 - it's a ten-fold improvement!  But a single donor donating $120,000 per year (say they earn $240k and donate 50%) is twenty times better still.
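
To make that arithmetic concrete, here's a minimal sketch in Python; the incomes and donation amounts are just the made-up figures from the paragraph above, not data about real donors.

    # Hypothetical donors, using the made-up figures from the paragraph above.
    modest_donation = 600          # $600/year
    committed_donation = 6_000     # 10% of a $60,000 salary
    large_donation = 120_000       # 50% of a $240,000 salary

    print(committed_donation / modest_donation)   # 10.0 -> a ten-fold improvement
    print(large_donation / committed_donation)    # 20.0 -> twenty times better still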

It makes sense to try to persuade a population of students from top universities, since that is a group where disproportionately many future high-income earners are to be found, along with existing high-income earners.  And so we had(/have?) 80,000 Hours more or less explicitly writing for elite students, anecdotes containing eye-popping salaries for new graduates, and so forth.

But the logic of going after the elite doesn't stop at my example of a $120,000-per-year donor.  A billionaire would be many times better again!  This is basically the dynamic mentioned in my earlier post, which Alyssa Vance described in 2015.  And if you want to make a quick billion these days, doing something dodgy in crypto might be the best option.

If there continue to be billionaire donors in EA, then they will be providing most of the funds, and EA's reputation will be linked to them.

8) What was I going to write about before the FTX collapse?

In my years of only occasionally following EA, and doing so from a distance, a recurring theme was that I'd read a post on Slate Star Codex/Astral Codex Ten that exhorted readers to follow Giving What We Can and donate 10% of their income to highly effective charities.

This was pretty much what I considered to be the/a central part of the EA movement, and is reflected in my grumbling in point 12) of my previous post, in which I say that short-termist giving is still a plurality within EA.

But the vibe I get from the forum these days - I'll qualify this a bit later on - is that donating to charity just isn't a big part of EA for the highly engaged members of the community.  Someone wrote (I've lost the reference) that EA used to tell you where to give, but now it's where you go to get funded.

Jeff Kaufman (April 2022):

In thinking about what it means to lead a good life, people often struggle with the question of how much is enough: how much does our morality demand of us? People have given a wide range of answers to this question, but effective altruism has historically used "giving 10%". Yes, it's better if you donate a larger fraction, switch to a job where you can earn more, or put your career to use directly, but if you're giving 10% to effective charity you're doing your share, you've met the bar to consider yourself an EA, and we're happy to have you on board. I say "historically", because it feels like this is changing; I think EAs would generally still agree with my paragraph above, but while in 2014 it would have been uncontroversial now I think some would disagree and others would have to think for a while.

Even before the FTX cash started getting splashed around, this was a trend.  Kerry Vaughan (November 2021):

If I imagine being someone who is new-ish to EA, who wants to do good in the world and is considering making donations my plan for impact, I imagine that I really have two questions here:

1. Is donating an effective way to do good in the world given the amount of money committed to EA causes?

2. Will other people in the EA community like and respect me if I focus on donating money?

I think question 2) understandably matters to people, but it's a bit uncouth to say it out loud (which is why I'm trying to state it explicitly).

In the earliest days of EA, the answer to 2) was "yeah, definitely, especially if you're thoughtful about where you donate." Over time, I think the honest answer shifted to "not really, they'll tell you to do direct work." I don't know what the answer is currently, but reading between the lines of the article I'd guess that it's probably close to "not really" than "yeah definitely."

Giving What We Can, which I considered the beating heart of the EA movement, was apparently left to wither for years without any full-time staff, judging by this forum post announcing that they would be doing things again in 2022.*

*That said, GWWC has continued to accumulate members at a decent clip, and perhaps the yearly doublings were not sustainable.  Member counts in each January, using the first available snapshot from the Wayback Machine:
2013   276
2014   407
2015   791
2016  1566
2017  2458
2018  3335
2019  3839
2020  4504
2021  5556
2022  7727
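
As a rough check on the "yearly doublings were not sustainable" claim, here's a small Python sketch that computes the year-on-year growth factors from the snapshot counts above.

    # GWWC member counts, from the January Wayback Machine snapshots listed above.
    counts = {2013: 276, 2014: 407, 2015: 791, 2016: 1566, 2017: 2458,
              2018: 3335, 2019: 3839, 2020: 4504, 2021: 5556, 2022: 7727}

    years = sorted(counts)
    for prev, curr in zip(years, years[1:]):
        print(f"{prev}->{curr}: {counts[curr] / counts[prev]:.2f}x")
    # Growth peaks at roughly 2x per year around 2014-2016, then drifts down
    # to about 1.1-1.4x per year from 2018 onwards.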

I feel a little resentful - which is not virtuous - about this shift in attitude away from donating among highly engaged EAs.  Most of my resentment comes from reading online debates in which longtermists (I use the word in its loose sense) use GiveWell-directed donations as a shield against criticism of EA.

Person 1: I'm an EA, and I work on trying to stop artificial intelligence from killing everyone.
Person 2: That's stupid, you should focus on all the suffering people experience in our world today.
Person 1: The EA community gives millions of dollars to global poverty!  It is the most popular cause area in EA!

This is irritating on its own - people like me being invoked to defend people I disagree with - but all the more so when the vibe of the forum is that donating doesn't matter much, and direct work is vastly more important.

But I promised that I would qualify these remarks.  There are some people who work full-time in an EA or EA-aligned organisation on AI, and donate 10% of their income to short-termist charities.  EA presents a seemingly bizarre constellation of cause areas, and every now and then you run into a vegan working to prevent an AI takeover who donates to malaria charities.

However unimportant donating 10% may be to many people who attend EA Global conferences, GiveWell and its recommended charities are still part of the fabric of the community.  Posts on global poverty interventions still get upvoted in the forum.  So I don't feel like the movement has completely left me behind, even if I feel a little on the outer.

9) Regardless of how I feel, is it reasonable for the focus and esteem to fall so heavily on direct work?

Here's a toy model.  There are people doing direct work at a non-profit.  Someone needs to pay their salary.  At its simplest level, ten regular people each donating 10% of their income could between them fund the salary of one non-profit worker.  Though, since the non-profit worker is probably taking a lower salary than for-profit workers, perhaps we can reduce the number to something like six donors per direct worker.

In this toy model, the direct worker is for sure doing more good than any of the individual donors paying their salary.  But the ratio is not overwhelming, and it feels like a real group effort that everyone is part of.

It is not clear to me exactly how this toy model breaks down in reality.  Here are some considerations:

  • Many careers recommended by 80k are not in charities, but instead in government, universities, or the private sector.  Small donors are not relevant for these sectors.
  • The toy model implicitly assumes (?) that we're at an equilibrium.  If a crypto billionaire enters the scene and puts millions of extra dollars into the system, then the first priority is making sure that there are effective ways to use that money, so direct work becomes relatively more important.
  • For years, there were regular anecdotes saying that getting an "EA job" was really hard, with many, many rejections.
  • I suspect that the turn to longtermism is behind some of the shift; indeed it would make sense to me if this was the primary cause.  The longtermists tend to see their work as having vastly higher impact (in expectation) than work in global poverty alleviation.  While there's still plenty of ability to absorb more funds for cash transfers, there's a relatively small number of people working to prevent the AI apocalypse, and as a result, extra workers are seen as vastly more important than traditional donations.

Whatever the precise reasoning or intuition, the situation earlier this year was that some people considered one EA worker-year as roughly equivalent in impact to a $1mn or even $3mn donation.  Instead of 6 donors equalling one direct worker, it might be 300 or something.  (Perhaps the numbers will be dialled back a bit post-FTX.)
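
Here's a minimal Python sketch of that toy model; the $36,000 non-profit salary is just the figure implied by the six-donor ratio, and the $1mn-$3mn worker-year values are the ones mentioned above, so none of these numbers are claims about real organisations.

    # Toy model: donors giving 10% of their income fund direct workers' salaries.
    donor_income = 60_000
    donation = 0.10 * donor_income        # $6,000/year per donor
    nonprofit_salary = 36_000             # assumed lower than for-profit pay

    print(nonprofit_salary / donation)    # 6.0 -> "six donors per direct worker"

    # Valuing a worker-year at $1mn-$3mn instead makes the implied ratio balloon:
    for worker_year_value in (1_000_000, 3_000_000):
        print(worker_year_value / donation)   # ~167 and 500 donors per worker-year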

Within short-termist causes, such large ratios don't make a lot of sense to me.  The head of the CEA has written that he thinks CEA material should be about 60% longtermist.  When I read about people doing "EA jobs", it's rarely, say, being a cog in the system for the Malaria Consortium (although the 80k job board, at the time of writing, does list a Malaria Consortium role near the top of its page).  Explicitly longtermist work is disproportionately EA, as compared to global poverty work, which has a pre-existing group of people working in it, and which EA supports mostly through donations.

Regardless, I can only reflect on the reasons that I started donating to GiveWell-recommended charities in the first place, all those years ago.  If statistical lives can be saved at $5000 each, then I think that is worth doing.

10) How popular is wild animal suffering research?

Not that popular.  I said in my previous post that I was looking forward to Animal Charity Evaluators' update of its giving metrics page with 2021 donations.  The numbers are a little bit hard to interpret, and a single large donor could easily skew the results.  Of dollars donated via the ACE page, The Humane League was about ten times more popular than the Wild Animal Initiative, and about 20 times more popular in ACE-influenced donations made direct to the charities.  The Good Food Institute was about five times more popular than WAI.

(A curiosity is that the Albert Schweitzer Foundation received about $88k in donations through ACE, and just $22k in ACE-influenced direct donations.  All the other top charities had ratios in the other direction, i.e. much more direct-donated than donated through ACE.  If the numbers are correct, then WAI was at least more popular than ASF, despite the latter being 4 times more popular on the ACE page.)

11) What's the deal with Leverage Research?

I don't know who my target audience is for this (final) section.  Maybe it is just my future self.

I first encountered Leverage when they were presented as an EA-ish organisation in EA's early years.  A small number of people reported donating to them in the 2014 EA survey. They were utterly opaque.  They wanted to solve the world's problems by...?  I don't know.  I recall their website not saying anything much at the time, and they've scrubbed it from the Wayback Machine.  "Sketchy" was an adjective associated with them.

Still, it turns out that there was some information available on the public record, in old Less Wrong comments.  Their plan was described in an astonishingly complicated flowchart, starting with an investigation to discover a good philosophical method, then making its way through a prototype theory of psychology, development of sociology theory, development of an AI oracle, forming political organisations to win elections, creating or maintaining global coordination with regard to important issues, ensuring the impossibility of harmful governments, and ending up at an optimal world.  I'm only scratching the surface of the document, which is completely wild.

A simpler story, which I've read in more recent posts, is that the/an idea was to use an understanding of psychology to unlock vastly more productivity from people.  Instead of there being one Elon Musk, they could imagine many thousands of them; all that was needed was for people to get their psychology sorted.

Anyway, I would have said that they had no business being under the EA umbrella, but somehow they were.  Leverage is casually mentioned by MacAskill in his history of the term 'effective altruism', and they ran the first EA summits.  The founder had apparently wanted to work at the Singularity Institute, or something.  There were plenty of personal connections.  I suppose if the early movement included the people wanting to create a Singularity, then it was not so differently outlandish to have a group who wanted to solve psychology and optimise the world.

Nevertheless, most other people in EA seriously disliked Leverage, they were basically excised from the movement, and I don't think I heard much about them, especially as I wasn't following EA news closely.  Apparently a long-running feud was maintained; there was some drama caused by what was variously described as "doxxing" or "doing a LinkedIn search", none of it particularly interesting to me.

In late 2021, a former Leverage employee published a long post describing her experience there; the typical reaction to the post was, "Wow, Leverage was a cult."

1. 2-6hr long group debugging sessions in which we as a sub-faction (Alignment Group) would attempt to articulate a “demon” which had infiltrated our psyches from one of the rival groups, its nature and effects, and get it out of our systems using debugging tools.

2. People in my group commenting on a rival group having done “magic” that weekend and clearly having “powered up,” and saying we needed to now go debug the effects of being around that powered-up group, which they said was attacking them or otherwise affecting their ability to trust their own perception

3. Accusations being thrown around about people leaving “objects” (similar to demons) in other people, and people trying to sort out where the bad “object” originated and if it had been left intentionally/malevolently or by accident/subconsciously.

(There's a lot more; it's a long post.)

Leverage people recommend that we read a different perspective on that era, apparently more balanced.  My read of that (very long) post is that Leverage basically functioned as a cult for a small handful of its employees, and for everyone else "cult-like" is a more reasonable description.  It was, at the very least, extremely weird: a former academic recruiting young people to perform psych experiments on one another while all living in the same big house (at least for several years), with the group eventually collapsing in some kind of prolonged trauma.

A former CEO of the CEA is now COO at Leverage; another prominent former CEA employee is also now at Leverage.  I'm instinctively distrustful of anyone who would want to rehabilitate the Leverage brand, and this sentiment is an occasional undercurrent to some of the FTX discourse, in which a couple of Leverage people have been approvingly cited both on Twitter and on the EA forum.

I've even quoted a Leverage employee myself in this post.  We all contain multitudes.