I’m the Director of the Happier Lives Institute and Postdoctoral Research Fellow at Oxford’s Wellbeing Research Centre. I’m a philosopher by background and did my DPhil at Oxford, primarily under the supervision of Peter Singer and Hilary Greaves. I’ve previously worked for an MP and failed to start a start-up.
MichaelPlant
Measuring Good Better
Don’t just give well, give WELLBYs: HLI’s 2022 charity recommendation
A philosophical review of Open Philanthropy’s Cause Prioritisation Framework
Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies
Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being
The elephant in the bednet: the importance of philosophy when choosing between extending and improving lives
Cause profile: mental health
What is effective altruism? How could it be improved?
Announcing the launch of the Happier Lives Institute
Ending The War on Drugs—A New Cause For Effective Altruists?
Effective altruism’s odd attitude to mental health
Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty
Hello Jack, I’m honoured you’ve written a review of my review! Thanks also for giving me sight of this before you posted. I don’t think I can give a quick satisfactory reply to this, and I don’t plan to get into a long back and forth. So, I’ll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven’t carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]
First, the piece you’re referring to is a book review in an academic philosophy journal. I’m writing primarily for other philosophers who I can expect to have lots of background knowledge (which means I don’t need to provide it myself).
Second, book reviews are, by design, very short. You’re even discouraged from referencing things outside the text you’re reviewing. The word limit was 1,500 words (I think my review may even be shorter than your review of my review!), so the aim is just to give a brief overview and make a few comments.
Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism. My objective was to point this out and deliberately give the other side so that, once readers have read both, they are, hopefully, left with a balanced view. I didn’t seek to, and couldn’t possibly hope to, give a balanced argument that refutes longtermism in a few pages. I merely explain why, in my opinion, the case for it in the book is unconvincing. Hence, I’d have lots of sympathy with your comments if I’d written a full-length article, or a whole book, challenging longtermism.
Fourth, I’m not sure why you think I’ve misrepresented MacAskill (do you mean ‘misunderstood’?). In the part you quote, I am (I think?) making my own assessment, not stating MacAskill’s view at all. What’s more, I don’t believe MacAskill and I disagree about the importance of the intuition of neutrality for longtermism. I only observe that accepting that intuition would weaken the case—I do not claim there is no case for longtermism if you accept it. Specifically, you quote MacAskill saying:
[if you endorse the intuition of neutrality] you wouldn’t regard the absence of future generations in itself as a moral loss.
But the cause du jour of longtermism is preventing existential risks so that many happy future generations exist. If one accepts the intuition of neutrality, that would reduce or remove the good of doing so. Hence, it does present a severe challenge to longtermism in practice, especially if you want to claim, as MacAskill does, that longtermism changes the priorities.
Finally, on whether ‘many’ philosophers are sympathetic to person-affecting views. In my experience of floating around seminar rooms, it seems to be the view of a large minority of discussants (indeed, it seems far more popular than totalism). Further, it’s taken as a default, or starting position, which is why other philosophers have strenuously argued against it; there is little need to argue against views that no one holds! I don’t think we should assess philosophical truth ‘by the numbers’, i.e. by polling people, rather than by arguments, particularly when those polled aren’t familiar with the arguments. (If we took such an approach, utilitarianism would be conclusively ‘proved’ false.) That said, off the top of my head, philosophers who have written sympathetically about person-affecting views include Bader, Narveson (two classic articles here and here), Roberts (especially here, but she’s written on it a few times), Frick (here and in his thesis), Heyd, Boonin, and Temkin (here and probably elsewhere). There are not ‘many’ philosophers in the world, and population ethics is a small field, so this is a non-trivial number of authors! For an overview of the non-identity problem in particular, see the SEP.
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was
This seems to be a false equivalence. There’s a big difference between asking “did this writer, who wrote a bit about ethics and whom this person read, influence this person?” and asking “did this philosophy and social movement, which focuses on ethics and which this person explicitly said inspired them, influence this person?”
I agree with you that the question
Who’s at fault for FTX’s wrongdoing?
has the answer
FTX
But the question
Who else is at fault for FTX’s wrongdoing?
is nevertheless sensible and cannot have the answer FTX.
[Written in a personal capacity, etc. This is the first of two comments: second comment here]
Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I’ll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how centralised it should be.
Given your description of how EA works, I don’t understand how you reached the conclusion that it’s not that centralised. It seems very centralised—at least, for something portrayed as a social movement.
Why does it matter how ‘centralised’ EA is? I take it the implicit argument is that EA should be “not too centralised, not too decentralised”, so if it’s ‘very centralised’, that’s a problem and we should consider doing something. Let’s leave aside whether centralisation is a good thing and focus on the factual claim of how centralised EA is.
You say, in effect, “not that centralised”, but, from your description, EA seems highly centralised. 70% of all the money comes from one organisation. A second organisation controls the central structures. You say there are >20 ‘senior figures’ (in a movement of maybe 10,000 people) and point out that all of these work at one or the other organisation. You are often, apparently, mistaken for the leader of the movement. It’s not mentioned, but there are no democratic elements in EA; democracy has the effect of decentralising power.
If we think of centralisation just as a spectrum of ‘decision-making power’, as you define it above (how few people determine what happens to the whole), EA could hardly be more centralised! Ultimately, power seems the most important part of centralisation, as other things flow from it. On some vague centralisation scale, where 10/10 is “one person has all the power” and 1/10 is “power is evenly spread”, it’s … an 8/10? If one organisation, funded by two people, has 70% of the resources, that alone suggests a 7/10. (Obviously, putting things on scales is silly, but never mind that!)
Your argument that it’s not centralised seems to be that EA is not a single legal entity. But that seems like an argument only for the claim that it’s not entirely centralised, rather than for the claim that it’s not very centralised.
All this is relevant to the point you make about “who’s responsible for EA?”. You say no one’s in charge and, in footnote 3, give different definitions of responsibility. But the key distinction here, one you don’t draw, seems to be de jure vs de facto. I agree that, de jure, legally speaking, no one controls EA. Yet, de facto, if we think about where power in fact resides, it is concentrated in a very small group. If someone sets up an invite-only group called the ‘leaders’ forum’, it seems entirely reasonable for people to say “ah, you guys run the show”. Hence, the claim ‘no one is in charge’ doesn’t ring true for me. I don’t see how renaming it the ‘coordination forum’ changes this. Given that EA seems so clearly centralised, I can’t follow why you think it isn’t.
You cite the American Philosophical Association as a good example of “not too centralised”. Again, let’s not focus on whether centralisation is good, but think about how central the APA is to philosophy. The APA doesn’t control really any of the money going into philosophy. It runs some conferences and some journals. AFAICT, its leaders are elected by fee-paying members. As Jason points out, I wonder how centralised we’d think power in philosophy were if the APA controlled 70% of the grants and its conferences and journals were run by unelected officials. I think we’d say philosophy was very centralised. I think we’d also think this level of centralisation was not ideal.
Similarly, EA seems very centralised compared to other movements. If I think of the environmental or feminist movements—and maybe this is just my ignorance—I’m not aware of there being a majority source of funding, the conferences being run by a single entity, there being a single forum for discussion, etc. In those movements, it does seem that, de facto and de jure, no one is really in charge. As a hot take, I’d say they are each about 2-3/10 on my vague centralisation scale. Hence, EA doesn’t match my mental image of a social movement because it’s so centralised. If someone characterised EA as basically a single organisation with some community offshoots, I wouldn’t disagree.
I’ll turn to how centralised EA should be in my other comment.
Strawmen, steelmen, and mithrilmen: getting the principle of charity right
The Happier Lives Institute is funding constrained and needs you!
What is the main issue in EA governance then, in your view? It strikes me [I’m speaking in a personal capacity, etc.] that the challenge for EA is a combination of the fact that the resources are quite centralised and that trustees of charities are (as you say) not accountable to anyone. One by itself might be fine. Both together is tricky. I’m not sure where this fits in with your framework, sorry.
There’s one big funder (Open Philanthropy), many of the key organisations are really just one organisation wearing different hats (EVF), and these are accountable only to their trustees. What’s more, as Buck notes here, all the dramatis personae are quite friendly (“lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies”). Obviously, some people will be in favour of centralised, unaccountable decision-making—those who think it gets the right results—but it’s not the structure we expect to be conducive to good governance in general.
If power in effective altruism were decentralised, that is, there were lots of ‘buyers’ and ‘sellers’ in the ‘EA marketplace’, then you’d expect competitive pressure to improve governance: poorly run organisations will be wracked by the “gales of creative destruction” as donors go elsewhere.
If leaders in effective altruism were accountable, for instance, if EVF became a membership organisation and the board were elected by its (paying?) members, that would provide a different sort of check and balance. I don’t think it’s reasonable for individual donors, i.e. Dustin Moskovitz and Cari Tuna, or cause-specific organisations, to submit their money to the democratic will, but it seems more sensible for central organisations, those that are something like natural monopolies and ostensibly serve the whole community, to have democratic elements.
As it is, the governance structure across EA is, essentially, for its leaders to police themselves—and wait for media stories to break. Particularly in light of recent events, it’s unclear if this is the optimal approach. I am reminded of the following passage in Pratchett.
“Quis custodiet ipsos custodes? Your Grace.”
“I know that one,” said Vimes. “Who watches the watchmen? Me, Mr. Pessimal.”
“Ah, but who watches you, Your Grace?” said the inspector with a brief little smile.
“I do that, too. All the time,” said Vimes. “Believe me.”
- Terry Pratchett, Thud!
Can you say more about your plans to bring additional trustees on the boards?
I note that, at present, all of EV (USA)’s board are current or former members of Open Philanthropy: Nick Beckstead, Zachary Robinson and Nicole Ross are former staff, and Eli Rose is a current staff member. This seems far from ideal; I’d like the board to be more diverse and representative of the wider EA community. As it stands, this seems like a conflict-of-interest nightmare. Did you discuss why this might be a problem? Why did you conclude it wasn’t?
Others may disagree, but from my perspective, EV/CEA’s role is to act as a central hub for the effective altruism community and to balance the interests of different stakeholders. It’s difficult to see how it could do that effectively if all of its board members are current or former staff of the largest donor.