I’m the Director of the Happier Lives Institute and Postdoctoral Research Fellow at Oxford’s Wellbeing Research Centre. I’m a philosopher by background and did my DPhil at Oxford, primarily under the supervision of Peter Singer and Hilary Greaves. I’ve previously worked for an MP and failed to start a start-up.
Hello Jack, I’m honoured you’ve written a review of my review! Thanks also for giving me sight of this before you posted. I don’t think I can give a quick satisfactory reply to this, and I don’t plan to get into a long back and forth. So, I’ll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven’t carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]
First, the piece you’re referring to is a book review in an academic philosophy journal. I’m writing primarily for other philosophers who I can expect to have lots of background knowledge (which means I don’t need to provide it myself).
Second, book reviews are, by design, very short. You’re even discouraged from referencing things outside the text you’re reviewing. The word limit was 1,500 words (I think my review may even be shorter than your review of my review!), so the aim is just to give a brief overview and make a few comments.
Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism. My objective was to point this out and deliberately give the other side so that, once readers have read both, they are, hopefully, left with a balanced view. I didn’t seek to, and couldn’t possibly hope to, give a balanced argument that refutes longtermism in a few pages. I merely explain why, in my opinion, the case for it in the book is unconvincing. Hence, I’d have lots of sympathy with your comments if I’d written a full-length article, or a whole book, challenging longtermism.
Fourth, I’m not sure why you think I’ve misrepresented MacAskill (do you mean ‘misunderstood’?). In the part you quote, I am (I think?) making my own assessment, not stating MacAskill’s view at all. What’s more, I don’t believe MacAskill and I disagree about the importance of the intuition of neutrality for longtermism. I only observe that accepting that intuition would weaken the case—I do not claim there is no case for longtermism if you accept it. Specifically, you quote MacAskill saying:
[if you endorse the intuition of neutrality] you wouldn’t regard the absence of future generations in itself as a moral loss.
But the cause du jour of longtermism is preventing existential risks so that many happy future generations exist. If one accepts the intuition of neutrality, that would reduce, or even remove, the good of doing that. Hence, it does present a severe challenge to longtermism in practice, especially if you want to claim, as MacAskill does, that longtermism changes the priorities.
Finally, on whether ‘many’ philosophers are sympathetic to person-affecting views. In my experience of floating around seminar rooms, it seems to be the view of a large minority of discussants (indeed, it seems far more popular than totalism). Further, it’s taken as a default, or starting position, which is why other philosophers have strenuously argued against it; there is little need to argue against views that no one holds! I don’t think we should assess philosophical truth ‘by the numbers’, i.e. by polling people, rather than by arguments, particularly when those polled aren’t familiar with the arguments. (If we took such an approach, utilitarianism would be conclusively ‘proved’ false.) That said, off the top of my head, philosophers who have written sympathetically about person-affecting views include Bader, Narveson (two classic articles here and here), Roberts (especially here, but she’s written on it a few times), Frick (here and in his thesis), Heyd, Boonin, and Temkin (here and probably elsewhere). There are not ‘many’ philosophers in the world, and population ethics is a small field, so this is a non-trivial number of authors! For an overview of the non-identity problem in particular, see the SEP.
This seems to be a false equivalence. There’s a big difference between asking “did this writer, who wrote a bit about ethics and whom this person happened to read, influence them?” and asking “did this philosophy and social movement, which focuses on ethics and which this person explicitly said inspired them, influence them?”
I agree with you that the question
Who’s at fault for FTX’s wrongdoing?
has the answer
FTX
But the question
Who else is at fault for FTX’s wrongdoing?
is nevertheless sensible and cannot have the answer FTX.
[Written in a personal capacity, etc. This is the first of two comments: second comment here]
Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I’ll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how centralised it should be.
Given your description of how EA works, I don’t understand how you reached the conclusion that it’s not that centralised. It seems very centralised—at least, for something portrayed as a social movement.
Why does it matter to determine how ‘centralised’ EA is? I take it the implicit argument is that EA should be “not too centralised, not too decentralised”, so if it’s ‘very centralised’, that’s a problem and we should consider doing something. Let’s try to leave aside whether centralisation is a good thing and focus on the factual claim of how centralised EA is.
You say, in effect, “not that centralised”, but, from your description, EA seems highly centralised. 70% of all the money comes from one organisation. A second organisation controls the central structures. You say there are >20 ‘senior figures’ (in a movement of maybe 10,000 people) and point out that all of these work at one or the other organisation. You are often, apparently, mistaken for the leader of the movement. You don’t mention it, but there are also no democratic elements in EA, and democracy has the effect of decentralising power.
If we think of centralisation just on a spectrum of ‘decision-making power’, as you define it above (how few people determine what happens to the whole), EA could hardly be more centralised! Ultimately, power seems the most important part of centralisation, as other things flow from it. On some vague centralisation scale, where 10/10 centralisation is “one person has all the power” and 1/10 is “power is evenly spread”, it’s … an 8/10? If one organisation, funded by two people, has 70% of the resources, that alone suggests a 7/10. (Obviously, putting things on scales is silly, but never mind that!)
Your argument that it’s not centralised seems to be that EA is not a single legal entity. But that seems like an argument only for the claim that it’s not entirely centralised, not for the claim that it’s not very centralised.
All this is relevant to the point you make about “who’s responsible for EA?”. You say no one’s in charge and, in footnote 3, give different definitions of responsibility. But the key distinction here, one you don’t draw, seems to be de jure vs de facto. I agree that, de jure, legally speaking, no one controls EA. Yet, de facto, if we think about where power in fact resides, it is concentrated in a very small group. If someone sets up an invite-only group called the ‘leaders’ forum’, it seems totally reasonable for people to say “ah, you guys run the show”. Hence, the claim ‘no one is in charge’ doesn’t ring true for me. I don’t see how renaming this the ‘coordination forum’ changes that. Given that EA seems so clearly centralised, I can’t follow why you think it isn’t.
You cite the American Philosophical Association as a good example of “not too centralised”. Again, let’s not focus on whether centralisation is good, but think about how central the APA is to philosophy. The APA doesn’t really control any of the money going into philosophy. It runs some conferences and some journals. AFAICT, its leaders are elected by fee-paying members. As Jason points out, I wonder how centralised we’d think power in philosophy was if the APA controlled 70% of the grants and its conferences and journals were run by unelected officials. I think we’d say philosophy was very centralised. I think we’d also think this level of centralisation was not ideal.
Similarly, EA seems very centralised compared to other movements. If I think of the environmental or feminist movements (and maybe this is just my ignorance), I’m not aware of there being a majority source of funding, the conferences being run by a single entity, there being a single forum for discussion, etc. In those movements, it does seem that, de facto and de jure, no one is really in charge. As a hot take, I’d say they are each about 2-3/10 on my vague centralisation scale. Hence, EA doesn’t match my mental image of a social movement because it’s so centralised. If someone characterised EA as basically a single organisation with some community offshoots, I wouldn’t disagree.
I’ll turn to how centralised EA should be in my other comment.
What is the main issue in EA governance then, in your view? It strikes me [I’m speaking in a personal capacity, etc.] that the challenge for EA is the combination of the fact that the resources are quite centralised and the fact that trustees of charities are (as you say) not accountable to anyone. Either by itself might be fine. Both together is tricky. I’m not sure where this fits in with your framework, sorry.
There’s one big funder (Open Philanthropy), many of the key organisations are really just one organisation wearing different hats (EVF), and these are accountable only to their trustees. What’s more, as Buck notes here, all the dramatis personae are quite friendly (“lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies”). Obviously, some people will be in favour of centralised, unaccountable decision-making—those who think it gets the right results—but it’s not the structure we expect to be conducive to good governance in general.
If power in effective altruism were decentralised, that is, there were lots of ‘buyers’ and ‘sellers’ in the ‘EA marketplace’, then you’d expect competitive pressure to improve governance: poorly run organisations will be wracked by the “gales of creative destruction” as donors go elsewhere.
If leaders in effective altruism were accountable, for instance, if EVF became a membership organisation and the board were elected by its (paying?) members, that would provide a different sort of check and balance. I don’t think it’s reasonable for individual donors, i.e. Dustin Moskovitz and Cari Tuna, or cause-specific organisations, to submit their money to the democratic will, but it seems more sensible for central organisations, those that are something like natural monopolies and ostensibly serve the whole community, to have democratic elements.
As it is, the governance structure across EA is, essentially, for its leaders to police themselves—and wait for media stories to break. Particularly in light of recent events, it’s unclear if this is the optimal approach. I am reminded of the following passage in Pratchett.
“Quis custodiet ipsos custodes? Your Grace.”
“I know that one,” said Vimes. “Who watches the watchmen? Me, Mr. Pessimal.”
“Ah, but who watches you, Your Grace?” said the inspector with a brief little smile.
“I do that, too. All the time,” said Vimes. “Believe me.”
-Terry Pratchett, Thud!
FWIW, I’ve had similar thoughts: I used to think being veg*n was, in some sense, really morally important and not doing it would be really letting the side down. But, after doing it for a few years, I felt much less certain about it.*
To press the point, though, what seems odd about the “the other things I do are so much more impactful, why should I even worry about this?” line is that it has an awkward whisper of self-importance and would license all sorts of other behaviours.
To draw this out with a slightly silly and imperfect analogy, imagine we hear a story about some medieval king who sometimes, but not always, kicked people and animals that got in his way. When asked by some brave lackey, “M’lord, why do you kick them? Surely there is no need”, the king replies (imagine a booming voice for best effect), “I am very important and do much good work. Given this, whether I kick or do not kick is truly a rounding error, a trifle, on my efforts, and I do not propose to pay attention to these consequences”.
I think that we might grant that what the king says is true—kicking things is genuinely a very small negative compared to the large positive of his other actions. However, we might still be bothered by two things. First, it’s rude for him to remind us how much more important he is than us, even if it’s true. Second, the kicking still seems unnecessary, even if it’s only small. It’s not like it helps him be more impactful with the rest of his life. So perhaps our intuitions on the meat-eating case turn on whether we think it’s a serious vs a trivial sacrifice to stop doing it.
*In fact, I’ve been experimenting with a ‘welfaretarian’ diet (where you only eat animals that have had happy lives) recently and might write something up on that at some point.
I’ve only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke’s specific concerns about subjective wellbeing separately in a reply to his comment.
TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.
I have raised the “SWB and mental health might really matter” concerns in meetings with GiveWell staff about once a year since 2015. Before 2021, my experience was that they more or less dismissed my concerns, even though they didn’t seem familiar with the relevant literature. When I asked what their specific doubts were, these were vague and seemed to change each time (“we’re not sure you can measure feelings”, “we’re worried about experimenter demand effects”, etc.). I’d typically point out that their concerns had already been addressed in the literature, but that still didn’t seem to make them more interested. (I don’t recall anyone ever mentioning ‘item response theory’, which Luke raises as his objection.) In the end, I got the impression that GiveWell staff thought I was a crank and were hoping I would just go away.
GiveWell’s public engagement has been almost non-existent. When HLI published, in August 2020, a document explaining how GiveWell could (re)estimate their own ‘moral weights’ using SWB, GiveWell didn’t comment on it (a Founders Pledge researcher did, however, provide detailed comments). The first and only time GiveWell has responded publicly about this was in December 2020, when they set out their concerns in relation to our cash transfer vs therapy meta-analyses; I’ve replied to those comments (many of which expressed quite non-specific doubts) but not yet received a follow-up.
The response I was hoping for—indeed, am still hoping for—was the one Will et al. gave above, namely, “We’re really interested in serious critiques. What do you think we’re getting wrong, why, and what difference would it make if you were right? Would you like us to fund you to work on this?” Obviously, you wouldn’t expect an organisation to engage with critiques that are practically unimportant and from non-credible sources. In this case, however, I was raising fundamental concerns that, if true, could substantially alter the priorities, both for GiveWell and for EA more broadly. And, for context, at the time I initially highlighted these points, I was doing a philosophy PhD supervised by Hilary Greaves and Peter Singer, and the measurement of wellbeing was a big part of my thesis.
There has been quite good engagement from other EAs and EA orgs, as Aaron Gertler notes above. I can add to those that, for instance, Founders Pledge have taken SWB on board in their internal decision-making and have since made recommendations in mental health. However, GiveWell’s lack of engagement has really made things difficult because EAs defer so much to GiveWell: a common question I get is “ah, but what does GiveWell think?” People assume that, because GiveWell didn’t take something seriously, that’s strong evidence they shouldn’t either. This frustration was compounded by the fact that there wasn’t a clear, public statement of what GiveWell’s concerns were, so I could neither try to address those concerns nor placate the worries of others by saying something like “GiveWell’s objection is X. We don’t share it because of Y”.
This is pure speculation on my part, but I wonder if GiveWell (and perhaps Open Phil too) developed an ‘ugh field’ around subjective wellbeing and mental health. They didn’t look into it initially because they were just too damn busy. But then, after a while, it became awkward to start engaging with because that would require admitting they should have done so years ago, so they just ignored it. I also suspect there’s been something of an information cascade where someone originally looked at all this (see my reply to Luke above), decided it wasn’t interesting, and then other staff members just took that on trust and didn’t revisit it—everyone knew an idea could be safely ignored even if they weren’t sure why.
Since 2021, however, things have been much better. In late 2020, as mentioned, HLI published a blog post showing how SWB could be used to (re)estimate GiveWell’s ‘moral weights’. I understand that some of GiveWell’s donors asked them for an opinion on this and that pushed them to engage with it. HLI had a productive conversation with GiveWell in February 2021 (see GiveWell’s notes) where, curiously, no specific objections to SWB were raised. GiveWell are currently working on a blog post responding to our moral weights piece and they kindly shared a draft with us in July asking for our feedback. They’ve told us they plan to publish reports on SWB and psychotherapy in the next 3-6 months.
Regarding Open Phil, it seemed pointless to engage unless GiveWell came on board, because Open Phil also defer strongly to GiveWell’s judgements, as Alex Berger has recently stated. However, we recently had some positive engagement from Alex on Twitter, and a member of his team contacted HLI for advice after reading our report and recommendations on global mental health. Hence, we are now starting to see some serious engagement, but it’s rather overdue and still less substantial than I’d want.
I quite like the idea of an EAG: Open, but presumably as a complement, rather than replacement, to the current networking-focused EAGlobal.
One thing that seems missing from the EA ecosystem is a single place where there are talks which convey new information to lots of interested, relevant people in one go, and those ideas can be discussed.
This used to happen at EAGlobal, but it doesn’t anymore because (for understandable reasons) the event is very networking-focused, so talks basically got canned. I find it odd there’s now so little public discussion at the EA community’s flagship event. (The only major communication happens at the opening and closing ceremonies, and is (always?) done by Will. Will is great, but it would be good to have a diversity of messages and messengers.)
There is more content at EAGxs, but only a fraction of people see those. I’ve realised I’m basically touring the world giving more-or-less the same talk, so most people hear it only once. In some ways, this is quite fun, but it’s also pretty inefficient. I’d prefer to give that talk once and then be able to move on to other topics.
The EA forum currently serves as the central place for discussion, but it’s not that widely used and stuff tends to disappear from view pretty fast. It certainly doesn’t do the same thing as TED-style big talks do for communicating important ideas.
This got me wondering: how much agreement is there between grantmakers (assuming they already share some broad philosophical assumptions)?
Because, if the top grants are much better than the marginal grants, and grantmakers would agree on what those are, then you could replace the ‘extremely busy’ grantmakers with less busy ones. The less busy ones would award approximately the same grants but be able to spend more time investigating marginal grants and giving feedback.
I’m concerned about the scenario where (nearly) all grantmakers are too busy to give feedback and applicants don’t improve their projects.
IMO good-faith, strong, fully written-up, readable, explicit critiques of longtermism are in short supply; indeed, I can’t think of any. The three you raise are good, but they are somewhat tentative and limited in scope. I think that stronger objections could be made.
FWIW, on the EA Facebook page, I raised three critiques of longtermism in response to Fin Moorhouse’s excellent recent article on the subject, but all my comments were very brief.
The first critique involves defending person-affecting views in population ethics and arguing that, when you look at the details, the assumptions underlying them are surprisingly hard to reject. My own thinking here is very influenced by Bader (2022), which I think is a philosophical masterclass, but is also very dense and doesn’t address longtermism directly. There are other papers arguing for person-affecting views, e.g. Narveson (1967) and Heyd (2012) but both are now a bit dated—particularly Narveson—in the sense they don’t respond to the more sophisticated challenges to their views that have since been raised in the literature. For the latest survey of the literature and those challenges—albeit not one sympathetic to person-affecting views—see Greaves (2017).
The second draws on a couple of suggestions made by Webb (2021) and Berger (2021) about cluelessness. Webb (2021) is a reasonably substantial EA Forum post about how we might worry that, the further in the future something happens, the smaller the expected value we should assign to it, which acts as an effective discount. However, Webb (2021) is pretty non-committal about how serious a challenge this is for longtermism and doesn’t frame it as one. Berger (2021) is an 80k podcast interview in which he suggests that longtermist interventions are either ‘narrow’ (e.g. AI safety) or ‘broad’ (e.g. ‘improving politics’), where the former are not robustly good and the latter are questionably better than existing ‘near-termist’ interventions such as cash transfers to the global poor. I wouldn’t describe this as a worked-out thesis, though, and Berger doesn’t state it very directly.
The third critique is that, a la Torres, longtermism might lead us towards totalitarianism. I don’t think this is a really serious objection, but I would like to see longtermists engage with it and say why they don’t believe it is.
I should probably disclose I’m currently in discussion with Forethought about a grant to write up some critiques of longtermism in order to fill some of this literature gap. Ideally, I’ll produce 2-3 articles within the next 18 months.
Hi Greg,
Thanks for this post, and for expressing your views on our work. Point by point:
I agree that StrongMinds’ own study had a surprisingly large effect size (1.72), which is why we never put much weight on it. Our assessment was based on a meta-analysis of psychotherapy studies in low-income countries, in line with the academic best practice of looking at the wider sweep of evidence rather than relying on a single study. You can see in Table 2 below, reproduced from our analysis of StrongMinds, that StrongMinds’ own studies are given relatively little weight in our assessment of the effect size, which we concluded was 0.82 based on the available data. Of course, we’ll update our analysis when new evidence appears, and we’re particularly interested in the Ozler RCT. However, we think it’s preferable to rely on the existing evidence to draw our conclusions, rather than on forecasts of as-yet unpublished work. We are preparing our psychotherapy meta-analysis for submission to academic peer review so it can be independently evaluated but, as you know, academia moves slowly.
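To make the weighting intuition concrete, here is a minimal sketch of inverse-variance pooling, the standard mechanic behind meta-analytic weights. The effect sizes and standard errors are invented for illustration; this is not our actual model or data:

```python
import numpy as np

# Toy fixed-effect meta-analysis: the pooled effect is an inverse-variance
# weighted average, so a single noisy outlier moves it only a little.
# Effect sizes (Cohen's d) and standard errors are invented, NOT the
# studies in our actual meta-analysis.
effects = np.array([0.60, 0.75, 0.90, 0.55, 1.72])  # last entry: an outlier
ses = np.array([0.10, 0.12, 0.15, 0.08, 0.30])      # larger SE = noisier study

weights = 1 / ses**2                                # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
outlier_share = weights[-1] / np.sum(weights)       # weight the outlier gets

print(f"pooled effect: {pooled:.2f}")                   # ~0.67, near the bulk
print(f"outlier's weight share: {outlier_share:.1%}")   # ~3%, given its large SE
```

The point is simply that a noisy outlying study contributes little to the pooled estimate, which is why relying on the wider evidence base matters.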
We are a young, small team with much to learn, and of course, we’ll make mistakes. But, I wouldn’t characterise these as ‘grave shortcomings’, so much as the typical, necessary, and important back and forth between researchers. A claims P, B disputes P, A replies to B, B replies to A, and so it goes on. Even excellent researchers overlook things: GiveWell notably awarded us a prize for our reanalysis of their deworming research. We’ve benefitted enormously from the comments we’ve got from others and it shows the value of having a range of perspectives and experts. Scientific progress is the result of productive disagreements.
I think it’s worth adding that SimonM’s critique of StrongMinds did not refer to our meta-analytic work, but focused on concerns about StrongMinds’ own study and analysis done outside HLI. As I noted in 1., we share the concerns about the earlier StrongMinds study, which is why we took the meta-analytic approach. Hence, I’m not sure SimonM’s analysis told us much, if anything, we hadn’t already incorporated. With hindsight, I think we should have communicated far more prominently how small a part StrongMinds’ own studies played in our analysis, and been quicker off the mark to reply to SimonM’s post (it came out during the Christmas holidays and I didn’t want to order the team back to their (virtual) desks). Naturally, if you aren’t convinced by our work, you will be sceptical of our recommendations.
You suggest we are engaged in motivated reasoning, setting out to prove what we already wanted to believe. This is a challenging accusation to disprove. The more charitable and, I think, the true explanation is that we had a hunch about something important being missed and set out to do further research. We do complex interdisciplinary work to discover the most cost-effective interventions for improving the world. We have done this in good faith, facing an entrenched and sceptical status quo, with no major institutional backing or funding. Naturally, we won’t convince everyone – we’re happy the EA research space is a broad church. Yet, it’s disheartening to see you treat us as acting in bad faith, especially given our fruitful interactions, and we hope that you will continue to engage with us as our work progresses.
Table 2.
Yeah, this is an excellent list. To me, the OP seems to miss the obvious point, which is that if you look at what the central EA individuals, organisations, and materials are promoting, you very quickly get the impression that, to misquote Henry Ford, “you can have any view you want, so long as it’s longtermism”. One’s mileage may vary, of course, as to whether one thinks this is a good result.
To add to the list, the 8-week EA Introductory Fellowship curriculum, the main entry point for students, i.e. the EAs of the future, has 5 sections on cause areas, of which 3 are on longtermism. As far as I can tell, there are no critiques of longtermism anywhere, even in the “what might we be missing?” week, which I found puzzling.
[Disclosure: when I saw the Fellowship curriculum about a year ago, I raised this issue with Aaron Gertler, who said it had been created without much/any input from non-longtermists, this was perhaps an oversight, and I would be welcome to make some suggestions. I meant to make some, but never prioritised it, in large part because it was unclear to me if any suggestions I made would get incorporated.]
[Writing from my hotel room at EAG at 5am because my body is on UK time and I can’t sleep. Hopefully my reasoning isn’t too wonky]
Hello Ozzie. Thanks very much for writing this. It brings lots of nuance. I agree this conversation is easier to have at an abstract level. I wanted to make a few points.
One early reviewer critiqued this post saying that they didn’t believe that discomfort was a problem
I’ve been struck by how often I’ve seen or heard people say something like this, i.e., that people do feel free to make critiques on important issues. For a community that prides itself on avoiding cognitive biases, this seems a real blind spot. Some people seem to infer, mistakenly, from the fact that they don’t feel uncomfortable making critiques, and that they see other people doing it, that no one feels awkward about this and everything important gets said. In fact, I strongly suspect there are unrecognised power dynamics at play. If you’re in a position of power, e.g. you control funding, and work with people who mostly agree with you, those people may feel psychologically safe enough to give you pushback. However, others—who may have more important disagreements with you—might not feel comfortable saying anything. This would falsely create the impressions both that people in general feel free to make critiques and that everyone agrees with you, leading to overconfidence.
Second, you ask the question of who is uncomfortable critiquing whom. This raises the further question: why? Again, I suspect this relates to power and interpersonal awkwardness. It’s much easier to object to global health and wellbeing interventions, because you can focus on the evidence. It’s less personal. But for longtermist work, it’s more about people and their ideas and how well they seem to be running a project. When you add in the small, interconnected funder ecosystem, the incentives to criticise longtermist work are pretty weak: there is little to gain, but potentially much to lose, from objecting, so you’d expect less criticism. I speak to lots of people who don’t find longtermism particularly plausible but conclude (I think rationally) that it’s not smart for them to say anything.
Third, as a personal note, I’ve found, and find, critiquing other bits of EA deeply uncomfortable. People might be surprised by this, because (1) I’ve done quite a bit of it and (2) I may give off the impression of being very confident and enjoying disagreement (I’m a 6ft5 male with a philosophy PhD) but (even?) I consistently find it really difficult and stressful. I do it because I think the issues are too important. But it’s often psychologically unpleasant. And it’s genuinely very difficult to do without annoying people, even when you really don’t want to (I don’t think I’ve been great at this in the past but hope I’m improving). Doing good better relies on us challenging our current approaches, which is why it’s so important to recognise how hard it is to make critiques and to think about what could be tweaked to improve this.
Thanks very much for these comments! Given that Alex—who I’ll refer to in the 3rd person from here—doesn’t want to engage in a written back and forth, I will respond to his main points in writing now and suggest he and I speak at some other time.
Alex’s main point seems to be that Open Philanthropy (OP) won’t engage in idle philosophising: they’re willing to get stuck into the philosophy, but only if it makes a difference. I understand that—I only care about decision-relevant philosophy too. Of course, sometimes the philosophy does really matter: the split of OP into the ‘longtermism’ and ‘global health and wellbeing’ pots is an indication of this.
My main reply is that Alex has been too quick to conclude that moral philosophy won’t matter for OP’s decision-making on global health and wellbeing. Let me (re)state a few points which show, I think, that it does matter and, as a consequence, OP should engage further.
As John Halstead has pointed out in another comment, the location of the neutral point could make a big difference and it’s not obvious where it is. If this was a settled question, I might agree with Alex’s take, but it’s not settled.
Relatedly, as I say in the post, switching between two different accounts of the badness of death (deprivationism and TRIA) would alter the value of life-extending relative to life-improving interventions by a factor of perhaps 5 or more (I gesture at the arithmetic in the sketch after these points).
Alex seems to object to hedonism, but I’m not advocating for hedonism (at least, not here). My main point is about adopting a ‘subjective wellbeing (SWB) worldview’, where you use the survey research on how people actually experience their lives to determine what does the most good. I’m not sure exactly what OP’s worldview is—that’s basically the point of the main post—but it seems to place little weight on people’s feelings (their ‘experienced utility’) and far more on what they do or would choose (their ‘decision utility’). But, as I argue above, these two can substantially come apart: we don’t always choose what makes us happiest. Indeed, we make predictable mistakes (see our report on affective forecasting for more on this).
Mental health is a problem that looks pretty serious on the SWB worldview but appears nowhere in the worldview that OP seems to favour. As noted, HLI finds therapy for depressed people in LICs is about 10x more cost-effective than cash transfers in LICs. That, to me, is sufficient to take the SWB worldview seriously. I don’t see what this necessarily has to do with animals.
Will the SWB lens reveal different priorities in other cases? Very probably (pain and loneliness look more important, economic growth less, etc.), but I can’t say for sure because attempts to apply this lens are so new. I had hoped OP’s response would be “oh, this seems to really matter, let’s investigate further” but it seems to be “we’re not totally convinced, so we’ll basically ignore this”.
Alex says “we don’t think that different measures of subjective wellbeing (hedonic and evaluative) neatly track different theories of welfare” but he doesn’t explain or defend that claim. (There are a few other places above where he states, but doesn’t argue for, his opinion, which makes it harder to have a constructive disagreement.)
On the total view, saving lives, and fertility, we seem to be disagreeing about one thing but agreeing about another. I said the total view would lead us to reduce the value of saving lives. Alex says it might actually cause us to increase the value of saving lives when we consider longer-run effects. Okay. In that case, it would seem we agree that taking a stand on population ethics might really matter, in which case I take it we ought to see where the argument goes (rather than ignore it in case it takes us somewhere we don’t like).
It seems that Alex’s conclusion that moral philosophy barely matters relies heavily on the reasoning in the spreadsheet linked to in footnote 50 of the technical update blog post. The footnote states “Our [OP’s] analysis tends to find that picking the wrong moral weight only means sacrificing 2-5% of the good we could do”. I discussed this above in footnote 3, but I expect it’s worth restating and elaborating on that here. The spreadsheet isn’t explained and it’s unclear what the justification is. I assume the “2-5%” thing is really a motte-and-bailey. To explain, one might think OP is making a very strong claim such as “whatever assumptions you make about morality makes almost no difference to what you ought to do”. Clearly, that claim is implausible. If OP does believe this, that would be an amazing conclusion about practical ethics and I would encourage them to explain it in full. However, it seems that OP is probably making a much weaker claim, such as “given some restrictions on what our moral views can be, we find it makes little difference which ones we pick”. This claim is plausible, but of course, the concern is that the choice of moral views has been unduly restricted. What the preceding bullet points demonstrate is that different moral assumptions (and/or ‘worldviews’) could substantially change our conclusions—it’s not just a 2-5% difference.
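To gesture at the arithmetic behind the deprivationism/TRIA point above, here is a hedged sketch with invented numbers (the remaining-years figure, wellbeing units, and connectedness value are mine for illustration, not OP’s or HLI’s):

```latex
% A sketch with invented numbers, not OP's or HLI's actual figures.
% Deprivationism: the badness of a death is the future wellbeing lost,
% e.g. 60 remaining years at 1 wellbeing-unit per year:
B_{\mathrm{dep}} = \sum_{t=1}^{60} w_t = 60
% TRIA: discount each year's loss by the psychological connectedness c(t)
% between the person at death and that future self; for an infant, the
% average connectedness might plausibly be around 0.2:
B_{\mathrm{TRIA}} = \sum_{t=1}^{60} c(t)\, w_t \approx 0.2 \times 60 = 12
% Switching accounts thus rescales the value of saving this life by
% 60/12 = 5 relative to a fixed life-improving benchmark, i.e. far more
% than a 2-5% difference.
```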
I understand, of course, that investigating—and, possibly, implementing—additional worldviews is, well, hassle. But Open Philanthropy is a multi-billion dollar foundation that’s publicly committed to worldview diversification and it looks like it would make a practical difference.
First of all, I’d like to say I’ve been excited about this topic for some time and have been following each of you, and your (excellent) work individually, so it’s a very pleasant surprise to have you all here!
Question: what is your thinking on how cost-effective, from a donor perspective, additional resources are if put towards psychedelics compared to other problems, e.g. the GiveWell-style health and development interventions?
Follow up: How valuable do you think additional detailed research on this would be (to you)?
This is primarily for Tim, seeing as he’s really putting his money where his mouth is!
Background: I run the Happier Lives Institute and I want us to take a look, in the near future, into funding psychedelics.* Psychedelics seem very promising, but it’s unclear exactly how promising.
One generic issue is that it’s hard to sensibly compare the cost-effectiveness of systemic interventions, e.g. psychedelics, to ‘atomic’ ones, e.g. handing out cash transfers to one person at a time, because you have to make so many assumptions about how funding one thing might impact an entire society. The best analysis currently is from Founders Pledge, who compared funding psychedelic research (specifically, Usona’s research into psilocybin as a treatment for depression) to funding psychotherapy for mental health (specifically, StrongMinds, which treats women for depression in Africa). This is probably the most straightforward comparison, as it’s in terms of depression in both cases, and it finds them about equally cost-effective. However, the Founders Pledge analysis is arguably too sceptical of psychedelics because, for instance, it only considers the impact the research would have in the US, rather than worldwide.
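To show how assumption-laden the systemic comparison is, here is a toy sketch; every number below is invented by me for illustration and this is not Founders Pledge’s model:

```python
# Toy comparison of an 'atomic' vs a 'systemic' cost-effectiveness model.
# All numbers are invented for illustration, not Founders Pledge's figures.

# Atomic intervention (e.g. delivering psychotherapy): a short assumption chain.
cost_per_treatment = 200        # $ per person treated (invented)
effect_per_treatment = 0.5      # wellbeing units per person (invented)
atomic_ce = effect_per_treatment / cost_per_treatment

# Systemic intervention (e.g. funding psilocybin research): every factor
# below is a guess about how one grant ripples through society.
p_research_succeeds = 0.4       # chance the trials succeed (invented)
p_counterfactual = 0.3          # chance it wouldn't have happened anyway (invented)
people_reached = 2_000_000      # eventual patients IF it succeeds; US only? (invented)
effect_per_patient = 0.6        # wellbeing units per patient (invented)
research_cost = 30_000_000      # $ to fund the research (invented)

systemic_ce = (p_research_succeeds * p_counterfactual
               * people_reached * effect_per_patient) / research_cost

print(f"atomic:   {atomic_ce:.2e} units/$")
print(f"systemic: {systemic_ce:.2e} units/$")
# The systemic estimate swings by orders of magnitude as you vary the
# guessed inputs (e.g. people_reached: US-only vs worldwide).
```

The design point: the atomic model has two inputs you can measure; the systemic model has five you mostly have to guess, which is why its conclusions are so sensitive to framing choices like “US-only vs worldwide”.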
A particular issue is that psychedelics now seem to be getting more and more attention, so one might wonder if all the best projects will get funded anyway, and donors seeking the biggest impact should go elsewhere.
*Or, rather, another look: I wrote a series of posts on this forum and gave a talk on it in 2017, but then dropped the topic because Founders Pledge picked it up.
I enjoyed reading this, but you don’t seem to seriously engage with the point you’re supposed to be arguing against, so much as instead focusing on poetically tugging your readers’ intuitions in a particular direction. I think this has its place but I thought I should provide the (dry) philosophical counterpoint nevertheless.
The essence of your post is to advocate for comparativism, the view that existence can be better for someone than non-existence. However, comparativism has problematic metaphysical commitments. I’m drawing heavily on unpublished work by Ralf Bader here.
The obvious (only?) way to understand the ‘personal betterness relation’ – being “better for” – is as a two-place relation that has lives (or ‘time slices’) as its ‘relata’ (the things being related). Hence, something can only be better for someone if they exist in both outcomes we’re comparing.
The last paragraph was quite jargony. Sorry. Here’s a more intuitive way of bringing out the same problem. Suppose I say “Joe is to the left of”. You might look at me blankly and say “okay. Joe is to the left of … what, exactly?” You would then point out, quite correctly, that it doesn’t make sense to say “Joe is to the left of” in the abstract. For the relationship of ‘being to the left of’ to obtain for an object, there have to be two things, they need to have a location, and we need to establish positionality such that one is on the left of the other thing. We run into the same problem if we say “world one (where Joe exists) is better for Joe than world two (where he doesn’t)”. The ‘better for Joe’ relation doesn’t hold unless Joe exists in both places. To be clear, I have no issue with saying “world one is impersonally better than world two” on the grounds the former contains more happiness. It just seems confused to say it’s ‘better for Joe’.
A more intuitive, but less analogous, way to press this sort of complaint is to say “blue is taller than green”. Clearly, blue and green can’t stand in the relation of one being taller than the other—neither has the property of height. It’s not just that they are equally tall: that would require them to have the property of height and to have the same quantity of it. Rather, neither has the property of height, hence we are not able to compare them with respect to their heights. Note, having a height of zero is not the same as not having the property of having a height, much as there is a difference between not having a bank account and having a bank account with nothing in it.
The challenge for the comparativist is to explain which properties ground the personal betterness relation. For a life to have evaluative properties—to be good/bad for the person—it has to have some non-evaluative properties, e.g. how happy/sad the person is. But a non-existent life does not have any non-evaluative properties to get the evaluative ones off the ground. There’s no way to compare existence to non-existence for someone; it is an attempt to compare something with nothing. Hence, it is not the case that existence is better, worse, or equally good as non-existence for someone; rather, existence and non-existence are incomparable in value for someone.
As Bader (very dryly) puts it: “Comparativism is thus not viable since there cannot be a betterness relation without relata, nor can there be goodness without good-making features”. This is a quote from this paper (https://homeweb.unifr.ch/BaderR/Pub/Person-affecting (R. Bader).pdf), where he mentions, but doesn’t develop, the points I made above.
Someone suggested I should mention a few of the EA critiques I’m personally working on. I’ve only skimmed the comment, so sorry if I’ve missed something relevant.
Three are critiques of longtermism (and prospectively with funding support from the Forethought Foundation).
One is based on defending person-affecting views. Here are some brief, questionably comprehensible notes for a talk I did at GPI a couple of weeks ago. Prose blog post and eventually an academic paper to follow.
Another is on tractability/cluelessness: can we foreseeably and significantly influence the long-term? No notes yet, but I sketch the idea in another EA forum comment.
A third is developing a theoretical justification for something like worldview diversification. If this were true, it would seem to follow that we should split resources rather than go ‘all-in’ on any one cause. In fairness, this isn’t an argument against being a longtermist; it’s an argument against being only a longtermist. No notes on this yet, either, but hopefully a blog post sketching it in <2 months.
I’ve also got a ‘red-team’ of Open Philanthropy’s cause prioritisation framework. That’s written and should appear within a month.
On top of these, the team at HLI and I are generally doing research which starts from the assumption that our cost-effectiveness analyses should directly measure the effects on people’s subjective wellbeing (aka happiness), and seeing how that could change our priorities. Last week, we did a webinar with StrongMinds where I set out our work finding that treating depression in Africa is about 10x better than providing cash transfers (recording). More work in this vein to come too...
I also share sympathy with some of the other ones OP flags.
[restating and elaborating on what I said on twitter]
Thanks very much for this update, Ben. The “EA has loads of money” meme has unfortunately led people to (incorrectly) assume that everything ‘within EA’ was fully funded. This made it harder to fundraise, particularly for small orgs, like mine, who do need new donors, because prospective donors assumed they weren’t necessary.
Of course, the meme had no impact on organisations that are already fully-funded—which is more or less only those orgs being funded by Open Philanthropy.
While I agree that net global welfare may be negative and declining, in light of the reasoning and evidence presented here, I think you could and should have claimed something like this: “net global welfare may be negative and declining, but it may also be positive and increasing, and really we have no idea which it is—any assessment of this type is enormously speculative and uncertain”.
As I read the post, the two expressions that popped into my head were “if it’s worth doing, it’s worth doing with made-up numbers” and “if you saw how the sausage is made …”.
The problem here is that all of the numbers for ‘animal welfare capacity’ and ‘welfare percentages’ are essentially—and unfortunately—made up. You cite Rethink Priorities for the former, and Charity Entrepreneurship for the latter, and express some scepticism, but then more or less take them at face value. You don’t explain how those people came up with the numbers or whether they should be trusted. I don’t think I am disparaging the good folk at either organisation (and I am certainly not trying to!) because, if you asked them about this, I think they would freely say “look, we don’t really know how to do this. We have intuitions about this, of course, but we’re not sure if there’s any good evidence-based way to come up with these numbers”;* indeed, that is, in effect, the conclusion Rethink Priorities stated in the write-up of their recent workshop (see my comment on that too). Hence, such numbers should not be taken with a mere pinch of salt, but with a bucketload.
You don’t account for uncertainty here (you use point estimates), and I appreciate that doing so is extra hassle, but I think the uncertainty here is the story. If you were to use upper and lower subjective bounds for, e.g., “how unhappy are chickens compared to how happy humans are?”, they would be very large. They must be very large because, as noted, we don’t even know what factual, objective evidence we would use to narrow them down, so we have nothing to constrain the bounds of what’s plausible. But given how large they would be, we’d end up with the conclusion that we really don’t know whether global welfare is negative or positive.
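To illustrate why wide bounds swamp the point estimates, here is a toy Monte Carlo sketch; the ranges are invented by me for illustration and are not the post’s figures:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Invented illustrative ranges, NOT the post's figures.
# Net human welfare, in arbitrary units (assumed positive):
human_welfare = rng.uniform(0.5, 2.0, N)
# Farmed-animal suffering relative to human welfare: subjective bounds
# spanning orders of magnitude, so sample the exponent (log-uniform).
animal_suffering = 10 ** rng.uniform(-2, 1, N)  # 0.01x to 10x

net = human_welfare - animal_suffering
print(f"P(net global welfare > 0) = {np.mean(net > 0):.0%}")
# With bounds this wide, the sign flips across draws: the honest
# conclusion is "we don't know", not a signed point estimate.
```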
* People are often tempted to say that we could look at objective measures, like neuron counts, for interspecies comparison. But this merely kicks the can down the road. How do we know what the relationship is between neuron counts and levels of pleasure and pain? We don’t. We have intuitions, yes, but what evidence could we point to to settle the question? I do not know.
The Procreative Asymmetry is very widely held, and much discussed, by philosophers who work on population ethics (and it seems very common in the general population too). If anything, it’s the default view, rather than a niche position (except among EA philosophers). If you do a quick search for it on philpapers.org, there’s quite a lot there.
You might think the Asymmetry is deeply mistaken, but describing it as a ‘niche position’ is much like calling non-consequentialism a ‘niche position’.
Can you say more about your plans to bring additional trustees onto the boards?
I note that, at present, all of EV (USA)’s board are current or former Open Philanthropy staff: Nick Beckstead, Zachary Robinson, and Nicole Ross are former staff; Eli Rose is a current staff member. This seems far from ideal; I’d like the board to be more diverse and representative of the wider EA community. As it stands, this seems like a conflict of interest nightmare. Did you discuss why this might be a problem? Why did you conclude it wasn’t?
Others may disagree, but in my view, EV/CEA’s role is to act as a central hub for the effective altruism community and to balance the interests of different stakeholders. It’s difficult to see how it could do that effectively if all of its board members are, or were, staff of the largest donor.