I’m the Director of the Happier Lives Institute and Postdoctoral Research Fellow at Oxford’s Wellbeing Research Centre. I’m a philosopher by background and did my DPhil at Oxford, primarily under the supervision of Peter Singer and Hilary Greaves. I’ve previously worked for an MP and failed to start a start-up.
MichaelPlant
Hello Jack. A quick reply: I’m not sure how well the arguments for improving global wellbeing being a sensible longterm priority will stack up. I suspect they won’t, on closer inspection, but it seems worth investigating at some point.
Hello Matt and thanks for your overall vote of confidence, including your comments below to Nathan.
Could you expand on what you said here?
I may also have been a little sus early (sorry Michael) on but HLI’s work has been extremely valuable
I’m curious to know why you were originally suspicious and what changed your mind. Sorry if you’ve already stated that below.
Hello Nathan. Thanks for the comment. I think the only key place where I would disagree with you is what you said here
If, as seems likely the forthcoming RCT downgrades SM a lot and the HLI team should have seen this coming, why didn’t they act?
As I said in response to Greg (to which I see you’ve replied) we use the conventional scientific approach of relying on the sweep of existing data—rather than on our predictions of what future evidence (from a single study) will show. Indeed, I’m not sure how easily these would come apart: I would base my predictions substantially on the existing data, which we’ve already gathered in our meta-analysis (obviously, it’s a matter of debate as to how to synthesise data from different sources and opinions will differ). I don’t have any reason to assume the new RCT will show effects substantially lower than the existing evidence, but perhaps others are aware of something we’re not.
Hello Richard. Glad to hear this! I’ve just sent you HLI’s bank details, which should allow you to pay without card fees (I was inclined to share them directly here, but was worried that would be unwise). I don’t have an answer to your second question, I’m afraid.
Hello Jack. I think people can and will have different conceptions of what the criteria to be on a/the ‘top charity’ list are, including what counts as sufficient strength of evidence. If strength of evidence is essential, that may well rule out any interventions focused on the longterm (whose effects we will never know) as well as deworming (the recommendation of which is substantially based on a single long-term study). The evidence relevant for StrongMinds was not trivial though: we drew on 39 studies of mental health interventions in LICs to calibrate our estimates.
We’d be very happy to see further research funded. However, we see part of our job as trying to inform donors who want to fund interventions, rather than research. On the current evidence and analysis we’ve been able to do, StrongMinds was the only organisation we felt comfortable recommending. We are working to update our existing analysis and search for new top interventions.
Hi Greg,
Thanks for this post, and for expressing your views on our work. Point by point:
I agree that StrongMinds’ own study had a surprisingly large effect size (1.72), which was why we never put much weight on it. Our assessment was based on a meta-analysis of psychotherapy studies in low-income countries, in line with academic best practice of looking at the wider sweep of evidence, rather than relying on a single study. You can see how, in table 2 below, reproduced from our analysis of StrongMinds, StrongMinds’ own studies are given relatively little weight in our assessment of the effect size, which we concluded was 0.82 based on the available data. Of course, we’ll update our analysis when new evidence appears and we’re particularly interested in the Ozler RCT. However, we think it’s preferable to rely on the existing evidence to draw our conclusions, rather than on forecasts of as-yet unpublished work. We are preparing our psychotherapy meta-analysis to submit it for academic peer review so it can be independently evaluated but, as you know, academia moves slowly.
We are a young, small team with much to learn, and of course, we’ll make mistakes. But, I wouldn’t characterise these as ‘grave shortcomings’, so much as the typical, necessary, and important back and forth between researchers. A claims P, B disputes P, A replies to B, B replies to A, and so it goes on. Even excellent researchers overlook things: GiveWell notably awarded us a prize for our reanalysis of their deworming research. We’ve benefitted enormously from the comments we’ve got from others and it shows the value of having a range of perspectives and experts. Scientific progress is the result of productive disagreements.
I think it’s worth adding that SimonM’s critique of StrongMinds did not refer to our meta-analytic work, but focused on concerns about StrongMinds’ own study and analysis done outside HLI. As I noted in 1., we share the concerns about the earlier StrongMinds study, which is why we took the meta-analytic approach. Hence, I’m not sure SimonM’s analysis told us much, if anything, we hadn’t already incorporated. With hindsight, I think we should have communicated far more prominently how small a part StrongMinds’ own studies played in our analysis, and been quicker off the mark to reply to SimonM’s post (it came out during the Christmas holidays and I didn’t want to order the team back to their (virtual) desks). Naturally, if you aren’t convinced by our work, you will be sceptical of our recommendations.
You suggest we are engaged in motivated reasoning, setting out to prove what we already wanted to believe. This is a challenging accusation to disprove. The more charitable and, I think, the true explanation is that we had a hunch about something important being missed and set out to do further research. We do complex interdisciplinary work to discover the most cost-effective interventions for improving the world. We have done this in good faith, facing an entrenched and sceptical status quo, with no major institutional backing or funding. Naturally, we won’t convince everyone – we’re happy the EA research space is a broad church. Yet, it’s disheartening to see you treat us as acting in bad faith, especially given our fruitful interactions, and we hope that you will continue to engage with us as our work progresses.
Table 2.
Hello Alex,
Reading back on the sentence, it would have been better to put ‘many’ rather than ‘all’. I’ve updated it accordingly. TLYCS don’t mention WELLBYs, but they did make the comment “we will continue to rely heavily on the research done by other terrific organizations in this space, such as GiveWell, Founders Pledge, Giving Green, Happier Lives Institute [...]”.
It’s worth restating the positives. A number of organisations have said that they’ve found our research useful. Notably, see the comments by Matt Lerner (Research Director, Founders Pledge) below and also those from Elie Hassenfeld (CEO, GiveWell), which we included in footnote 3 above. If it wasn’t for HLI’s work pioneering the subjective wellbeing approach and the WELLBY, I doubt these would be on the agenda in effective altruism.
Hello James. Apologies, I’ve removed your name from the list.
To explain why we included it, although the thrust of your post was to critically engage with our research, the paragraph was about the use of the SWB approach for evaluating impact, which I believed you were on board with. In this sense, I put you in the same category as GiveWell: not disagreeing about the general approach, but disagreeing about the numbers you get when you use it.
Thanks! Yes, that’s right. ‘Lean’ is small team, 12 month budget. ‘Growth’ is growing the team, 12 month budget. ‘Optimal growth’ is just ‘growth’, but 18 month budget.
I’m now wondering if we should use different names...
HLI’s 2023-4 research agenda
The Happier Lives Institute is funding constrained and needs you!
I didn’t expect people to agree with this comment, but I would be interested to know why they disagree! (Some people have commented below, but I don’t imagine that covers all the actual reasons people had)
Hi Ben. It’s a pity you didn’t comment on the substance of my post, just proposed a minor correction. I hope you’ll be able to comment later.
You point out EA Norway, which I was aware of, but I think it’s the only one, which is why I decided not to mention it (I’ve even been to the annual conference and apologise to the Norwegians—credit where credit’s due). But it seems to be the exception that proves the rule. Why are there no others? I’ve heard on the grapevine that CEA discourages it, which seems, well, sinister. It seems a weird coincidence that there are nearly no democratic EA societies.
You say
“There are clearly democratic elements in EA [… E.g.] individuals choosing to donate their money without deference to a coordinating body”
I think you’ve misunderstood the meaning of democracy here. You seem to be talking about merely not being a totalitarian state, where the state can control all your activities. I believe that in, say, Saudi Arabia (not a democracy) you can mostly spend your money on what you want, including your choice of charity, without deference to a coordinating body.
Well, you’re not going to fund stuff if you don’t like what the organisation is planning to do. That’s generally true.
I don’t mind the idea of donors funding a members’ society. This happens all the time, right? It’s just the leaders have to justify it to the members. It’s also not obvious that, if CEA were a democratic society, it would counterfactually lose funding. You might gain some and lose others. I’m not sure I would personally fund ‘reformed-CEA’ but I would be more willing to do so.
I take it you’re saying making things more democratic can make them more powerful because they then have greater legitimacy, right? More decentralised power → greater actual power?
I suppose part of my motivation to democratise CEA is that it sort of has that leadership role de facto anyway, and I don’t see that changing anytime soon (because it’s so central). Yet, it lacks legitimacy (i.e. the de jure bit), so a solution is to give it legitimacy.
I guess someone could say, “I don’t want CEA to have more power, and it would have if it were a members’ society, so I don’t want that to happen”. But that’s not my concern. If anything, what your comments make me think is (1) something like CEA should exist, (2) actual CEA does a pretty good job, (3) nevertheless, there’s something icky about its lack of legitimacy (maybe I’m far more of an instinctive democrat than I thought), (4) adding some democratic elements would address (3).
Yeah, I’ve not spent loads of time trying to think through the details. I’m reluctant to do so unless there’s interest from ‘central EA’ on this.
As ubuntu’s comments elsewhere made clear, it’s quite hard for someone to replicate various existing community structures, e.g. the conferences, even though no one has a literal monopoly on them, because they are still natural monopolies. If you’re thinking “I can’t imagine a funder supporting a new version of X if X already exists”, then that’s a good sign it is a central structure (and maybe should have democratic elements). There are lots of philosophy conferences, but that doesn’t take away from the value of having a central one.
Also, you make the point “well, but would reformed-EA be worth doing if the main funder wouldn’t support it?”. Let’s leave that as an open question. But I do want to highlight a tension between that thought and the claim that “EA is not that centralised”. If how EA operates depends (very) substantially on what a single funder thinks, we should presumably conclude EA is very centralised. Of course, it’s then a further question of whether or not that’s good and what, if anything, should be done by various individuals about it.
[Written in a personal capacity, etc. This is the second of two comments, see the first here.]
In this comment, I consider how centralised EA should be. I’m less sure how to think about this. My main, tentative proposal is:
We should distinguish central functions from central control. The more central a function something serves, the more decentralised control of it should be. Specifically, I suggest CEA should become a fee-paying members’ society that democratically elects its officers—much like the American Philosophical Association does.
I suspect it helps not just to ask “how centralised should EA be?” but also “what should be centralised and what shouldn’t?”. Some bits are, as you say, natural monopolies in that it’s easiest if there’s one of them. This seems most true for places where people meet and communicate with each other: a conference is valuable because other relevant people are there. For EA, I guess the central bits are the conferences, the introductory materials, the forum, the name(?), maybe other things. In my post on EA as a marketplace, which you kindly reference but don’t seem sympathetic to, I point out you can think of EA on a hub-and-spoke model. Imagine a bicycle wheel, where the rim represents the community members and the spokes the connections. There are some bits we all widely participate in, the hub, such as the conference. But, besides that, people have links to only a subset: longtermists hang out mostly with longtermists, etc.
Now, it does not follow that, simply because something has a central function (i.e. it’s used by lots of people), it should be centrally controlled, i.e. controlled by a few people. In fact, we often think the opposite is true. The more central something is, the more often we think it should be democratically controlled. The obvious example of this is the state. It has a big impact on our lives, it’s a natural monopoly, so we tend to think democracy is good to make it accountable. Rule of thumb, then: central role, decentralised control.
Another example of this is the one you gave, the American Philosophical Association. It’s pretty useful to have a place that convenes those who have a common interest—in that case, doing academic philosophy. The APA seems useful and unobjectionable (this is my impression of it, anyway). But, the reason for this is that it doesn’t and can’t do anything besides serve and convene its members. It doesn’t take sides in philosophical debates or try to steer the field. People would object if it tried. How is this unobjectionableness achieved? I imagine it’s to do with the fact it’s a fee-paying society where those in positions of power are elected by members, as well as the fact it doesn’t control lots of funding. Despite being philosophers, I doubt the members of the APA would want it to be run by unelected philosopher kings! Roughly, the moral of the story seems to be that, if something is so central you can’t avoid participating in it, you probably want decentralised power.
You talk about various possible ways to centralise or decentralise EA. But why is there no suggestion of democratising the central element of EA, namely the Centre for Effective Altruism? Here’s a concrete suggestion: CEA becomes like the APA: a fee-paying membership society where the members elect the trustees or officials, who then administer various central functions, like a conference, journals, etc. Is this an absurd idea? If so, why? It’s hardly radical. Members’ organisations are a default coordination solution where people have a common interest. I’m not sure it’s a brilliant idea, but it’s weird it’s not been discussed, and I’d be happy for someone to tell me why it would be terrible.
Worried no one would join? If you want to kick-start it, you could provide a year’s free membership to those who have signed the GWWC pledge or attended an EA conference. There may be other ideas. EAs already focus on much harder tasks like influencing the next million years and ending poverty. Surely setting up a membership society, a solved problem, is not insurmountable.
There’s been a lot of discussion recently about people not feeling like they’re part of EA. Well, here’s a cheap solution: let people become members of CEA. Then, you’re in and you can have a say in how things are run. This has other advantages: it makes CEA accountable to its members. It also allows it to genuinely speak for them, which currently it can’t do, because it doesn’t represent them. By charging a membership fee, CEA can offset the costs of other things, so won’t be so reliant on other donors. Honestly, this membership scheme would probably work if it were just about the EA conferences (the “EA conference Association”?).
Who should be against this idea? I recognise some effective altruists are sceptical of applying democracy to philanthropy, but, to be clear, I am not advocating for communism, that all of EA or ‘EA resources’ should come under common control: that Open Philanthropy should give 1/Nth of its resources to the N people self-describing as EA, or that if you want to become part of EA you need to give the community all your (spare) money. That’s absurd. I am only in favour of democratising the central convening and coordinating parts; this is like saying the APA should be a democratic entity, not that all philosophy and philosophy funding should be run by a democracy. As far as I can see, all the central elements of EA fall under CEA (I’m also not against spinning off parts of EVF.)
I don’t think it makes sense to have a democracy for the non-central elements—the spokes. I believe in free enterprise, including free philanthropic enterprise, and private ownership.
Where does Open Philanthropy fit into this, given it has most of the ‘EA money’? I’m not sure it does fit in. It’s fortunate there’s one really big donor (because there’s more money), and unfortunate there’s only one (because that one will have outsized influence). I think society should control some of people’s income through taxation. But I also believe people should have private property, and where they spend that (assuming it’s inside the law) is best left up to them, rather than trying to also make that part subject to democratic control. Hence, insofar as worries about centralisation spring from there being a single, huge funder, I don’t have a neat solution to that. That doesn’t mean other things couldn’t be done, however, such as democratising CEA. Indeed, if CEA were a democratic society, I’d be fairly relaxed about Open Philanthropy providing funding to it, because control of CEA would still be decentralised, so that would mitigate concerns about undue influence. Of course, I have views about how, morally speaking, people should spend their post-tax wealth, but that seems a separate issue.
Finally, you raise the point of who has the ‘legitimacy’ to centralise or decentralise EA, and say it should come from CEA or Open Philanthropy (the use of ‘legitimacy’ is somewhat interesting, because that has democratic connotations). At this point, I should probably mention I applied to be a board member of Effective Ventures, and in my application form, I explicitly stated I was interested in exploring bringing democratic elements into CEA, and so decentralising it. I didn’t make it to the first stage (I also asked for feedback and was told EV couldn’t provide any). Now, I am not claiming I am a ‘slam dunk’ choice for the board—EA has many highly talented people. However, I did find it discouraging, not least because I am interested in, and would have had legitimacy for, exploring institutional reforms. It reduced my confidence that, despite what you say, ‘central EA’ really is open to diverse voices or further decentralisation.
[Written in a personal capacity, etc. This is the first of two comments: second comment here]
Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I’ll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how centralised it should be.
Given your description of how EA works, I don’t understand how you reached the conclusion that it’s not that centralised. It seems very centralised—at least, for something portrayed as a social movement.
Why does it matter to determine how ‘centralised’ EA is? I take it the implicit argument is EA should be “not too centralised, not too decentralised” and so if it’s ‘very centralised’ that’s a problem and we consider doing something. Let’s try to leave aside whether centralisation is a good thing and focus on the factual claim of how centralised EA is.
You say, in effect, “not that centralised”, but, from your description, EA seems highly centralised. 70% of all the money comes from one organisation. A second organisation controls the central structures. You say there are >20 ‘senior figures’ (in a movement of maybe 10,000 people) and point out all of these work at one or the other organisation. You are (often apparently mistaken for) the leader of the movement. You don’t mention it, but there are also no democratic elements in EA; democracy has the effect of decentralising power.
If we think of centralisation just on a spectrum of ‘decision-making power’, as you define it above (how few people determine what happens to the whole), EA could hardly be more centralised! Ultimately, power seems the most important part of centralisation, as other things flow from it. On some vague centralisation scale, where 10/10 centralisation is “one person has all the power” and 1/10 is “power is evenly spread”, it’s … an 8/10? If one organisation, funded by two people, has 70% of the resources, considering that alone suggests a 7/10. (Obviously, putting things on scales is silly but never mind that!)
Your argument that it’s not centralised seems to be that EA is not a single legal entity. But that seems like an argument only that it’s not entirely centralised, not that it’s not very centralised.
All this is relevant to the point you make about “who’s responsible for EA?”. You say no one’s in charge and, in footnote 3, give different definitions of responsibility. But the key distinction here, one you don’t draw on, seems to be de jure vs de facto. I agree that, de jure, legally speaking, no one controls EA. Yet, de facto, if we think about where power, in fact, resides, it is concentrated in a very small group. If someone sets up an invite-only group called the ‘leaders’ forum’, it seems totally reasonable for people to say “ah, you guys run the show”. Hence the claim ‘no one is in charge’ doesn’t ring true for me. I don’t see how renaming this the ‘coordination forum’ changes this. Given that EA seems so clearly centralised, I can’t follow why you think it isn’t.
You cite the American Philosophical Association as a good example of “not too centralised”. Again, let’s not focus on whether centralisation is good, but think about how central the APA is to philosophy. The APA doesn’t control really any of the money going into philosophy. It runs some conferences and some journals. AFAICT, its leaders are elected by fee-paying members. As Jason points out, I wonder how centralised we’d think power in philosophy was if the APA controlled 70% of the grants and its conferences and journals were run by unelected officials. I think we’d say philosophy was very centralised. I think we’d also think this level of centralisation was not ideal.
Similarly, EA seems very centralised compared to other movements. If I think of the environmental or feminist movements—and maybe this is just my ignorance—I’m not aware of there being a majority source of funding, the conferences being run by a single entity, there being a single forum for discussion, etc. In those movements, it does seem that, de facto and de jure, no one is really in charge. As a hot take, I’d say they are each about 2-3/10 on my vague centralisation scale. Hence, EA doesn’t match my mental image of a social movement because it’s so centralised. If someone characterised EA as basically a single organisation with some community offshoots, I wouldn’t disagree.
I’ll turn to how centralised EA should be in my other comment.
Yeah, I guess I mean genuinely new projects, rather than new tokens of the same type of project (e.g. group organisers running the same thing in different places).
As MacAskill points out, it’s pretty hard to run $1m+/yr project (or even less, tbh) without Open Philanthropy supporting it.
But, no, I’m not thinking about centralisation in terms of micromanagement, so I don’t follow your comment. You can have centralised power without micromanagement.
Hello Jack (again!),
I agree with this. But the challenge from the Non-Identity problem is that there are few, if any, necessarily existing future individuals: what we do causes different people to come into existence. This raises a challenge to longtermism: how can we make the future go better if we can’t make it go better for anyone in particular? If an outcome is not better for anyone, how can it be better? In the discourse, philosophers tend to accept that it is the implication of (some) person-affecting views that we can’t (really) make the future go better for anyone, but take this implication as a decisive reason to reject those views. My suspicion is that philosophers have been too quick to dismiss such person-affecting views and they merit another look.