I’m the Director of the Happier Lives Institute and Postdoctoral Research Fellow at Oxford’s Wellbeing Research Centre. I’m a philosopher by background and did my DPhil at Oxford, primarily under the supervision of Peter Singer and Hilary Greaves. I’ve previously worked for an MP and failed to start a start-up.
Yeah, as I say, you don’t need to buy the market metaphor—curious where you think it goes wrong though. All you really need to observe is
(1) that there are central ‘hub’ functions, where it makes sense to have one organisation providing these for the rest of the community (they are sort of mini natural monopolies) vs the various ‘spokes’ who focus on particular cause areas.
(2) you want there to be good mechanisms for making the central bits responsive to the needs of the community as a whole (vs focusing on a subgroup)
(3) it’s unclear if we have (2).
Not the main point of what you said, but there’s a bit of a difference between the dynamics of one-on-one discussions and a public forum.
Hello Ben and thanks for this!
As I said in my comment to Fin Moorhouse below, I’m not sure what difference it makes that market participants are buying for others in the EA market, but for themselves in the normal market. Can you spell out what you take to be the relevant feature, and what sort of ‘market distortions’ it specifically justifies? In both the EA and normal markets, people are trying to get the ‘best value’, but will disagree over what that is and how it is to be achieved.
If the concern is about externalities, that seems to strongly count against intervening in the EA market. In normal markets, people don’t account for externalities, that is, their effects on others. But in the EA market, people are explicitly trying to do the most good: they are looking to do the things that have the best result when you account for all the effects on everyone; in economics jargon, they are trying to internalise all those externalities themselves. Hence, in the EA market (distinctively from any other market!) there are no clear grounds for intervening on the basis of externalities.
Thanks for this. Reading this, and other comments, I don’t think I’ve managed to convey what I think could and should be distinctive about effective altruism. Let me try again!
In normal markets, people seek out the best value for themselves.
In effective altruism (as I’m conceiving of it) people seek out the best value for others. In both cases, people can and will have different ideas of what ‘value’ means in practice; and in the EA market, people may also disagree over how to think of who the relevant ‘others’ are too.
Both of these contrast with the ‘normal’ charity world, where people seek out value for others, but there is little implicit or explicit attempt to seek out the best value for others; it’s not something people have in mind. A major contribution of EA thinking is to point this out.
The normal market and EA worlds thus have something in common that distinguishes them from the regular charity world. The point of the post is to think about how, given this commonality, the EA market should be structured to achieve the best outcomes for its participants; my claim is that, given this similarity, the presumption is that the EA market and the normal market should run along similar lines.
If it helps, try to momentarily forget everything you know about the actual EA movement and ask “Okay, if we wanted to design a ‘maximum altruist marketplace’ (MAM), a place people come to shop around for the best ways to use resources to help others, how would we do that?” Crucially, in MAM, just like a regular market, you don’t want to assume that you, as the social planner, have a better idea of what people want than they do themselves. That’s the Hayekian point (with apologies, I think you’ve got the wrong end of the stick here!).
Pace you and Ben West above, I don’t think it invalidates the analogy that people are aiming at value for others, rather than themselves. There seems to be a background assumption of “well, because people are buying for others in the MAM, and they don’t really know what others want, we (the social planner) should intervene”. But notice you can say the same thing in normal markets—“people don’t really know what they want, so we (the social planner) should intervene”. Yet we are very reluctant to intervene in the latter case. So, presumably, we should be reluctant here too.
Of course, we do think it’s justified to intervene in normal markets to some degree (e.g. alcohol sales restricted by age), but each intervention needs to be justified. Not all interventions are justified. The conversation I would like to have regarding MAM is about which interventions are justified, and why.
I get the sense we’re slightly speaking past each other. I am claiming (1) the maximum altruist market should exist, and then (2) suggesting how EA could be closer to that. It seems you, and maybe some others, are not sold on the value of (1): you’d rather focus on advocating for particular outcomes, and are indifferent about (1). Note it’s analogous to someone saying “look, I don’t care about whether there’s a free market: I just want my company to be really successful.”
I can understand that many people won’t care if a maximum altruism marketplace exists. I, for one, would like it to exist; it seems an important public good. I’d also like the central parts of the EA movement to fulfil that role, as they seem best placed to do it. If the EA movement (or, rather, its central parts) end up promoting very particular outcomes, then it loses much of what appeared to be distinctive about it, and it looks more like the rest of the charity world.
What is effective altruism? How could it be improved?
Yeah, it is vague. My understanding of debate motions is that you want to leave them broad and open to interpretation vs very narrowly specified.
To chime in, I think it would be helpful to distinguish between:
1. AI risks on a ‘business as usual’ model, where society continues as it was before, ie not doing much
and
2. AI risks given different levels of society response.
This would then be analogous to familiar discussions about climate change, where people talk about different CO2 rise scenarios, how bad each would be, and how much effort is required to achieve different levels of reduced emissions. I recognise it’s not very easy to specify options for 2, but it seems worth a try. To decide how much effort to put in, we need to understand the risk in 1, how much it can be reduced under different versions of 2, and the costs involved.
To elaborate, someone could say
(A) we’re almost certainly screwed, whatever we do
(B) we might be screwed, but not if we get our act together, which we’re not doing now
(C) we might be screwed, but not if we get our act together, which I’m confident will happen anyway
(D) there’s nothing to worry about in the first place.
Obviously, these aren’t the only options. (A), (C), and (D) imply that few or no additional resources are useful, whereas (B) implies extra resources are worthwhile. My impression is Yudkowsky’s line is (A).
Yes, I agree it’s not practical to do it immediately, but getting to that stage later would require various people thinking it’s a good plan in the first place.
Ah, I defer to your superior expertise! But I think it would mean specifically getting permission from the CC, which sounds like quite a bit of faff anyway, and after which they might still say no.
Better version of this: elect (some of) the trustees (for fixed-term limits).
Who would be the electorate? You could use GWWC members, former EAG attendees, or have EV/CEA as a fee-paying members society—which is extremely common in charities. None of these are perfect, but they are all in the ballpark of the right group. If you randomise, you still face the issue of who you are randomising from. If it’s just a group of people who volunteer themselves, you could equally use that group as an electorate.
Just a note for the UK boards: trustees cannot generally be paid for their work. UK gov quote:
When you become a trustee, you volunteer your services and usually won’t receive payment for your work.
Generally, charities cannot pay their trustees for simply being a trustee. Some charities do pay their trustees – they can only do so because it’s allowed by their governing document, by the Charity Commission or by the courts
An alternative that could touch on the same topic but is a bit more general:
This house believes effective altruism needs serious reform
This house believes we should prioritise the longterm future
Can you say more about your plans to bring additional trustees onto the boards?
I note that, at present, all of EV (USA)’s board are current or former members of Open Philanthropy: Nick Beckstead, Zachary Robinson and Nicole Ross are former staff, and Eli Rose is a current staff member. This seems far from ideal; I’d like the board to be more diverse and representative of the wider EA community. As it stands, this seems like a conflict of interest nightmare. Did you discuss why this might be a problem? Why did you conclude it wasn’t?
Others may disagree, but from my perspective, EV/CEA’s role is to act as a central hub for the effective altruism community and to balance the interests of different stakeholders. It’s difficult to see how it could do that effectively if all of its board are or were members of the largest donor.
If I wanted to be useful to AI safety, what are the different paths I might take? How long would it take someone to do enough training to be useful, and what might they do?
For the different risks from AI, how might we solve each of them? What are the challenges to implementing those solutions? I.e. when is the problem engineering, incentives, etc?
If we get AGI, why might it pose a risk? What are the different components of that risk?
Are risks from AGI distinct from the kinds of risks we face from other people, and if so, how? The problem “autonomous agent wants something different from you” is just the everyday challenge of dealing with people.
Lead exposure: a shallow cause exploration
On 3. Epicureanism being a defensible position
Epicureanism is discussed in almost every philosophy course on the badness of death. It’s taken seriously, rather than treated as an absurd position or a non-starter, and whilst not that many philosophers end up as Epicureans, I’ve met some who are very sympathetic. I find critics dismiss the view too quickly and I’ve not seen anything that’s convinced me the view has no merit. I don’t think we should have zero credence in it, and it seems reasonable to point out that it is one of the options. Again, I’m inclined to let donors make up their own minds.
On what HLI actually believes
HLI is currently trying not to have a view on these issues, but instead to point out to donors how having different views would change the priorities, so they can form their own view. We may have to develop a ‘house view’, but none of the options for doing this seem particularly appealing (they include: we use my view, we use a staff aggregate, we poll donors, we poll the public, or some combination of the previous options).
You bring up this quote
We’re now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money.
I regret this sentence, which is insufficiently nuanced and I wouldn’t use it again (you and I have discussed this privately). That said, I think we’re quite well-caveated elsewhere. You quote this bullet point:
We conclude that the Against Malaria Foundation is less cost-effective than StrongMinds under almost all assumptions. We expect this conclusion will similarly apply to the other life-extending charities recommended by GiveWell.
But you didn’t quote the bullet point directly before it (emphasis added):
In our new report, The Elephant in the Bednet, we show that the relative value of life-extending and life-improving interventions depends very heavily on the philosophical assumptions you make. This issue is usually glossed over and there is no simple answer.
The backstory to the “we confidently recommend StrongMinds” bit is that, when we did the analysis, StrongMinds looked better under almost all assumptions and, even where AMF was better, it was only slightly better (1.3x). We thought donors would want an overall recommendation, and hence StrongMinds seemed like the safe choice (given some intuitions about donors’ intuitions and moral uncertainty). You’re right that we’ll have to rethink what our overall recommendations are, and how to frame them, once the dust has settled on this debate.
Finally, whilst you say
But you have to actually defend the range of assumptions you’ve defined as reasonable. And in my view, they’re not.
This feels uneasily like a double standard. As I’ve pointed out before, neither GiveWell nor Open Philanthropy really defends their views in general (asserting a view isn’t the same as defending it). In this report, GiveWell doesn’t defend its assumptions, point out what other assumptions one might (reasonably) take, or say how this would change the result. Part of what we have tried to highlight in our work is that these issues have been mostly ignored and can really matter.
Our aim was more to cover the range of views we think some reasonable people would believe, not to restrict it to what we think they should believe. We motivated our choices in the original report and will restate that briefly here. For the badness of death, we give the three standard views in the literature. At one end, deprivationism gives ‘full value’ to saving lives. At the other, Epicureanism gives no weight to saving lives. TRIA offers something in between. For the neutral point, we used a range that included what we saw as the minimum and maximum possible values. Including a range of values is not equivalent to saying they are all equally probable. We encourage donors and decision-makers to use values they think are most plausible (for example, by using this interactive chart).
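To make the sensitivity concrete, here is a minimal sketch in Python (not HLI’s actual model; the function name and every number are illustrative placeholders I’ve made up) of how the WELLBYs credited to averting a death move with the choice of view and neutral point:

```python
# Illustrative sketch only: how the value of saving a life depends on the
# philosophical view and the 'neutral point'. All figures are placeholders,
# not HLI's estimates.

def wellbys_from_saving_a_life(view, neutral_point,
                               avg_wellbeing=6.5,  # hypothetical 0-10 life-satisfaction level
                               years_gained=40,    # hypothetical remaining life years
                               tria_weight=0.5):   # hypothetical TRIA connectedness weight
    """WELLBYs credited to averting a death under a given view and neutral point."""
    full_value = (avg_wellbeing - neutral_point) * years_gained
    if view == "deprivationism":   # full value to the life years gained
        return full_value
    if view == "tria":             # partial value, scaled by psychological connectedness
        return tria_weight * full_value
    if view == "epicureanism":     # death is not bad for the person who dies
        return 0.0
    raise ValueError(f"unknown view: {view}")

for view in ["deprivationism", "tria", "epicureanism"]:
    for neutral_point in [0, 2.5, 5]:
        print(view, neutral_point,
              round(wellbys_from_saving_a_life(view, neutral_point), 1))
```

Under Epicureanism the figure is zero whatever the neutral point, whereas under deprivationism a higher neutral point shrinks it; that is the kind of variation the interactive chart is meant to let donors explore with their own preferred values.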
I’ve just made a similar comment to Ryan. The central bits are more like natural monopolies. If you’re running a farmers’ market, you just want one venue that everyone comes to each market day, etc. The various stallholders and customers could set up a new market down the road, but it would be a huge pain and it’s not something any individual wants to do.
Regarding 80k, they were originally set up to tell people about careers, then moved to advertising EA more broadly. That’s a central function: in the market analogy, they were advertising the market, as a whole, to new customers and were sort of a ‘market catalogue’. 80k then switched to promoting a subset of causes (i.e. products), specifically longtermism.

I wasn’t then, and still am not, wild about this for reasons I hope my post provides: it’s perverse to have multiple organisations fulfilling the same function, in this case advertising the EA market, so when they switched to longtermism, they left a gap that wasn’t easy to fill. I understand that they wanted to promote particular causes, but hopefully they would appreciate that it meant they were aiding a sub-section of the community, and the remaining less-served subsections would feel disappointed by this.

I think someone should be acting as a general advertiser for EA—perhaps a sort of public broadcast role, like the BBC has in the UK—and 80k would have been the obvious people to fulfil that role. Back to the farmers’ market analogy: if I sell apples, and the market advertiser decides to stop advertising fruit in its catalogue (or puts it in tiny pictures at the back), it puts the fruit sellers at a relative disadvantage.