Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits based giving seems clearly related to Charity Entrepreneurship’s work—what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I’m guessing hinge of history hypothesis is irrelevant to your thinking?)
My guess is that few EAs care emotionally about cost effectiveness and that they care emotionally about helping others a lot. Given limited resources, that means they have to be cost effective. Imagine a mother with a limited supply of food to share between her children. She doesn't care emotionally about rationing food, but she'll pay a lot of attention to how best to ration it.
I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs' identities. I think those can be developed naturally to some extent, and don't seem like complete prerequisites to being an EA.
Thanks for writing this and contributing to the conversation :)
Relatedly, an “efficient market for ideas” hypothesis would suggest that if MB really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.
I do think the salience of movement building has been raised elsewhere eg:
80,000 Hours do have a problem profile on it and consider it one of the most pressing problems to work on
The work around patient philanthropy has analogues to movement building (see Nuno Sempere’s in-progress paper extending this thinking to movement growth explicitly)
A bunch of other places eg. I really like this piece on movement growth
Having said that, I share the feeling that movement building seems underrated. Given how impactful it seems, I would expect more EAs to want to use their careers to work on movement building.
One resolution to this apparent conflict is that the fraction of people who can be good at movement building long-term might be smaller than it first seems. For lots of the interventions that you suggest, strong social skills and a strong understanding of EA concepts seem important, as well as some general executional or project management ability. Though movement builders don’t necessarily have to be excellent in any of these domains, they have to be at least pretty good at all of them. They also have to be interested enough in all of them to do movement building. This narrows down the pool of people who can work in movement building.
Another possible reason is that within the EA community movement building careers are generally seen as less prestigious than more ‘direct’ kinds of work and social incentives play a large role in career choice. For example, some people would be more impressed by someone doing technical AI safety research than by someone building talent pipelines into AI safety, even if the second one has more impact.
Also, as Aaron says, a lot of direct work has helpful movement building effects.
I also agree with Aaron that looking at funding is a bit complicated with movement building, partly because movement building is probably cheaper than other things, but also that it can be hard to tease apart what’s movement building and what’s not.
You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people if you kept responding to comments.
Of course, it’s probably a lot of effort to keep replying carefully to things, so understandable if you don’t have time :)
Thanks! I appreciate it :)
It makes me feel anxious to get a lot of downvotes with no explanation so I really appreciate your comment.
Just to clarify, when you say "if that is a real tradeoff that a founder faces in practice, it is nearly always an indication the founder just hasn't bothered to put much time or effort into cultivating a diverse professional network", I think I agree, but this isn't always something the founder could have predicted ahead of time, and the founder isn't necessarily to blame. I think it can be very easy to 'accidentally' end up with a fairly homogeneous network eg. because your profession or university is homogeneous. Sounds like Marcus is in this category himself (if tennis is mainly white, and his network is mainly tennis players).
Was this meant as a reply to my comment or a reply to Ben’s comment?
I was just asking what the position was and made explicit I wasn’t suggesting Marcus change the website.
Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)
I don’t find anything wrong at all with ‘saintly’ personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I’d see what others on the forum think
It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you’re already aware of this as something to consider, but it seems worth flagging (particularly given the use of ‘Saintly’ for those donating 10% :/).
Some discussion of why this might matter here: https://forum.effectivealtruism.org/posts/YCPc4qTSoyuj54ZZK/why-and-how-to-make-progress-on-diversity-and-inclusion-in
Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as ‘Saintly’ are both acceptable PR risks, having the combination of them both is pretty worrying and I’d personally be in favour of changing it.
Edited to address downvotes: Obviously, it is not bad in itself that the team is all white, and I'm not implying that any deliberate filtering for white people has gone on. I just think it's something to be aware of—both for PR reasons (avoiding looking like white saviours) and for more substantive reasons (eg. building a movement and sub-movements that can draw on a range of experiences)
Some of the wording on the ‘Take the Pledge’ section seems a little bit off (to me at least!). Eg. saying a 1-10% pledge will ‘likely have zero noticeable impact on your standard of living’ seems misleading, and could give off the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I’m also not sure about the ‘Saintly’ categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I’m not sure about the tradeoffs here though and obviously you have much more context than me.
Maybe you’ve done this already, but it could be good to ask Luke from GWWC for advice on tone here.
I see you mention that HIA’s recommendations are based on a suffering-focused perspective. It’s great that you’re clear about where you’re coming from/what you’re optimising for. To explore the ethical perspective of HIA further—what is HIA’s position on longtermism?
(I’m not saying you should mention your take on longtermism on the website.)
This is really cool! Thanks for doing this :)
Is there a particular reason the charity areas are ‘Global Health and Poverty’ and ‘Environmental Impact’ rather than including any more explicit mention of animal welfare? (For people reading this—the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)
Welcome to the forum!
Have you read Bostrom’s Astronomical Waste? He does a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html
I’d be keen to hear more about why you think it’s not possible to meaningfully reduce existential risk.
“Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.
If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists.”
Thanks for writing this! I and an EA community builder I know found it interesting and helpful.
I’m pleased you have a ‘counterarguments’ section, though I think there are some counterarguments missing:
OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there’s also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader)
OFTW groups may crowd out EA groups. If there's an OFTW group at a university, the EA group may have to compete, even if the groups are officially collaborating. In any case, the groups will be competing for the attention of the altruistically motivated people at the university
Because OFTW isn’t cause neutral, it might not be a great introduction to EA. For some people, having lots of exposure to OFTW might even make them less receptive to EA, because of anchoring on a specific cause. As you say “Since it is a cause-specific organization working to alleviate extreme global poverty, that essentially erases EA’s central work of evaluating which causes are the most important.” I agree with you that trying to impartially work out which cause is best to work on is core to EA
OFTW’s direct effects (donations to end extreme poverty) may not be as uncontroversially good as they seem. See this talk by Hilary Greaves from the Student Summit: https://www.youtube.com/watch?v=fySZIYi2goY&ab_channel=CentreforEffectiveAltruism
OFTW outreach could be so broad and shallow that it doesn't actually select that strongly for future dedicated EAs. In a comment below, Jack says "OFTW on average engages a donor for ~10-60 mins before they pledge (and pre-COVID this was sometimes as little as 2 mins when our volunteers were tabling)". Of course, people who take that pledge will be more likely to become dedicated EAs than the average student, but there are many other ways to select at that level
Thanks, that’s helpful for thinking about my career (and thanks for asking that question Michael!) Edit: helpful for thinking about my career because I’m thinking about getting economics training, which seems useful for answering specific sub-questions in detail (‘Existential Risk and Economic Growth’ being the perfect example of this), but one economic model alone is very unlikely to resolve a big question.
Thank you :) I’ve corrected it
I think I’ve conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (eg. if there was a lock-in event coming soon), and XRR could be patient.
(Side note: There are so many possible longtermist strategies! Any combination of (Patient,Urgent)×(Broad,Narrow)×(Trajectory Change,XRR) is a distinct strategy. This is interesting as often people conceptualise the available strategies as either patient, broad, trajectory change or urgent, narrow, XRR, but there are actually at least six other strategies)
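To make the combinatorics concrete (a toy sketch; the axis labels are just my shorthand for the three axes above), enumerating the three binary choices gives eight distinct strategies, i.e. six beyond the two usually discussed:

```python
from itertools import product

# The three strategic axes from the side note above.
timings = ["patient", "urgent"]
breadths = ["broad", "narrow"]
targets = ["trajectory change", "XRR"]

# Every combination is a distinct longtermist strategy.
strategies = list(product(timings, breadths, targets))
print(len(strategies))  # 8: the two familiar ones plus six others
for strategy in strategies:
    print(strategy)
```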
This model completely neglects meta strategic work along the lines of ‘are we at the hinge of history?’ and ‘should we work on XRR or something else?’. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out as either increasing the probability of technological maturity, or in improving the quality of the future. So I’m not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
I had s-risks in mind when I caveated it as 'safely' reaching technological maturity, and was including s-risk reduction in XRR. But I'm not sure if that's the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the quality of the future is hugely negative. So it seems that s-risks are more like 'quality increasing' than 'probability increasing'. The argument for them being 'probability increasing' is that I think the most empirically likely s-risks might primarily be risks associated with transitions to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience)
Thanks for writing this, I like that it’s short and has a section on subjective probability estimates.
What would you class as longterm x-risk (reduction) vs. nearterm? Is it entirely about the timescale rather than the approach? Eg. hypothetically very fast institutional reform could be nearterm, and doing AI safety field building research in academia could hypothetically be longterm if you thought it would pay off very late. Or do you think the longterm stuff necessarily has to be investment or institutional reform?
Is the main crux for ‘Long-term x-risk matters more than short-term risk’ around how transformative the next two centuries will be? If we start getting technologically mature, then x-risk might decrease significantly. Or do you think we might reach technological maturity, and x-risk will be low, but we should still work on reducing it?
What do you think about the assumption that ‘efforts can reduce x-risk by an amount proportional to the current risk’? That seems maybe appropriate for medium levels of risk eg. 1-10%, but if risk is small, like 0.01-1%, it might get very difficult to halve the risk.
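To spell out what I mean by the proportional-reduction assumption (a toy sketch; the numbers are illustrative, not from the post): under the assumption, a fixed effort removes the same fraction of whatever risk remains, so halving a 10% risk costs the same as halving a 0.01% risk, which is exactly the step that seems doubtful at low risk levels.

```python
# Toy model of 'efforts reduce x-risk by an amount proportional to current risk':
# a given effort removes a fixed fraction of the remaining risk.
def risk_after_effort(current_risk, fraction_removed):
    """Remaining risk if effort removes a fixed fraction of current risk."""
    return current_risk * (1 - fraction_removed)

# Under the assumption, these two halvings require the same effort.
print(risk_after_effort(0.10, 0.5))    # 0.05
print(risk_after_effort(0.0001, 0.5))  # 5e-05
```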