Hi — I’m Alex! I run the 80k headhunting service, which provides organisations with lists of promising candidates for their open roles.
You can give me (anonymous) feedback here: admonymous.co/alex_ht
I don’t find anything wrong at all with ‘saintly’ personally, and took it as a joke. But I could imagine someone taking it the wrong way. Maybe I’d see what others on the forum think
It looks like all the founders, advisory team, and athletes are white or white-passing. I guess you’re already aware of this as something to consider, but it seems worth flagging (particularly given the use of ‘Saintly’ for those donating 10% :/).
Some discussion of why this might matter here: https://forum.effectivealtruism.org/posts/YCPc4qTSoyuj54ZZK/why-and-how-to-make-progress-on-diversity-and-inclusion-in
Edit: In fact, while I think appearing all-white and implicitly describing some of your athletes as ‘Saintly’ are each acceptable PR risks on their own, the combination of the two is pretty worrying, and I’d personally be in favour of changing it.
Edited to address downvotes: Obviously, it is not bad in itself if the team is all white, and I’m not implying that any deliberate filtering for white people has gone on. I just think it’s something to be aware of—both for PR reasons (avoiding looking like white saviours) and for more substantive reasons (eg. building a movement and sub-movements that can draw on a range of experiences).
Some of the wording in the ‘Take the Pledge’ section seems a little off (to me at least!). Eg. saying a 1-10% pledge will ‘likely have zero noticeable impact on your standard of living’ seems misleading, and could give the impression that the pledge is only for the very wealthy (for whom the statement is more likely to be true). I’m also not sure about the ‘Saintly’ categorisation of the highest giving level (10%). It could come across as a bit smug or saviour-ish. I’m not sure about the tradeoffs here though, and obviously you have much more context than me.
Maybe you’ve done this already, but it could be good to ask Luke from GWWC for advice on tone here.
I see you mention that HIA’s recommendations are based on a suffering-focused perspective. It’s great that you’re clear about where you’re coming from/what you’re optimising for. To explore the ethical perspective of HIA further—what is HIA’s position on longtermism?
(I’m not saying you should mention your take on longtermism on the website.)
This is really cool! Thanks for doing this :)
Is there a particular reason the charity areas are ‘Global Health and Poverty’ and ‘Environmental Impact’ rather than including any more explicit mention of animal welfare? (For people reading this—the environmental charities include the Good Food Institute and the Humane League along with four climate-focussed charities.)
Welcome to the forum!
Have you read Bostrom’s Astronomical Waste? He makes a very similar estimate there. https://www.nickbostrom.com/astronomical/waste.html
I’d be keen to hear more about why you think it’s not possible to meaningfully reduce existential risk.
“Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.
If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists.”
Thanks for writing this! I and an EA community builder I know found it interesting and helpful.
I’m pleased you have a ‘counterarguments’ section, though I think there are some counterarguments missing:
- OFTW groups may crowd out GWWC groups. You mention the anchoring effect on 1%, but there’s also the danger of anchoring on a particular cause area. OFTW is about ending extreme poverty, whereas GWWC is about improving the lives of others (much broader).
- OFTW groups may crowd out EA groups. If there’s an OFTW group at a university, the EA group may have to compete, even if the groups are officially collaborating. In any case, the groups will be competing for the attention of the altruistically motivated people at the university.
- Because OFTW isn’t cause neutral, it might not be a great introduction to EA. For some people, having lots of exposure to OFTW might even make them less receptive to EA, because of anchoring on a specific cause. As you say, “Since it is a cause-specific organization working to alleviate extreme global poverty, that essentially erases EA’s central work of evaluating which causes are the most important.” I agree with you that trying to impartially work out which cause is best to work on is core to EA.
- OFTW’s direct effects (donations to end extreme poverty) may not be as uncontroversially good as they seem. See this talk by Hilary Greaves from the Student Summit: https://www.youtube.com/watch?v=fySZIYi2goY&ab_channel=CentreforEffectiveAltruism
- OFTW outreach could be so broad and shallow that it doesn’t actually select that strongly for future dedicated EAs. In a comment below, Jack says “OFTW on average engages a donor for ~10-60 mins before they pledge (and pre-COVID this was sometimes as little as 2 mins when our volunteers were tabling)”. Of course, people who take that pledge will be more likely to become dedicated EAs than the average student, but there are many other ways to select at that level.
Thanks, that’s helpful for thinking about my career (and thanks for asking that question Michael!)
Edit: helpful for thinking about my career because I’m thinking about getting economics training, which seems useful for answering specific sub-questions in detail (‘Existential Risk and Economic Growth’ being the perfect example of this), but one economic model alone is very unlikely to resolve a big question.
Thank you :) I’ve corrected it
I think I’ve conflated patient longtermist work with trajectory change (with the example of reducing x-risk in 200 years’ time being patient, but not trajectory change). This means the model is really comparing trajectory change with XRR. But trajectory change could be urgent (eg. if there was a lock-in event coming soon), and XRR could be patient.
(Side note: There are so many possible longtermist strategies! Any combination of patient/urgent, broad/narrow, and trajectory change/XRR is a distinct strategy. This is interesting, as people often conceptualise the available strategies as either patient, broad, trajectory change or urgent, narrow, XRR, but there are actually at least six other strategies.)
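To make the counting in that side note concrete, here is a small illustrative sketch (my own, not from the original comment) that enumerates the combinations of the three axes named above; the labels are just shorthand.

```python
from itertools import product

# Three binary axes from the side note above; combining them gives 2 x 2 x 2 = 8 strategies.
timing = ["patient", "urgent"]
breadth = ["broad", "narrow"]
focus = ["trajectory change", "XRR"]

strategies = list(product(timing, breadth, focus))
for strategy in strategies:
    print(strategy)

# The two bundles people usually discuss are (patient, broad, trajectory change)
# and (urgent, narrow, XRR); removing those leaves the six other strategies
# the side note refers to.
```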
This model completely neglects meta strategic work along the lines of ‘are we at the hinge of history?’ and ‘should we work on XRR or something else?’. This could be a big enough shortcoming to render the model useless. But this meta work does have to cash out either as increasing the probability of technological maturity or as improving the quality of the future. So I’m not sure how worrisome the shortcoming is. Do you agree that meta work has to cash out in one of those areas?
I had s-risks in mind when I caveated it as ‘safely’ reaching technological maturity, and was including s-risk reduction in XRR. But I’m not sure if that’s the best way to think about it, because the most worrying s-risks seem to be of the form: we do reach technological maturity, but the quality is large and negative. So it seems that s-risks are more like ‘quality increasing’ than ‘probability increasing’. The argument for them being ‘probability increasing’ is that I think the most empirically likely s-risks might primarily be risks associated with transitions to technological maturity, just like other existential risks. But again, this conflates XRR with urgency (and so trajectory change with patience)
Thanks for writing this, I like that it’s short and has a section on subjective probability estimates.
What would you class as longterm x-risk (reduction) vs. nearterm? Is it entirely about the timescale rather than the approach? Eg. hypothetically very fast institutional reform could be nearterm, and doing AI safety field-building research in academia could hypothetically be longterm if you thought it would pay off very late. Or do you think the longterm stuff necessarily has to be investment or institutional reform?
Is the main crux for ‘Long-term x-risk matters more than short-term risk’ around how transformative the next two centuries will be? If we start getting technologically mature, then x-risk might decrease significantly. Or do you think we might reach technological maturity, and x-risk will be low, but we should still work on reducing it?
What do you think about the assumption that ‘efforts can reduce x-risk by an amount proportional to the current risk’? That seems maybe appropriate for medium levels of risk, eg. 1-10%, but if risk is small, like 0.01-1%, it might get very difficult to halve the risk.
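As a toy illustration of the assumption being questioned (my own sketch, with hypothetical numbers), under proportionality one unit of effort removes the same fraction of the current risk no matter how low the risk already is:

```python
# Toy illustration of the 'proportional reduction' assumption: each unit of effort
# removes the same fraction k of the *current* risk, so one unit of effort halves
# the risk whether it starts at 10% or 0.1%.
def remaining_risk(initial_risk: float, effort_units: int, k: float = 0.5) -> float:
    """Risk left after applying `effort_units` units of effort, each removing fraction k."""
    return initial_risk * (1 - k) ** effort_units

for r0 in [0.10, 0.01, 0.001]:  # 10%, 1%, 0.1% starting risk
    print(f"start {r0:.3%} -> after one unit of effort {remaining_risk(r0, 1):.3%}")

# The worry in the comment above is that k probably isn't constant in practice:
# once risk is already very low (say 0.01-1%), the same effort may remove a much
# smaller fraction of it.
```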
This is really interesting and I’d like to hear more. Feel free to just answer the easiest questions:
Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia?
What kinds of specialisation do you think we’d want—subject knowledge? Along different subject lines from those in academia?
Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?
What do you see as the current route into longtermist research? It seems like entry-level research roles are relatively rare, and generally need research experience. Do you think this is a good model?
I’d really like to see “If causes differ astronomically in EV, then personal fit in career choice is unimportant”
Thanks for writing this. I’d love to see your napkin math
Thanks for the answer.
Will MacAskill mentioned in this comment that he’d ‘expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.’
You’re a good forecaster, right? Does it seem right to you that a panel of good forecasters would come to something like Will’s view, rather than the median FHI view?
Thanks, those look good and I wasn’t aware of them
Yep—the author can click on the image and then drag from the corner to enlarge it (I found this difficult to find myself).
Yep! I assumed this kind of thing was the case (and obviously was just flagging it as something to be aware of, not trying to finger-wag)