Wow, lots of disagreement points, I’m curious what people disagree with.
Jack_S
Thanks for the post, this is definitely a valuable framing.
But I’m a bit concerned that the post creates the misleading impression that the whole catastrophic/speculative risk field is completely overwhelmed by AI x-risk.
Assuming you don’t believe that other catastrophic risks are completely negligible compared to AI x-risk, I’d recommend adding a caveat that the post is only comparing AI x-risk with existing, non-speculative risks. If you do think AI x-risk overwhelms other catastrophic risks, you should probably mention that too.
Although many IDev professors (my estimate: ~70%) are likely just poorly calibrated, and have no incentive to look into the cost-effectiveness of interventions, many who do know about CEAs might underestimate the cost for more considered reasons.
For the “cost to save the life of a child” question, an IDev policy expert might take a different perspective. In my IDev master’s, one prof in his 70s explained that, once you’ve already paid the fixed costs of getting into the decision-making process, it’s very often possible to find low-hanging-fruit policy changes that save more lives and cost less money (bottom-right quadrant in the picture below, taken from one of his classes).
I expect most EAs would be self-critical enough to see both of these as frequently occurring flaws in the movement, but I’d dispute the claim that they’re foundational. On the first criticism: some people track personal impact, and 80k talks a lot about your individual career impact, but people working for EA orgs are surely thinking of their collective impact as an org rather than anything individual. In the same way, ‘core EAs’ have the privilege of identifying with the movement enough that they can internalise the impact of the EA community as a whole.
As for measurability, I agree that it is a bias in the movement, albeit probably a necessary one. The ecosystem example is an interesting one: I’d argue that it’s not that difficult to approach ecosystem conservation from an EA perspective. We broadly understand how ecosystems work and how they provide measurable, valuable services to humans. A cost-effectiveness calculation would start from the human value of ecosystem services (which environmental economists routinely estimate) and, if you want to give inherent value to species diversity, add the number of species within a given area, the number of individuals of those species, their rarity and external value, and so on. Then apply weights to these criteria to produce something like an ‘ecosystem value per square metre’ that you could compare across ecosystems. Calculate what it costs to conserve various ecosystems around the world, and voila, you have a cost-effectiveness analysis that would feel at home on an EA platform. The reason this process doesn’t feel 100% EA is not that it’s difficult to measure, but that it can include value judgements that aren’t related to the welfare of conscious beings.
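To make that concrete, here’s a minimal sketch of what such a calculation could look like, in Python. All of the dollar values, species figures, weights, and names are invented placeholders for illustration, not real estimates, and this is just one way to structure the scoring.

```python
# Illustrative sketch of the ecosystem cost-effectiveness idea above.
# Every number and weight here is a made-up placeholder.

def ecosystem_value_per_m2(service_value_usd, n_species, mean_abundance,
                           rarity_weight, area_m2,
                           w_services=1.0, w_diversity=0.5):
    """Combine measurable ecosystem-service value with a (value-laden)
    species-diversity term into a single score per square metre."""
    services = service_value_usd / area_m2
    diversity = (n_species * mean_abundance * rarity_weight) / area_m2
    return w_services * services + w_diversity * diversity

ecosystems = {
    # name: (annual service value USD, species count, mean abundance,
    #        rarity weight, area in m2, annual conservation cost USD)
    "wetland_A": (2_000_000, 300, 50, 1.2, 5_000_000, 400_000),
    "forest_B":  (8_000_000, 900, 30, 1.0, 40_000_000, 1_500_000),
}

for name, (val, sp, ab, rw, area, cost) in ecosystems.items():
    score = ecosystem_value_per_m2(val, sp, ab, rw, area)
    print(f"{name}: value/m2 = {score:.3f}, "
          f"cost per unit value = {cost / (score * area):.3f}")
```

Note that everything value-laden ends up in the weights and the diversity term, which is exactly where the ‘not 100% EA’ feeling comes from.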
I suspected that, but it didn’t seem very logical. AI might swamp x-risk, but it seems unlikely to swamp our chances of dying young, especially if we use the model in the piece.
Although he says that he’s more pessimistic on AI than his model suggests, within the model his estimates are definitely in the range where other catastrophic risks would seriously change the results.
I did a rough estimate of nuclear war vs. natural risk using his very useful spreadsheet, loosely based on Rodriguez’s estimates: a 0.39% annual chance of a US-Russia nuclear exchange and a 50% chance of a Brit dying in it (I know some EAs have made much lower estimates, but this seems in line with the general consensus). In this model, nuclear risk comes out a bit higher than ‘natural’ risk over 30 years.
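For transparency, here’s the back-of-the-envelope arithmetic behind that, assuming the annual risk is constant and independent across years (the spreadsheet models this more carefully):

```python
# Rough 30-year risk from the estimates above; assumes a constant,
# independent annual probability, which is a simplification.

p_exchange_per_year = 0.0039    # 0.39% annual chance of a US-Russia exchange
p_death_given_exchange = 0.5    # 50% chance a given Brit dies in one
years = 30

p_death_per_year = p_exchange_per_year * p_death_given_exchange
p_death_30y = 1 - (1 - p_death_per_year) ** years
print(f"30-year chance of dying in a nuclear exchange: {p_death_30y:.1%}")  # ~5.7%
```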
Even if you’re particularly optimistic about other GCRs, if you add all the other potential catastrophic/speculative risks together (pandemics, non-existential AI risk, nuclear, nano, and others), I can’t imagine them not shifting the model.