Thanks for writing this, I had not seen any public posts on this topic before, and the loss of productivity considerations etc. are novel arguments to me.
I have no object-level comments, but a few meta-level ones:
1. As Denise mentioned, this post is very long, and I think it would benefit from being split into multiple shorter posts. In particular, the two strands of ‘preventing sexual violence within EA’ and ‘preventing sexual violence in the rest of the world’ seem sufficiently different, in both the arguments for their importance and the calls to action, that splitting them into two posts might add clarity. (Although they clearly share some backbone in the discussion of the effects and severity of sexual violence.)
2. I found the post structure not especially clear, and on multiple occasions I was somewhat confused about what exactly was being discussed (an example being the “Observations about sexual violence in the EA network” section). I also found the formatting a bit confusing, which made reading somewhat more challenging. I find writing lengthy posts like this very challenging myself, and I am not trying to claim any objective problems, just that I often found it difficult to keep track. (Note: since I read the post, a table of contents has been added, which should help.)
3. Whilst you were very careful to discuss the uncertainty when numbers were first introduced, I think you occasionally later used them in more ‘soundbite’ form without sufficient qualifiers (or at least fewer than I would feel comfortable with). (Examples are the ‘Inside EA: A 1:6 ratio means 7 rapes per 6 women on average.’ section and the “rough estimate of 103 − 607 male rapists in EA” quote, since these depend strongly on assumptions about the relationships between demographics and criminality etc.) This may just be a matter of taste, as, as I said, you already go to lengths to discuss the uncertainty, and I seem to favour much more discussion/labelling of uncertainty than average.
I think points 2 and 3 might somewhat explain why you seem to have felt that other commenters had not read the post.
Thanks for writing this up! This does seem to be an important argument not made often enough.
To my knowledge this has been covered a couple of times before, although not as thoroughly.
Once by the Oxford Prioritisation Project, though they approached it from the other end, instead asking “what absolute percentage x-risk reduction would you need to get for £10,000 for it to be as cost-effective as AMF?” and finding an answer of 4 x 10^-8%. I think your model gives £10,000 as reducing x-risk by 10^-9%, which fits with your conclusion of close to, but not quite as good as, global poverty.
Note that they use 5% before 2100 as their risk, and also do not consider QALYs, instead only looking at ‘lives saved’, which likely biases them against AMF, since it mostly saves children.
We also calculated this as part of the Causal Networks Model I worked on with Denise Melchin at CEA over the summer. The conclusion is mentioned briefly here under ‘existential effectiveness’.
I think our model was basically the same as yours, although we were explicitly interested in the chance of existential risk before 2050, and did not include probabilistic elements. We also tried to work in QALYs, although most of our figures were more bullish than yours. We used by default:
- 7% chance of existential risk by 2050, which in retrospect seems extremely high, but I think it was based on a survey from a conference.
- The world population in 2050 will be 9.8 billion, and each death will be worth −25 QALYs (so 245 billion QALYs at stake, very similar to yours).
- For the effectiveness of research, we assumed that 10,000 researchers working for 10 years would reduce x-risk by 1 percentage point (i.e. from 7% to 6%). We also (unreasonably) assumed each researcher year cost £50,000 (where I think the true number should be at least double that, if not much more).
Our model then had various other complicated effects, modelling both ‘theoretical’ and ‘practical’ x-risk based on government/industry willingness to use the advances, but these were second order and can mostly be ignored.
Ignoring these second-order effects, then, our model suggested it would cost £5 billion to reduce x-risk by 1 percentage point, which corresponds to a cost of about £2 per QALY. In retrospect this should be at least 1 or 2 orders of magnitude higher (increasing the researcher cost and decreasing the achievable x-risk reduction by an order of magnitude each).
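For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch using only the default figures quoted above (it is a reproduction of the headline numbers, not the actual Causal Networks Model, which included the second-order effects I mentioned):

```python
# Back-of-the-envelope version of the x-risk cost-effectiveness figures above.

# QALYs at stake if an existential catastrophe occurs by 2050
population_2050 = 9.8e9          # projected world population
qalys_per_death = 25             # QALYs lost per death in our model
qalys_at_stake = population_2050 * qalys_per_death   # ~245 billion QALYs

# Cost of the assumed research effort
researchers = 10_000
years = 10
cost_per_researcher_year = 50_000  # GBP; likely an underestimate, as noted
total_cost = researchers * years * cost_per_researcher_year  # £5 billion

# Assumed effect: x-risk falls by 1 percentage point (7% -> 6%)
risk_reduction = 0.01
expected_qalys_saved = qalys_at_stake * risk_reduction  # ~2.45 billion QALYs

cost_per_qaly = total_cost / expected_qalys_saved
print(f"£{cost_per_qaly:.2f} per QALY")  # ≈ £2.04
```

Bumping the researcher cost up and the risk reduction down by an order of magnitude each, as suggested, multiplies the result by 100, giving roughly £200 per QALY.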
I find your x-risk chance somewhat low, I think 5% before 2100 seems more likely. Your cost-per-percent to reduce x-risk also works out as much higher than the one we used, but seems more justified (ours was just pulled from the air as ‘reasonable sounding’).