Thanks, this back-and-forth is very helpful. I think I've got a clearer idea of what you're saying.
I think I disagree that it's reasonable to assume there will be a fixed N = 10^35 future lives regardless of whether the future ends up Malthusian. If it ends up not Malthusian, I'd expect the number of people in the future to be far below whatever maximum resource constraints impose, i.e. much less than 10^35.
So I think that changes the calculation of E[saving one life], without much changing E[preventing extinction], because you need to split out the cases where Malthusianism is true vs false.
E[saving one life] is 1 if Malthusianism is true, or some fraction of the future if Malthusianism is false; but if it's false, then we should expect the future to be much smaller than 10^35, so the EV will be much less than 10^35.
E[preventing extinction] is 10^35 if Malthusianism is true, and much less if it's false. But you don't need that high a credence in Malthusianism to get an EV of around 10^35.
So I guess all that is to say that I think your argument is right and action-relevant, except that I think the future is much smaller in non-Malthusian worlds, so there's a somewhat bigger gap than "just" 10^10. I'm not sure how much bigger. Here's a rough sketch of the comparison I have in mind.
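The snippet below is just a toy version of the split I'm describing, not anyone's actual numbers: the credence in Malthusianism, the size of a non-Malthusian future, and the "fraction of the future" attributable to one saved life are all placeholder assumptions.

```python
# Toy sketch of the expected-value split above.
# All numbers are illustrative assumptions, not claims from the discussion.

P_MALTHUSIAN = 0.5            # assumed credence that the future is Malthusian
N_MALTHUSIAN = 1e35           # future population if Malthusian (resource-limited max)
N_NON_MALTHUSIAN = 1e20       # assumed much smaller future if not Malthusian
SAVED_LIFE_FRACTION = 1e-10   # assumed fraction of a non-Malthusian future
                              # traceable to one extra present-day life

# Saving one life: worth ~1 life if Malthusian (the marginal person is
# effectively replaced anyway), or a small fraction of a smaller future if not.
ev_save_one_life = (
    P_MALTHUSIAN * 1
    + (1 - P_MALTHUSIAN) * SAVED_LIFE_FRACTION * N_NON_MALTHUSIAN
)

# Preventing extinction: worth the whole future either way, but the
# Malthusian case is so much larger that it dominates the EV.
ev_prevent_extinction = (
    P_MALTHUSIAN * N_MALTHUSIAN
    + (1 - P_MALTHUSIAN) * N_NON_MALTHUSIAN
)

print(f"E[saving one life]       ~ {ev_save_one_life:.2e}")
print(f"E[preventing extinction] ~ {ev_prevent_extinction:.2e}")
print(f"gap                      ~ {ev_prevent_extinction / ev_save_one_life:.2e}")
```

With these made-up numbers the gap comes out much larger than 10^10, because shrinking the non-Malthusian future shrinks E[saving one life] a lot while barely touching E[preventing extinction].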
What do you think about that?
Actually, computer science conferences are peer-reviewed. They play a role similar to that of journals in other fields. I think it's just a historical curiosity that conferences rather than journals are the prestigious places to publish in CS!
Of course, this doesn’t change the overall picture of some AI work and much AI safety work not being peer reviewed.