I think once you take account of diminishing returns and the non-robustness of the x-risk estimates, there’s a good chance you’d end up estimating that GiveWell’s cost per present life saved is lower than that of donating to x-risk. So the claim ‘neartermists should donate to x-risk’ seems likely wrong.
I agree with Carl that the US govt should spend more on x-risk, even just to protect its own citizens.
I think the typical person is not a neartermist, so they might well end up thinking x-risk is more cost-effective than GiveWell if they thought it through. Though that would depend a lot on which considerations you include.
From a pure messaging pov, I agree we should default to opening with “there might be an x-risk soon” rather than “there might be trillions of future generations”, since it’s the most important message and is more likely to be well-received. I see that as the strategy of The Precipice, and of pieces directly pitching AI x-risk. But I think it’s also important to promote longtermism independently, and/or to mention it as an additional reason to prioritise x-risk a few steps after opening with it.