Agree there’s something to your counterarguments 1-3, but I find the fourth less convincing, maybe more because of semantics than actual substantive disagreement. Why? A difference in net effect on x-risk reduction of 0.01% vs. 0.001% is pretty massive. These differences especially matter when the expected value is massive, because sometimes the same expected value holds across multiple areas. For example, preventing asteroid-related x-risk vs. AI vs. bio vs. runaway climate change: (by definition) all have the same EV if you take the arguments at face value. But the plausibility of each approach, and of individual interventions within each, would be pretty high variance.
I’m a bit uncertain as to what you are arguing/disputing here. To clarify on my end, my 4th point was mainly just saying “when comparing long-termist vs. near-termist causes, the concern over ‘epistemic and accountability risks endemic to long-termism’ seems relatively unimportant given [my previous 3 points and/or] the orders-of-magnitude difference in expected value between near-termism and long-termism.”
Your new comment seems to be saying that an order-of-magnitude uncertainty factor is important when comparing cause areas within long-termism, rather than when comparing overall long-termism with overall near-termism. I will briefly respond to that claim in the next paragraph. However, if your new comment is actually still arguing your original point (that the potential for bias is concerning enough to make the expected value of long-termism less than, or only roughly equal to, that of near-termism), I’m confused about how you reached that conclusion. Could you clarify which argument you are now making?
Regarding the comparisons among long-termist cause areas, I’ll just say one thing for now: some cause areas still seem significantly less important than others. For example, it might make sense for you to focus on x-risks from asteroids or other cosmic events if you have decades of experience in astrophysics and the field is currently undersaturated (although if you are a talented scientist, it might make sense to offer some intellectual support to AI, bio, or even climate work). However, the x-risk from asteroids is many orders of magnitude smaller than that from AI, and probably even smaller than that from biological threats. Thus, even an uncertainty factor that for some reason reduces your estimate of expected x-risk reduction via AI or bio work by a factor of 10 (e.g., from 0.001% to 0.0001%) without also affecting your estimate for work on asteroid safety will probably not change the direction of the inequality: if x exceeds y by orders of magnitude, then 0.1x > y still holds.
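To make that arithmetic concrete, here is a minimal worked example; the asteroid figure is a purely illustrative assumption rather than an estimate anyone in this thread has given:

\[
x = 0.001\% \;(\text{AI/bio work}), \qquad y = 0.00001\% \;(\text{asteroid work, illustrative})
\]
\[
x > y \quad\text{and}\quad 0.1\,x = 0.0001\% > 0.00001\% = y,
\]

so applying a tenfold penalty only to the AI/bio estimate leaves the ordering between the two cause areas unchanged.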