I just have to call out the amazing work by Rethink Priorities and those who funded this sequence of analyses (not sure who that is, would welcome info!): https://forum.effectivealtruism.org/s/WdL3LE5LHvTwWmyqj
I guess this might be the “last, properly funded EA analysis”, unless something came out after it that I missed (“last” in the sense that, going forward, funders seem to be doubling down on AI and may not rethink that decision in the near future)? My takeaway from this work by Rethink Priorities is that it is not at all unreasonable to focus on things other than AI, since going all in on AI seemed to require a set of quite extreme beliefs/assumptions. I would be happy to be corrected if this simple takeaway is overly naive.
What are the “extreme beliefs” you have in mind?
For me, the main ones are cubic or faster growth in value, and the assumption that the future will mostly carry very low risk, with only the present (or a few other periods) being times of extremely high risk. I see these assumptions as being in tension, since high value is often accompanied by high risk. I was also just made aware that even sending digital beings to far-away galaxies looks extremely expensive energy-wise, even if one keeps only the minimum power running during the multi-year journey between solar systems (I sketch the kind of arithmetic I have in mind below).

In essence, I feel that to justify these assumptions one would have to really dig into what they materially mean, and use historical precedent and careful analysis across a wide range of scenarios to see whether they hold up. For me this is more intuition, plus scepticism that enough work has been done to be confident in these assumptions.

To some degree, I also feel that AI safety was a direction where funders might get more of a feeling “of doing something” (something I have been guilty of myself). Just chipping away at the stubborn problems of poverty/global health or animal welfare will likely leave them “unsolved” even with billions more invested. Moreover, those causes lack novelty, and these “industries” are less prone to systemic change, while AI is new and one can hope for more systemic effects. Maybe this last point actually supports AI safety: it might be more tractable in that sense.

Sorry this was long and not underpinned by much analysis, so I would welcome any analysis on these points, especially analysis that might change my mind.
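To make the energy point concrete, here is the kind of back-of-envelope arithmetic I mean. Every number in it (payload mass, cruise speed, idle power, the household comparison) is my own illustrative assumption, not a figure from the Rethink Priorities sequence, and the kinetic term is a lower bound since it ignores fuel mass, deceleration, and engine inefficiency:

```python
# Back-of-envelope sketch of the interstellar energy cost. Every number
# here is an illustrative assumption, not a figure from the Rethink
# Priorities sequence.

C = 3.0e8            # speed of light, m/s
LY = 9.46e15         # one light-year, m
YEAR_S = 3.15e7      # one year, s

payload_kg = 1_000.0   # assumed mass of a probe carrying digital minds
cruise_v = 0.1 * C     # assumed cruise speed: 10% of lightspeed
distance_ly = 4.2      # roughly the distance to the nearest star system
idle_power_w = 100.0   # assumed minimum power to keep the payload running

# Kinetic energy alone, non-relativistic approximation. A lower bound:
# ignores fuel mass, deceleration, and engine inefficiency.
kinetic_j = 0.5 * payload_kg * cruise_v**2

travel_years = distance_ly * LY / cruise_v / YEAR_S
idle_j = idle_power_w * travel_years * YEAR_S

HOUSEHOLD_YEAR_J = 3.6e10   # ~10,000 kWh, a rough annual household electricity use

print(f"travel time: {travel_years:.0f} years")
print(f"kinetic energy: {kinetic_j:.1e} J, "
      f"~{kinetic_j / HOUSEHOLD_YEAR_J:.1e} household-years of electricity")
print(f"idle energy en route: {idle_j:.1e} J")
```

Even on these charitable assumptions, the kinetic term alone comes out at roughly ten million household-years of electricity for a one-tonne payload, and for intergalactic distances the travel time (and hence the idle-power term) scales up by a factor of several hundred thousand.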
I do think one issue people may be underrating is that we might just not bother with space colonization, if the distances and costs mean that no one on Earth will ever see significant material gain from it.
I think that, given a few generations of expansion to different stars in all directions, it is not implausible (i.e. at least a 25% chance) that X-risk becomes extremely low (i.e. under 1 in 100,000 per century once there are, say, 60 colonies with expansion plans, and a lot less once there are 1,000 colonies). After all, we’ve already survived a million years, most X-risks not from AI seem to apply mainly to single-planet civilizations, and the lightspeed barrier makes it hard for a risk to reach everywhere at once (the toy model below shows why independence does most of the work here). But I think I agree that thinking through this stuff is very, very hard, and I’m sympathetic to David Thorstad’s claim that if we keep finding ways current estimates of the value of X-risk reduction could be wildly wrong, at some point we should just lose trust in current estimates (see here for Thorstad making the claim: https://reflectivealtruism.com/2023/11/03/mistakes-in-the-moral-mathematics-of-existential-risk-part-5-implications/), even though I am a lot less confident than Thorstad is that very low future per-year risk is an “extreme” assumption.
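As a minimal toy model of the independence argument (the per-colony risk is made up for illustration, and full independence is of course the load-bearing assumption):

```python
# Toy model (numbers made up for illustration): if each colony
# independently faces probability q of being wiped out in a given
# century, civilization-level extinction that century requires every
# colony to fail, which happens with probability roughly q**n.
# Full independence is a strong assumption; correlated risks (e.g. an
# expanding misaligned AI) would break it.

def all_colonies_fail(q: float, n: int) -> float:
    """Probability that all n colonies fail in the same century,
    assuming fully independent per-colony risk q."""
    return q ** n

# Even a pessimistic 10%-per-century risk for each colony collapses fast:
for n in (1, 2, 5, 10, 60):
    print(f"{n:>3} colonies: {all_colonies_fail(0.1, n):.0e}")
```

On these made-up numbers the 1-in-100,000 threshold is already crossed at five colonies, and at 1,000 colonies the product underflows double precision entirely, so the real question is how correlated the risks are, not how many colonies there are.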
It is disturbing to me how much Thorstad’s work on this stuff seems to have been ignored by leading orgs; it is very serious work criticizing key assumptions that they base their decisions on, even if I personally think he tends to push points in his favour a bit far. I assume the same is true for the Rethink report you cite, although, unlike Thorstad’s short blog posts, it is long and complicated enough that I haven’t read any of it.
Actually, reading this again, I think maybe you have a point about the complexity of arguments/assumptions. Not sure if it is Occam’s Razor exactly, but if one has to contort an argument into something weird and winding, resting on unusual assumptions, maybe that strained attempt at something like “rationalization” should be a warning flag. That said, the world is complex and unpredictable, so perhaps reasoning about it has to be complex too; I guess this is an age-old debate with no clear answer!
Animal welfare, on the other hand, seems extremely easy to argue is important. Global poverty is a little less so, but still easier than x-risk (the debate there is more about whether handing out mosquito nets beats economic growth, democracy, human rights, etc.).