1) Regardless of who is right about when AGI might arrive (and bear in mind that we still have no proper definition for it), OP is right to call for more peer-reviewed scrutiny from people who are outsiders to both EA and AI.
This is just healthy, and regardless of whether that peer-reviewed scrutiny reaches the same or different conclusions, NOT doing it automatically provokes legitimate fears that the EA movement is biased because so many of its members have personal (and financial) stakes in AI.
See this point of view by Shazeda Ahmed: https://overthinkpodcast.com/episode-101-transcript She’s an information scholar who has studied AI and its links with EA, and she is one of the critics pointing to the lack of a counter-narrative.
I, for one, tend to be skeptical of conclusions reached by a small pool of people who are similar demographically, economically, and in the way they approach an issue, because it feels like a missed opportunity for genuine debate and different perspectives.
I take the point that these are technical discussions, which makes it difficult to involve the general public in the debate, but not doing so creates the appearance (and often, more worryingly, the reality) of bias.
This can harm the EA movement as a whole (from my perspective it already does).
I’d love to see a more vocal and organised opposition that is empowered, respected, and funded to genuinely test assumptions.
2) Studying and devoting resources to preparing the world for AI technology doesn’t seem like a bad idea, given that low probability × massive stakes can still justify a large resource allocation.
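To make that concrete, here is a toy expected-value calculation (the numbers are purely illustrative and are not taken from anyone’s actual estimates): suppose a 1% chance of a catastrophe costing on the order of a billion lives. Then
$$\mathbb{E}[\text{loss}] = 0.01 \times 10^{9} = 10^{7} \text{ expected lives lost},$$
so on a naive expected-value view, even a low-probability risk can warrant more spending than many higher-probability but lower-stakes problems.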
But, as OP seems to suggest, it becomes an issue when that focus is so prevalent that other issues which are just as important (and just as likely) get neglected as a result. EA’s emphasis on rationality and ‘counterfactuality’ should, if anything, push it to encourage people to work in fields that are truly neglected.
But can we really say that AI is still neglected, given the massive outpouring of both private and public money into the sector? It is now very fashionable to work in AI, and a widespread belief is that doing so warrants a comfortable salary. Can we say the same thing about, say, the threat of nuclear annihilation, or biosecurity risk, or climate adaptation?
3) In response to the argument that ‘even a false alarm would still produce valuable governance infrastructure’: yes, but at what cost? I don’t see much discussion of whether all those resources would be better spent elsewhere.
Working on AI isn’t the same as doing EA work on AI to reduce X-risk. Most people working in AI are just trying to make AI systems more capable and reliable. There is probably a case for saying that “more reliable” is actually EA X-risk work in disguise, even if unintentionally, but it’s definitely not obvious that this is true.
I agree, though I think the large reduction in EA funding for non-AI GCR work is not optimal (but I’m biased given my ALLFED association).
How much reduction in funding for non-AI global catastrophic risks has there been…?
I’m not sure exactly, but ALLFED and GCRI have had to shrink, and ORCG, Good Ancestors, Global Shield, EA Hotel, the Institute for Law & AI (renamed from the Legal Priorities Project), etc. have had to pivot to approximately all AI work. SFF is now almost all AI.
That’s deeply disturbing.