Working on AI isn't the same as doing EA work on AI to reduce X-risk. Most people working in AI are just trying to make AI more capable and reliable. There's arguably a case that "more reliable" is EA X-risk work in disguise, even if unintentionally, but it's definitely not obvious this is true.
I agree, though I think the large reduction in EA funding for non-AI GCR work is suboptimal (though I'm biased, given my ALLFED association).
How much reduction in funding for non-AI global catastrophic risks has there been?
I'm not sure exactly, but ALLFED and GCRI have had to shrink, and ORCG, Good Ancestors, Global Shield, EA Hotel, the Institute for Law & AI (renamed from the Legal Priorities Project), etc. have had to pivot to approximately all AI work. SFF is now almost all AI.
That's deeply disturbing.