I think you make a lot of good points as to why other causes should not have their funding reduced that much. But I didn’t see you make the point that nuclear and pandemic risks in particular could increase because of AI, so the case for funding them remains relatively strong. So maybe a compromise is reducing funding for global poverty/​animal welfare/​climate projects that have long timelines for impact, increasing funding for AI, and maintaining it for nuclear and pandemic? My understanding of what is happening now is that global poverty/​animal welfare funding is being maintained, but non-AI x-risk funding has fallen dramatically.
Thanks for the helpful addition. I’m not an expert in the x-risk funding landscape, so I’ll defer to you. Sounds like your suggestion could be a sensible one on cross-cause prio grounds. It’s possible that this dynamic illustrates a different pitfall of only making prio judgments at the level of big cause areas. If we lump AI in with other x-risks and hold cause-level funding steady, funding between AI and non-AI x-risks becomes zero sum.