Yes—if your timelines are short, then everything starts to look like it flows through a single bottleneck: there actually still being a world in 2030 (or 2028, or 2025..), which requires a global moratorium on AGI, as there is not enough time left for alignment to be solved otherwise. There are a few people/orgs now working on this. Not sure which is the absolute best in terms of bang for buck though.
All of these are new (post-GPT-4): Centre for AI Policy, Artificial Intelligence Policy Institute, PauseAI, Stop AGI, Campaign for AI Safety, Holly Elmore, Safer AI, Stake Out AI.

Also pre-existing: Centre for AI Safety, Future of Life Institute.