I think others at 80k are best placed to answer this (for time zone reasons I’m most active in this thread right now), but for what it’s worth, I’m worried about the loss at the top of the EA funnel! I think it’s worth it overall, but I think this is definitely a hit.
That said, I’m not sure AI risk has to be abstract or speculative! AI is everywhere; I think it feels very real to some people (realer to some than to others), and the problems we’re encountering are rapidly becoming less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!)
One question I have is how much it will be the case in the future that people looking for a general “doing good” framework will in fact bounce off of the new 80k. For instance, it could be the case that AI is so ubiquitous that it would feel totally out of touch to not be discussing it a lot. More compellingly to me, I think it’s 80k’s job to make the connection; doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good.
I like your take on reputation considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau, but I think the evidence is strong enough to care deeply about this and prioritize it, and I don’t want to obscure that we believe that for the reputational benefit.