I don’t think the scale or expected value affects this strategy question directly. You still just use whichever strategy is most likely to achieve the goal.
If the goal is something with really widespread agreement behind it, that probably leans you towards an uncompromising, radical-ask approach. Things seem to be going pretty well for AI safety in that respect, though I don’t know that it’s been established that people are buying into the high-probability-of-doom arguments all that much. I suspect we are much less far along than the climate change movement on that front, for example. And even if support were much greater, I still wouldn’t agree with a lot of this post.
Oh, my expertise is in animal advocacy, not AI safety, FYI.