When comparing the impact of time-buying vs direct work, the probability of success for both activities is discounted by the number of people pushing capabilities. So that factor cancels out, and you don’t need to think about the number of people in opposition.
Time-buying (slowing down AGI development) seems more directly opposed to the interests of those pushing capabilities than working on AGI safety.
If the alignment tax is low (to the tune of an open-source Python package that just lets you do “pip install alignment”) I expect all the major AGI labs to be willing to pay it. Maybe they’ll even thank you.
On the other hand, asking people to hold off on building AGI (though I agree there are more and less clever ways to do it in practice) seems to scale poorly, especially with the number of people wanting to do AGI research, and to a lesser degree with the number of people doing AI/ML research in general. Or even non-researchers whose livelihoods depend on such advancements. At the very least, I do not expect the effort needed to persuade people to be constant with respect to the number of people with a stake in AGI development.
Fair points. On the third hand, the more AGI researchers there are, the more “targets” there are for important arguments to reach, and the higher the impact systematic AI governance interventions will have.
At this point, I seem to have lost track of my probabilities somewhere in the branches; let me try to go back and find them...
Good discussion, ty. ^^