And just to dive into some of these dynamics:
AI might help us develop more aligned AI, creating a correlation between the arrival dates of AGI and an alignment solution
The same correlation might arise if new ways of looking at AGI open up more novel and innovative avenues in alignment research (though this effect would kick in earlier)
Progress in alignment is very likely to translate directly into AGI progress (see the Pragmatic AI Safety agenda, OpenAI, and DeepMind)
An actual AI takeover would diminish humanity’s ability to come up with an alignment solution, though the transformative AI (TAI) would probably want to solve the alignment problem for next-gen AGI itself
Takeoff speeds will of course significantly affect these dynamics
These are just off the top of my head; I’d be curious to hear more.