In the “a case for hope” section, it looks like your example analysis assumes that the “AGI timeline” and “AI safety timeline” are independent random variables, since your equation describes sampling from them independently. Isn’t that really unlikely to be true?
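To make the contrast concrete (using \(T_{\text{safety}}\) and \(T_{\text{AGI}}\) as placeholder symbols rather than the post's own notation), the independent version amounts to something like

$$P(\text{alignment solved in time}) = P(T_{\text{safety}} < T_{\text{AGI}}) = \int_0^{\infty}\!\int_0^{a} f_{\text{safety}}(s)\, f_{\text{AGI}}(a)\, \mathrm{d}s\, \mathrm{d}a,$$

whereas the dynamics listed below would replace the product \(f_{\text{safety}}(s)\, f_{\text{AGI}}(a)\) with a genuinely joint density \(f(s, a)\).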
And just to dive into some of these dynamics:
AI might help us develop more aligned AI, leading to a correlation between the two timelines' arrival dates
The same correlation might arise from new ways of looking at AGI opening up more novel and innovative avenues in alignment research (though this effect would kick in earlier)
Progress in alignment is very likely to translate directly into AGI progress (see the Pragmatic AI Safety agenda, OpenAI, and DeepMind)
An actual AI takeover would diminish humanity's ability to come up with an alignment solution, though the TAI itself would probably want to solve the alignment problem for next-generation AGI
Takeoff speeds will of course significantly affect these dynamics
That's just off the top of my head; I'd be curious to hear more.
Indeed, I think there are a lot of dynamics that might arise from the interaction of these two timelines. That is one of the reasons the equation is used solely to illustrate the point that we might be able to calculate this probability if we can model those dynamics. We hope to use our reports to dive ever deeper into the question of how to properly analyze our progress. An upcoming post will go into more detail on this.
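As a very rough sketch of the kind of calculation we have in mind, here is a minimal Monte Carlo comparison of the independent and correlated cases. Everything in it (the lognormal shapes, the parameter values, the shared "pace of AI progress" factor and its loading) is an illustrative placeholder, not our actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative placeholders only: years until AGI and years until an
# adequate alignment solution, each modelled as lognormal.
mu_agi, sigma_agi = np.log(20), 0.5    # made-up: median ~20 years to AGI
mu_safe, sigma_safe = np.log(25), 0.6  # made-up: median ~25 years to a solution

# Case 1: independent timelines (what the example equation assumes).
t_agi = rng.lognormal(mu_agi, sigma_agi, n)
t_safe = rng.lognormal(mu_safe, sigma_safe, n)
p_independent = np.mean(t_safe < t_agi)

# Case 2: correlated timelines. Both log-timelines load on a shared latent
# "pace of AI progress" factor, which induces a correlation of lam**2
# (about 0.49 here) between them.
lam = 0.7  # made-up loading on the shared factor
z_shared = rng.standard_normal(n)
z_agi = lam * z_shared + np.sqrt(1 - lam**2) * rng.standard_normal(n)
z_safe = lam * z_shared + np.sqrt(1 - lam**2) * rng.standard_normal(n)
t_agi_cor = np.exp(mu_agi + sigma_agi * z_agi)
t_safe_cor = np.exp(mu_safe + sigma_safe * z_safe)
p_correlated = np.mean(t_safe_cor < t_agi_cor)

print(f"P(alignment solved before AGI), independent: {p_independent:.3f}")
print(f"P(alignment solved before AGI), correlated:  {p_correlated:.3f}")
```

Even with these made-up numbers, the two estimates differ, and the gap grows with stronger coupling; the point is that the correlation structure, not just the marginal timelines, drives the answer.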