Thanks for your kind words, and for reading.
Thanks for pointing out these pieces. I like the breakdown of the different dimensions of long-term vs. near-term.
Broadly, I agree with you that the document could benefit from more discussion of premise 5. I’ll consider revising to add some.
I’m definitely concerned about misuse scenarios too (and I think lines here can get blurry—see e.g. Katja Grace’s recent post); but I wanted, in this document, to focus on misalignment in particular. The question of how to weigh misuse vs. misalignment risk, and how the two are similar/different more generally, seems like a big one, so I’ll mostly leave it for another time (one big practical difference is that misalignment makes certain types of technical work more relevant).
Eventually, the disempowerment has to scale to ~all of humanity (a la premise 5), so it would qualify as TAI in the “transition as big a deal as the Industrial Revolution” sense. However, it’s true that my timelines condition in premise 1 (e.g., APS systems become possible and financially feasible) is weaker than Ajeya’s.