Do you have the intuition that, absent further technological development, human values would drift arbitrarily far?
Certainly not arbitrarily far. I also think that technological development (esp. the emergence of agriculture and modern industry) has played a much larger role in changing the world over time than random value drift has.
[E]ven non-extinction AI is enabling a new set of possibilities that modern-day humans would endorse much less than the decisions of future humans otherwise.
I definitely think that’s true. But I also think that was true of agriculture, relative to the values of hunter-gatherer societies.
To be clear, I’m not downplaying the likelihood or potential importance of any of the three crisper concerns I listed. For example, I think that AI progress could conceivably lead to a future that is super alienating and bad.
I’m just (a) somewhat pedantically arguing that we shouldn’t frame the concerns as being about a “loss of control over the future” and (b) suggesting that you can rationally have all these same concerns even if you come to believe that technical alignment issues aren’t actually a big deal.