> But we also have strong reasons to trust that some process designed our cooperative instincts to allow groups of humans to cooperate effectively.

"[A]llowing [small] groups of humans to cooperate effectively" is very far from "making the far future better, impartially speaking". I'd be interested in your responses to the arguments here.
> I also think that many individuals need to decide how to make their lives go well in pretty confusing circumstances. Imagine deciding whether to immigrate to America in the 1700s, or how to live in the shadow of the Cold War, or whether to genetically engineer your children.

First, it's not clear to me that these people weren't clueless (i.e. that they really had more reason to choose whatever they chose than the alternatives), depending on how long a time horizon they were aiming to make go well.

Second, insofar as we think these people's choices were justified, I don't see why you think their instincts gave them such justification. Why would these instincts track unprecedented consequences so well?
> and the net value of the things that it's done so far may well be dominated by what updates it makes based on that experience

I don't think "may well" gets us very far. Can you say more about why this hypothesis is so much more likely than, say, "the dominant impacts are the damage that's already been done", or "the dominant impacts will come from near-future decisions, made by actors who are still too ignorant about the extremely complex system they're intervening in"?
> do you think that, if we had a theory of sociopolitics that was about as good as 20th-century economics, then we wouldn't be clueless about how to do sociopolitical interventions (like founding AI safety movements) effectively?

No, because I think "founding AI safety movements that succeed at making the far future go better" is a pretty out-of-distribution kind of sociopolitical intervention.