Thanks for the explanation. I agree it’s possible that smarter people could coordinate better and produce better outcomes for the world. I did recognise in my original post that one factor suggesting the future could be better is that, as people get richer and have their basic needs met, it becomes easier to be altruistic. I find that argument very plausible; it was the asymmetry argument I found unconvincing.
FWIW, I’m fine with others disagreeing with my view. It would be great to find out I’m wrong and that there is more evidence to suggest the future is rosier in expectation than I had originally thought. I just wanted people to let me know if there was a logical error or something in my original post, so thank you for taking the time to explain your thinking (and for retracting your disagreement on further consideration).
I think it’s healthy to be happy about disagreeing with other EAs about something. Either it means you can outperform them, or it means you’re misunderstanding something. But if you believed the same thing as everyone else, you certainly wouldn’t be outperforming them. : )
I think the future depends to a large extent on what the people in control of extremely powerful AI end up doing with it, conditional on humanity surviving the transition to that era. We should probably speculate about what we would want those people to do, and try to prepare authoritative, legible documents that such people will be motivated to read.