I upvoted but disagreed. I have a rosier view of plausible future worlds where people are as selfish as they are now, just smarter. They’d be coordinating better, and be more wisely selfish, which means they’d benefit the world more in order to benefit from trade. I admit it could go either way, however: they could just selfishly want factory-farmed meat, with the torture as a byproduct.
I realise that this view doesn’t go against what you say at all, so I retract my disagreement.
(I should mention that the best comments are always the ones that are upvoted but disagreed with, since those tend to be the most informative or most needed. ^^)
Thanks for the explanation. I agree it’s possible that smarter people could coordinate better and produce better outcomes for the world. I did recognise in my original post that a factor suggesting the future could be better was that, as people get richer and have their basic needs met, it’s easier to become altruistic. I find that argument very plausible; it was the asymmetry one I found unconvincing.
FWIW, I’m fine with others disagreeing with my view. It would be great to find out I’m wrong and that there is more evidence to suggest the future is rosier in expectation than I had originally thought. I just wanted people to let me know if there was a logical error or something in my original post, so thank you for taking the time to explain your thinking (and for retracting your disagreement on further consideration).
I think it’s healthy to be happy about disagreeing with other EAs about something. Either it means you can outperform them, or it means you’re misunderstanding something. But if you believe the same thing as everyone else, then you for sure aren’t outperforming them. : )
I think the future depends to a large extent on what the people in control of extremely powerful AI end up doing with it, conditional on humanity surviving the transition to that era. We should probably speculate on what we would want those people to do, and try to prepare authoritative and legible documents that such people will be motivated to read.