Gotcha, so if I understand correctly, you’re more so leaning on uncertainty for being mostly indifferent rather than on thinking you’d actually be indifferent if you understood exactly what would happen in the long run. This makes sense.
(I have a different perspective on high-stakes decision making under uncertainty, and I don’t personally feel sympathetic to this sort of cluelessness perspective, either as a heuristic in most cases or as a terminal moral view. See also the CLR work on cluelessness. Separately, my intuitions around cluelessness imply that, to the extent I put weight on this, when I’m clueless I get more worried about the unilateralist’s curse and downside risk, which you don’t seem to put much weight on, though just rounding all kinda-uncertain long run effects to zero isn’t a crazy perspective.)
On the galaxy brained point: I’m sympathetic to arguments against being too galaxy brained, so I see where you’re coming from there, but from my perspective, I was already responding to an argument which is one galaxy brain level deep.
I think the broader argument that AI takeover is bad from a longtermist perspective is not galaxy brained, and the specialization of this argument to your flavor of preference utilitarianism also isn’t galaxy brained: you have some specific moral views (in this case about preference utilitarianism), and all else equal you’d expect humans to share these moral views more than AIs that end up taking over despite their developers not wanting the AI to take over. So (all else equal) this makes AI takeover look bad, because if the beings in control share your preferences, more good stuff will happen.
Then you made a somewhat galaxy brained response to this about how you don’t actually care about shared preferences due to preference utilitarianism (because after all, you’re fine with any preferences, right?). But I don’t think this objection holds, because there are a number of (somewhat galaxy brained) reasons why specifically optimizing for preference utilitarianism and related things may greatly outperform control by beings with arbitrary preferences.
From my perspective, the argument looks sort of like:
1. A non galaxy brained argument for AI takeover being bad.
2. A somewhat galaxy brained rebuttal by you, about preference utilitarianism meaning you don’t actually care about this sort of preference-similarity case for avoiding nonconsensual AI takeover.
3. My somewhat galaxy brained response, which is only galaxy brained substantially because it’s responding to a galaxy brained perspective about details of the long run future.
I’m sympathetic to cutting off at an earlier point and rejecting all galaxy brained arguments. But I think the preference utilitarian argument you’re giving is already quite galaxy brained and sensitive to details of the long run future.
I’m sympathetic to cutting off at an earlier point and rejecting all galaxy brained arguments.
As am I. At least when it comes to the important action-relevant question of whether to work on AI development, in the final analysis, I’d probably simplify my reasoning to something like, “Accelerating general-purpose technology seems good because it improves people’s lives.” This perspective roughly guides my moral views on not just AI, but also human genetic engineering, human cloning, and most other potentially transformative technologies.
I mention my views on preference utilitarianism mainly to explain why I don’t particularly value preserving humanity as a species beyond preserving the individual humans who are alive now. I’m not mentioning it to commit to any form of galaxy-brained argument that I think makes acceleration look great for the long-term. In practice, the key reason I support accelerating most technology, including AI, is simply the belief that doing so would be directly beneficial to people who exist or who will exist in the near-term.
And to be clear, we could separately discuss what effect this reasoning has on the more abstract question of whether AI takeover is bad or good in expectation, but here I’m focusing just on the most action-relevant point that seems salient to me, which is whether I should choose to work on AI development based on these considerations.