I’m sympathetic to cutting off at an earlier point and rejecting all galaxy-brained arguments.
As am I. At least when it comes to the important action-relevant question of whether to work on AI development, in the final analysis, I’d probably simplify my reasoning to something like, “Accelerating general-purpose technology seems good because it improves people’s lives.” This perspective roughly guides my moral views on not just AI, but also human genetic engineering, human cloning, and most other potentially transformative technologies.
I mention my views on preference utilitarianism mainly to explain why I don’t particularly value preserving humanity as a species beyond preserving the individual humans who are alive now. I’m not mentioning it to commit to any form of galaxy-brained argument that I think makes acceleration look great for the long-term. In practice, the key reason I support accelerating most technology, including AI, is simply the belief that doing so would be directly beneficial to people who exist or who will exist in the near-term.
And to be clear, we could separately discuss what effect this reasoning has on the more abstract question of whether AI takeover is bad or good in expectation, but here I’m focusing just on the most action-relevant point that seems salient to me, which is whether I should choose to work on AI development based on these considerations.