Do you think non-altruistic interventions for AI alignment (i.e. AI safety “prepping”) make sense? If so, do you have suggestions for concrete actions to take, and if not, why do you think they don’t make sense?
(Note: I previously asked a similar question of someone else, but I am curious to hear Buck's thoughts on this.)
I don’t think you can prep that effectively for x-risk-level AI outcomes, obviously.
I think you can prep for various transformative technologies. For example, you could buy shares of computer hardware manufacturers if you expect them to be worth more as AI-driven productivity increases the value of computation. I haven't thought much about this, and I'm sure it's dumb for some reason, but maybe you could also try to buy land in cheap places in the hope that it becomes extremely valuable in a transhuman utopia. The property rights might not carry through, but the gamble might be worth it for sufficiently cheap land: the bet only needs a small chance of a huge payoff to have positive expected value, as in the sketch below.
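To make the "sufficiently cheap land" intuition concrete, here is a minimal expected-value sketch. All the numbers (the cost, the probability, the payoff multiple) are hypothetical placeholders I'm making up for illustration, not estimates anyone has endorsed:

```python
# Minimal expected-value sketch for the cheap-land gamble.
# All numbers below are hypothetical placeholders, not recommendations.

def land_gamble_ev(cost: float, p_payoff: float, payoff_multiple: float) -> float:
    """Expected profit from buying land that either ends up worthless
    (no utopia, or property rights don't carry through) or ends up worth
    payoff_multiple times its purchase cost."""
    return p_payoff * (payoff_multiple * cost) - cost

# Example: $1,000 of land with a (made-up) 1% chance of being worth 1,000x.
# EV = 0.01 * 1,000,000 - 1,000 = $9,000, so under these assumptions the
# gamble is positive expected value despite usually losing.
print(land_gamble_ev(cost=1_000, p_payoff=0.01, payoff_multiple=1_000))
```

The point is just that with a cheap enough purchase price, even a small probability of an extreme payoff can dominate the expected value; whether the real-world probability and payoff justify it is exactly the part I haven't thought through.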
I think it’s probably at least slightly worthwhile to do good and hope that you can sell some of your impact certificates after good AI outcomes.
You should ask Carl Shulman, I’m sure he’d have a good answer.