Some of them have low p(doom from AI) and aren’t longtermists, which justifies the high-level decision of working on an existentially dangerous technology with sufficiently large benefits.
I do think their actual specific actions are not commensurate with the level of importance or moral seriousness that they claim to attach to their work, given those stated motivations.
Thanks for the comment, Linch! Just to spell this out:
“Some of them have low p(doom from AI) and aren’t longtermists, which justifies the high-level decision of working on an existentially dangerous technology with sufficiently large benefits.”
I would consider this acceptable if their p(doom) were below ~0.01%.
I find this pretty hard to believe, TBH.
“I do think their actual specific actions are not commensurate with the level of importance or moral seriousness that they claim to attach to their work, given those stated motivations.”
I’m a bit confused by this part. Are you saying that the importance/seriousness these people claim their work has is not reflected in the actions they actually take? In what way are you saying they do this?
“I would consider this acceptable if their p(doom) were below ~0.01%.”
“I find this pretty hard to believe, TBH.”
I dunno man, at least some people think that AGI is the best path to curing cancer. That seems like a big deal! If you aren’t a longtermist at all, speeding up the cure for cancer by a year is probably worth quite a bit of x-risk.
“Are you saying that the importance/seriousness these people claim their work has is not reflected in the actions they actually take?”
Yes.
“In what way are you saying they do this?”
Lab people shitpost, lab people take competition extremely seriously (if your actual objective is “curing cancer,” it seems a bit discordant to be that worried that Silicon Valley Startup #2 will cure cancer before you), people don’t take infosec seriously at all, there was the bizarre cultish behavior after the Sam Altman firing, and so forth.