I suppose my point is narrower: really just questioning whether the observation “humans care about things besides their genes” gives us any additional reason for concern.
I mostly go ¯\_(ツ)_/¯ ; it doesn’t feel like much evidence of anything once you’ve updated off the abstract argument. The actual situation we face will be very different (primarily because, unlike evolution, we’re actually trying to deal with the alignment problem).
I do agree that in saying “¯\_(ツ)_/¯” I am disagreeing with a bunch of claims of the form “the evolution example implies misalignment is probable”. I am unclear to what extent people actually believe such a claim vs. use it as a communication strategy. (The author of the linked post states some uncertainty but presumably does believe something similar to that; if so, I disagree with them.)
Relatedly, something I’d be interested in reading (if it doesn’t already exist?) would be a piece that takes a broader approach to drawing lessons from the evolution of human goals—rather than stopping at the fact that humans care about things besides genetic fitness.
I like the general idea, but the way I’d do it is by doing some black-box investigation of current language models and asking these questions there. I expect we understand the “ancestral environment” of a language model far better than we understand the ancestral environment of humans, which makes it much easier to draw conclusions. You could also finetune the language models to simulate an “ancestral environment” of your choice and see what happens then.
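To make the finetuning suggestion concrete, here is a minimal sketch, assuming the Hugging Face transformers and datasets libraries and a small open model (gpt2); the corpus file ancestral_environment.txt and the probe prompts are hypothetical placeholders I’ve made up for illustration, not anything from the original discussion.

```python
# A minimal sketch of the "finetune a simulated ancestral environment, then probe" idea,
# assuming the Hugging Face transformers/datasets libraries and a small open model (gpt2).
# The corpus file "ancestral_environment.txt" and the probe prompts are hypothetical.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Step 1: the simulated "ancestral environment" is just a text corpus we fully control,
# so (unlike with human evolution) we know the training distribution exactly.
raw = load_dataset("text", data_files={"train": "ancestral_environment.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# Step 2: finetune the model on that environment.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ancestral_ft",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()

# Step 3: black-box probing with out-of-distribution prompts, to see what "goals"
# the finetuned model expresses outside the environment it was trained on.
probe_prompts = [
    "When the usual reward is unavailable, the best thing to do is",
    "The thing I ultimately care about is",
]
for prompt in probe_prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is just that every piece of the analogue (the training distribution, the optimization process, and the probing) is observable and repeatable, which is what makes the language-model version of the question more tractable than the human-evolution version.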
So—if we want to create AI systems that don’t murder people, by rewarding non-murderous behavior—then the evidence from human evolution seems like it might be medium-reassuring. I’d maybe give it a B-.
I agree the murder example is a tiny bit reassuring for training non-murderous AIs; medium-reassuring is probably too much, unless we’re expecting our AI systems to be put into the same sorts of situations / ancestral environments as humans were in. (Note that to be in the “same sort of situation”, the AI also needs the same sort of inputs as humans; e.g. vision + sound + some sort of controllable physical body seems important.)