I don’t currently understand what area of work you’re trying to point out with this question. You might want to be more specific to get good answers.
Here are some different things which you might be trying to talk about:
1. From a philosophical perspective, when is ML itself unethical due to effectively causing the death of some agent? (Or replace ML with other selection techniques.)
2. If we have economic selection pressures over digital minds/AIs, what sorts of predictable problematic outcomes result?
3. If we select ML systems to achieve good results according to a lossy reward signal, we might run into issues (e.g. reward hacking); what can we do to resolve this?
For (2), you might be interested in the sort of discussion in "Age of Em", though I expect the situation is pretty different in the de novo AI case.
I've clarified the question; does it make more sense now?
Yes.