Ok so maybe my idea is just nonsense, but I think we could create super-smart humans who could then understand what AI is doing. Like, genetically engineer them, or put a machine in their brains that makes them super smart. So someone working on AI safety research isn't working on enhancing humans like this, and maybe they miss out on that opportunity, which causes relative (though not absolute) harm.