Re: feasibility of AI alignment research, Metaculus already has "Control Problem solved before AGI invented." Do you have a sense of what further questions would be valuable?
I don’t have anything ready offhand—I’d have to put serious thought into which questions sit at the most productive intersection of “resolvable,” “a good fit for Metaculus,” and “capturing something important.” Something about warning signs (“will an AI system steal at least $10 million?”) could be good.