From what I’ve read, moral uncertainty tends to work in favour of longtermists, provided you’re happy to do something like maximising expected choice-worthiness. See here, for example, for an argument that moral uncertainty about population axiology implies we should choose the options preferred by total utilitarianism (disclaimer: I’ve only read the abstract!). If Greaves and MacAskill’s claim that longtermism is robust to a range of moral views is fair, it seems longtermism should also remain fairly robust in the face of moral uncertainty.
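For concreteness, here is a minimal sketch of what maximising expected choice-worthiness (MEC) amounts to; the notation (credences and choice-worthiness functions) is my own illustrative shorthand, not taken from the paper:

```latex
% A minimal sketch of maximising expected choice-worthiness (MEC).
% Illustrative notation: T_1, ..., T_n are the moral theories you place
% some credence in, c_i is your credence in T_i, and CW_i(A) is how
% choice-worthy theory T_i rates option A (assuming the theories'
% choice-worthiness scales have been made comparable).
\[
  \mathrm{EC}(A) \;=\; \sum_{i=1}^{n} c_i \cdot CW_i(A),
  \qquad
  A^{*} \;=\; \operatorname*{arg\,max}_{A} \, \mathrm{EC}(A).
\]
% The relevance here: if totalist views assign very large choice-worthiness
% to options that improve the long-term future, those options can dominate
% the sum even when your credence in totalism is modest.
```

The usual caveat is that this requires the different theories’ choice-worthiness scales to be intertheoretically comparable, which is itself contested.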
As for complex cluelessness in a more empirical sense, I admit I haven’t properly considered the possibility that something like “researching AI alignment” may have realistic downsides. I do, however, find it a tougher sell that we’re complexly clueless about working on AI alignment in the same way that we are about giving to AMF.