I wonder what it would look like to make an argument for a candidate strictly across causes that have more EA consensus and Open Phil funding: candidate X is good for animal welfare, global health and development, and pandemic and AI catastrophic/existential risk.
On x-risks, quoting Zvi:
I think this is a good example of reasoning through cause areas, and one I had in mind.