Some questions that might be cruxy and important for money allocation:
- Given some evidence that superforecaster aggregates may underperform on AI capabilities questions, how should epistemic weight be distributed among generalist forecasters, domain experts, and algorithmic prediction models? What evidence exists, or could be gathered, about their relative track records? (One standard aggregation approach is sketched after this list.)
- Are there better ways to do cost-effectiveness analysis for AI safety (AIS CEA)? What are they? (A toy version of the usual format is the second sketch after this list.)
- Is there productive work to be done on inter-cause comparison among new potential cause areas (e.g. digital minds, space governance)? What assumptions do these comparisons rely on? I ask because people typically seem to enter these fields because "whoa, those numbers are really big," but that sort of reasoning applies to many such fields and tells you very little about how to distribute resources among them (see the third sketch after this list).
- What are the reputational effects for EA (among people inside and outside the movement) of going (more) all in on certain causes and then being wrong (e.g. if AI is and remains a bubble)? Should this change how far EA goes all in? Under what assumptions?
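On the first question: here is a minimal sketch of one standard way to combine forecasts, a weighted average in log-odds space. Everything in it is a placeholder; the forecasts are invented, and the weights are exactly the thing in dispute, which in practice would come from calibration data (e.g. Brier scores on resolved AI questions).

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def pool_forecasts(probs, weights):
    """Weighted average of probability forecasts in log-odds space."""
    total = sum(weights)
    z = sum((w / total) * logit(p) for p, w in zip(probs, weights))
    return inv_logit(z)

# Invented forecasts for some AI capabilities question:
probs = [0.15,  # generalist superforecaster aggregate
         0.45,  # domain-expert aggregate
         0.30]  # algorithmic prediction model

# Placeholder weights; choosing these from track-record evidence
# is the open problem the question is pointing at.
print(pool_forecasts(probs, weights=[0.3, 0.5, 0.2]))
```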
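On AIS CEA: the default format is some flavor of Monte Carlo expected-value model. The sketch below is a deliberately toy version, and every distribution in it is an illustrative assumption rather than an estimate of any actual intervention; it is here only to show the shape of the calculation that "better ways" would improve on.

```python
import random

def toy_ais_cea(n: int = 100_000, budget: float = 1e7) -> dict:
    """Toy Monte Carlo CEA: basis points of existential risk averted
    per dollar spent. All parameters are illustrative placeholders."""
    samples = []
    for _ in range(n):
        p_catastrophe = random.betavariate(2, 18)            # assumed baseline risk
        relative_reduction = random.lognormvariate(-6, 1.5)  # assumed grant effect
        risk_averted = p_catastrophe * min(relative_reduction, 1.0)
        samples.append(risk_averted * 1e4 / budget)          # bp per dollar
    samples.sort()
    return {"median": samples[n // 2],
            "mean": sum(samples) / n,
            "p90": samples[int(0.9 * n)]}

print(toy_ais_cea())
```

A "better way" might mean replacing these hand-picked distributions with structured models, handling model uncertainty explicitly, or not maximizing expected value over tiny probabilities at all.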
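And on the "those numbers are really big" point: with made-up magnitudes, two speculative causes can both look astronomically important while their ranking turns entirely on order-of-magnitude guesses that the big-numbers intuition does nothing to pin down.

```python
def ev(p_matters: float, scale: float, tractability: float) -> float:
    """Naive expected value; all three inputs are order-of-magnitude guesses."""
    return p_matters * scale * tractability

digital_minds = ev(p_matters=1e-3, scale=1e15, tractability=1e-6)  # 1e6
space_gov     = ev(p_matters=5e-4, scale=1e16, tractability=1e-7)  # 5e5
print(digital_minds > space_gov)  # True: digital minds "wins"

# Nudge one guess by a factor of four and the ranking flips:
space_gov = ev(p_matters=2e-3, scale=1e16, tractability=1e-7)      # 2e6
print(digital_minds > space_gov)  # False: space governance "wins"
```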