What do you think EA as a movement should do differently if we took seriously the views that (1) “≥10% probability of AGI in ≤10 years is crunch time”, (2) “crunch time means we should be doing a lot more than we are currently doing”, and (3) “‘≥10% probability of AGI in ≤10 years’ is true”?
This, plus most of the other AGI x-risk-related ideas in the FTX Future Fund project ideas comment thread. And just generally: allocate roughly 100-1000x more resources to the problem. Maybe that's too much for EA as it currently stands, but in an ideal world, more resources would be devoted to AI alignment than to AI capabilities.