In most endeavors, you expect to receive many nos before receiving a yes (e.g., applying to schools, jobs, publishing papers/books, starting startups, etc.). In EA, it's common for people to receive one no and give up.
I think this would only make sense in a field where talent/value were easy to spot and evaluate and where there were good feedback loops. But AI safety is far more like evaluating startup founders than evaluating bridge-builders.
Except even more difficult to evaluate, because at least with for-profit founders, you find out years later whether they made money! With ethics, you can't even tell if you're going in the right direction!
If that's the case, we should have more evaluators, so that fewer people slip through the cracks.
Good question! So, that’s important, but I’m less worried about this because:
All these donors were giving anyway. This just gives them more and better options to choose from.
Donors are only one step in the chain for the unilateralist's curse. If people fund a bad idea, it'll get ripped to shreds on the Forum :P
LTFF is also composed of fallible humans who might miss projects with large downside risks.
I’m far more worried about the bureaucrat’s curse in AI safety.
I discuss something similar in another comment thread here.