I don’t know of particular estimates. I do know that different (smart, reasonable, well-informed) people would give very different answers—at least one would even say that the marginal AI safety researcher has negative expected value.
Personally, I’m optimistic that even if you’re skeptical of AI safety research in general, you can get positive expected value. As a lower bound, you could do something like give money to particular researchers whose judgment you trust, so they can support researchers they think are promising.
My guess is that the typical AI-concerned community leader would say the odds are at least one in 10 billion per $1,000.