0.01% risk reduction at $1 billion would mean that $100 billion would reduce risk by 1%. That’s probably more money than is available to all of EA at the moment. I guess that wouldn’t seem like a good buy to me.
$100bn to reduce the risk by 100 basis points seems like a good deal to me, if you think you can model the risk like that. If I’ve understood that correctly, it would be the equivalent price of $10tn to avoid a certain extinction, which is less than 20% of global GDP. Bargain!
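To make the arithmetic in this exchange explicit, here is a minimal sketch; the ~$100tn world-GDP figure is my assumption, not something stated in the thread:

```python
# Back-of-envelope check of the figures quoted above.
cost_per_001pp = 1e9                 # $1B buys 0.01pp of risk reduction
cost_per_pp = cost_per_001pp * 100   # -> $100B per percentage point
cost_certain = cost_per_pp * 100     # -> $10T to avoid a certain extinction
global_gdp = 100e12                  # assumed: world GDP of roughly $100T

print(f"per percentage point: ${cost_per_pp / 1e9:.0f}B")   # $100B
print(f"full elimination:     ${cost_certain / 1e12:.0f}T") # $10T
print(f"share of world GDP:   {cost_certain / global_gdp:.0%}")  # 10%
```

At an assumed ~$100tn world GDP, the implied $10tn price tag comes out to about 10% of a year’s output, consistent with the “less than 20%” claim above.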
Some quick counterpoints:
- Average benefit ≠ marginal benefit. So if the first $10B buys really good stuff, the marginal $1B might be much, much worse (see the sketch after this list).
- We’ll probably have more money in the future.
- In worlds where we don’t have access to more money, we can choose not to scale up projects.
- I’d be especially excited if people submit projects to us across a fairly wide range of cost-effectiveness, so we can fund a bunch of things to start with and have the option of funding everything later if we have the money for it.
- There’s no fundamental law that says most x-risk is reducible; there may be a fair number of worlds where we’re either doomed or saved by default.
- But I’m also interested in modeling out the x-risk reduction potential of various projects, and dynamically adjusting the bar.
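Here is a toy illustration of the average-vs-marginal point and the “dynamically adjusted bar” idea together; all project names and numbers are hypothetical:

```python
# Toy model: fund projects in order of cost-effectiveness until either the
# budget runs out or the marginal project falls below a funding bar.
# All numbers are made up for illustration.

projects = [
    # (name, cost in $B, x-risk reduced in basis points)
    ("best project",      1, 10),   # 10  bp per $B
    ("good project",      3, 12),   #  4  bp per $B
    ("decent project",   10, 20),   #  2  bp per $B
    ("marginal project", 30, 15),   #  0.5 bp per $B
]

budget_bn = 20
bar = 1.0  # funding bar: require at least 1 bp of risk reduction per $B

spent, reduced_bp = 0.0, 0.0
# Greedily fund the most cost-effective projects first.
for name, cost, bp in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
    if bp / cost < bar or spent + cost > budget_bn:
        print(f"stopped at: {name} ({bp / cost:.1f} bp per $B)")
        break
    spent += cost
    reduced_bp += bp

print(f"spent ${spent:.0f}B for {reduced_bp:.0f} bp of risk reduction")
print(f"average: {reduced_bp / spent:.1f} bp per $B")  # looks great...
# ...but the *marginal* project buys far less per dollar, which is exactly
# why average cost-effectiveness overstates what the next $1B achieves.
```

Raising or lowering `bar` as budgets and estimates change is the “dynamically adjusting the bar” move: with more money, accept worse marginal projects; with less, cut them.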
The “By ‘0.01%’, do you mean ‘0.01pp’?” question might also loom here. A 0.01pp (percentage-point) reduction is a much better buy than a 0.01% (relative) reduction!
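To make that distinction concrete, a toy calculation; the 10% baseline risk is an assumed figure for illustration:

```python
# Percentage-point vs. relative reduction, from an assumed 10% baseline risk.
baseline = 0.10

pp_version = baseline - 0.0001              # 0.01pp off the absolute risk
relative_version = baseline * (1 - 0.0001)  # 0.01% off the existing risk

print(f"0.01pp reduction: {baseline:.4%} -> {pp_version:.4%}")       # 9.9900%
print(f"0.01%  reduction: {baseline:.4%} -> {relative_version:.4%}") # 9.9990%
# At a 10% baseline, the percentage-point version removes 10x as much
# absolute risk as the relative version, and the gap widens as the
# baseline shrinks (at a 1% baseline it would be 100x).
```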