Under what conditions would you consider making a grant directed towards catastrophic risks other than artificial intelligence?
We’re absolutely open to (and are all interested in) catastrophic risks other than artificial intelligence. The fund is the Long-Term Future Fund, and we believe that catastrophic risks are highly relevant to our long-term future.
Trying to infer the motivation for the question, I can add that in my own modelling, getting AGI right seems highly important and is the thing I’m most worried about. But I’m far from certain that one of the other catastrophic risks we face won’t be severe enough to threaten our existence, or to delay progress toward AGI until civilisation recovers. I expect that the fund will make grants to non-AGI risk reduction projects.
If the motivation for the question is more about how we will judge non-AI projects, see Habryka’s response for a general discussion of project evaluation.