You make a helpful point. We’ve focused on a pretty extreme claim, but there are more nuanced discussions in the area that we think are important. We do think that “AI might solve this” can take chunks out of the expected value of lots of projects (and we’ve started kicking around some ideas for analyzing this). We’ve also done some work about how the background probabilities of x-risk affect the expected value of x-risk projects.
I don’t think we can swap one general heuristic (e.g. AI futures make other work useless) for a more moderate one (e.g. AI futures reduce EV by 50%). The possibilities that “AI might make this problem worse” or “AI might raise the stakes of decisions we make now” can also amplify the EV of our current projects. Figuring out how AI futures affect cost-effectiveness estimates today is tricky, but necessary!