Yes. I would have been happy to say that, in general, I expect work of this type to be less likely to be useful than other research that does not try to predict the long-run future of humanity.
Sorry, I think I wasn’t clear. Let me make the case for the ex ante value of the Open Phil report in more detail:
1. Ex ante, it was plausible that the report would have concluded “we should not expect lots of growth in the near future”.
2. If the report had this conclusion, then we should update toward AI risk being much less important than we currently think. (I am not arguing that “lots of growth ⇒ transformative AI”; I am arguing the contrapositive of “transformative AI ⇒ lots of growth”, i.e. “not much growth ⇒ no transformative AI”.)
3. This would be a very significant update (especially for Open Phil); it would presumably lead them to put less money into AI and more into other areas.
4. Therefore, the report was ex ante quite valuable since it had a non-trivial chance of leading to major changes in cause prioritization.
Presumably you disagree with 1, 2, 3, or 4; I’m not sure which one.