Dear MichaelStJules and rohinmshah,
Thank you very much for all of these thoughts. They are very interesting, and I will have to read all of these links when I have the time.
I admit I did take the view that the EA community relies heavily on EV calculations somewhat based on vague experience, without doing a full assessment of the level of reliance (which would have been ideal), so the posted examples are very useful.
*
To clarify one point:
> If the post is against the use of quantitative models in general, then I do in fact disagree with the post.
I was not at all against quantitative models; most of the DMDU work consists of quantitative models. I was arguing against the overuse of quantitative models of a particular type.
*
To answer one question:
> would you have been confident that the conclusion would have agreed with our prior beliefs before the report was done?
Yes. I would have been happy to say that, in general, I expect work of this type is less likely to be useful than other research work that does not try to predict the long-run future of humanity. (This is in a general sense, not considering factors like the researchers' backgrounds, skills, and so forth.)
> Yes. I would have been happy to say that, in general, I expect work of this type is less likely to be useful than other research work that does not try to predict the long-run future of humanity.
Sorry, I think I wasn’t clear. Let me make the case for the ex ante value of the Open Phil report in more detail:
1. Ex ante, it was plausible that the report would have concluded “we should not expect lots of growth in the near future”.
2. If the report had this conclusion, then we should update that AI risk is much less important than we currently think. (I am not arguing that “lots of growth ⇒ transformative AI”; I am arguing that “not much growth ⇒ no transformative AI”.)
3. This would be a very significant and important update (especially for Open Phil). It would presumably lead them to put less money into AI and more money into other areas.
4. Therefore, the report was ex ante quite valuable since it had a non-trivial chance of leading to major changes in cause prioritization.
Presumably you disagree with 1, 2, 3 or 4; I’m not sure which one.
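To make the structure of that argument concrete, here is a minimal back-of-the-envelope sketch of the value-of-information reasoning behind points 1–4. Every number in it (the probability, the budget, the reallocation gain) is a made-up assumption for illustration, not a figure from the report or from Open Phil.

```python
# Back-of-the-envelope value-of-information sketch for points 1-4.
# Every number here is a made-up illustration, not a figure from the report.

p_low_growth_conclusion = 0.2   # point 1: assumed ex ante chance the report concludes "not much growth"
ai_risk_budget = 100_000_000    # hypothetical annual spending on AI risk (illustrative only)
reallocation_gain = 0.3         # assumed fractional gain in value if that money moved to other causes

# Points 2-3: if the report reaches the "not much growth" conclusion, we update
# that AI risk matters less and reallocate the budget to areas we now value more.
value_if_conclusion_holds = ai_risk_budget * reallocation_gain

# Point 4: the ex ante expected value of commissioning the report is the chance of
# the surprising conclusion times the value of acting on it.
expected_value_of_report = p_low_growth_conclusion * value_if_conclusion_holds

print(f"Expected value of the report (illustrative): ${expected_value_of_report:,.0f}")
# -> Expected value of the report (illustrative): $6,000,000
```

The only point of the sketch is that a non-trivial probability of a prioritization-changing conclusion can make the report worth commissioning ex ante, even if the most likely outcome is that it confirms what we already believed.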