I think “major insights” is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously accepted big-picture claims count as significant progress. Very early on, EA produced a number of arguments and considerations which felt like “major insights” in that they caused major swings in the consensus of which cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn’t expect future progress to take the form of “major insights” that wildly swing views about a basic, high-level question as much (although I still think that’s possible).
Since 2015, I think we’ve seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs giving later and “hinge of history” vs “patient” long-termism, etc. None of these have provided definitive or authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.
I’m not sure how to answer this. Taking into account the expected low-hanging-fruit effect and the relatively low investment in this research, I think progress has probably been pretty good, but I’m very uncertain about the degree of progress I “should have expected” on priors.
I think ideally the world as a whole would be investing much more in this type of work than it is now. Much of the bottleneck is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard to quickly onboard a large number of people.
Related to the above, I’d love for the work to become better-scoped over time—this is one thing we prioritize highly at Open Phil.
Yeah, to be clear, I don’t intend to imply that we should expect there to have been many “major insights” after EA’s early years, or that that’s the only thing that’s useful. Tobias Baumann said on Max’s post:
I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.
Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel.
That’s basically my view too, and it sounds like your view is sort of similar. Though your comment makes me notice that some things that don’t seem explicitly captured by either Max’s question or Tobias’s response are:
better framings and disentanglement, to lay the groundwork for future minor/major insights
I.e., things that help make topics more “well-scoped or broken into tractable sub-problems”
better framings, to help us just think through something or be able to form intuitions more easily/reliably
things that are more concrete and practical than what people usually think of as “insights”
E.g., better estimates for some parameter
(ETA: I’ve now copied Ajeya’s answer to these questions as an answer to Max’s post.)