Interesting answer :)
That made me think to ask the following questions, which are sort of a tangent and sort of a generalisation of the kind of questions Alex HT asked:
(These questions are inspired by a post by Max Daniel.)
Do you think many major insights from longtermist macrostrategy or global priorities research have been found since 2015?
If so, what would you say are some of the main ones?
Do you think the progress has been at a good pace (however you want to interpret that)?
Do you think that this pushes for or against allocating more resources (labour, money, etc.) towards that type of work?
Do you think that this suggests we should change how we do this work, or emphasise some types of it more?
(Feel very free to just answer some of these, answer variants of them, etc.)
I think "major insights" is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously accepted big-picture claims count as significant progress. I think very early on, EA produced a number of arguments and considerations which felt like "major insights" in that they caused major swings in the consensus of what cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn't expect future progress to take the form of "major insights" that wildly swing views about a basic, high-level question as much (although I still think that's possible).
Since 2015, I think we've seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs giving later and "hinge of history" vs "patient" longtermism, etc. None of these have provided definitive / authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.
I'm not sure how to answer this; I think taking into account the expected low-hanging fruit effect, and the relatively low investment in this research, progress has probably been pretty good, but I'm very uncertain about the degree of progress I "should have expected" on priors.
I think ideally the world as a whole would be investing much more in this type of work than it is now. A lot of the bottleneck to this is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard for a large number of people to be quickly on-boarded to it.
Related to the above, I'd love for the work to become better-scoped over time; this is one thing we prioritize highly at Open Phil.
Thanks!
Yeah, to be clear, I don't intend to imply that we should expect there to have been many "major insights" after EA's early years, or that that's the only thing that's useful. Tobias Baumann said on Max's post:
I think there haven't been any novel major insights since 2015, for your threshold of "novel" and "major".
Notwithstanding that, I believe that we've made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren't strictly speaking novel.
That's basically my view too, and it sounds like your view is sort of similar. Though your comment makes me notice that some things that don't seem explicitly captured by either Max's question or Tobias's response are:
better framings and disentanglement, to lay the groundwork for future minor/major insights
I.e., things that help make topics more "well-scoped or broken into tractable sub-problems"
better framings, to help us just think through something or be able to form intuitions more easily/reliably
things that are more concrete and practical than what people usually think of as "insights"
E.g., better estimates for some parameter
(ETA: I've now copied Ajeya's answer to these questions as an answer to Max's post.)