Hm, I think I’d say progress at this stage largely looks like being better able to cash out disagreements about big-picture and long-term questions in terms of disagreements about more narrow, empirical, or near-term questions, and then trying to further break down and ultimately answer these sub-questions to try to figure out which big picture view(s) are most correct. I think given the relatively small amount of effort put into it so far and the intrinsic difficulty of this project, returns have been pretty good on that front—it feels like people are having somewhat narrower and more tractable arguments as time goes on.
I’m not sure about what exact skillsets the field most needs. I think the field right now is still in a very early stage and could use a lot of disentanglement research, and it’s often pretty chaotic and contingent what “qualifies” someone for this kind of work. Deep familiarity with the existing discourse and previous arguments/attempts at disentanglement is often useful, and some sort of quantitative background (e.g. economics or computer science or math) or mindset is often useful, and subject matter expertise (in this case machine learning and AI more broadly) is often useful, but none of these things are obviously necessary or sufficient. Often it’s just that someone happens to strike upon an approach to the question that has some purchase, they write it up on the EA Forum or LessWrong, and it strikes a chord with others and results in more progress along those lines.
Interesting answer :)
That made me think to ask the following questions, which are sort-of a tangent and sort-of a generalisation of the kind of questions Alex HT asked:
(These questions are inspired by a post by Max Daniel.)
Do you think many major insights from longtermist macrostrategy or global priorities research have been found since 2015?
If so, what would you say are some of the main ones?
Do you think the progress has been at a good pace (however you want to interpret that)?
Do you think that this pushes for or against allocating more resources (labour, money, etc.) towards that type of work?
Do you think that this suggests we should change how we do this work, or emphasise some types of it more?
(Feel very free to just answer some of these, answer variants of them, etc.)
I think “major insights” is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously accepted big-picture claims count as significant progress. I think very early on, EA produced a number of arguments and considerations which felt like “major insights” in that they caused major swings in the consensus about which cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn’t expect future progress to take the form of “major insights” that wildly swing views about a basic, high-level question as much (although I still think that’s possible).
Since 2015, I think we’ve seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs. giving later and “hinge of history” vs. “patient” longtermism, etc. None of these have provided definitive or authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.
I’m not sure how to answer this; I think that, taking into account the expected low-hanging-fruit effect and the relatively low investment in this research, progress has probably been pretty good, but I’m very uncertain about the degree of progress I “should have expected” on priors.
I think ideally the world as a whole would be investing much more in this type of work than it is now. A lot of the bottleneck to this is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard for a large number of people to be quickly on-boarded to it.
Related to the above, I’d love for the work to become better-scoped over time—this is one thing we prioritize highly at Open Phil.
Thanks!
Yeah, to be clear, I don’t intend to imply that we should expect there to have been many “major insights” after EA’s early years, or that that’s the only thing that’s useful. Tobias Baumann said on Max’s post:
I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.
Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel.
That’s basically my view too, and it sounds like your view is sort-of similar. Though your comment makes me notice that some things that don’t seem explicitly captured by either Max’s question or Tobias’s response are:
better framings and disentanglement, to lay the groundwork for future minor/major insights
I.e., things that help make topics more “well-scoped or broken into tractable sub-problems”
better framings, to help us just think through something or be able to form intuitions more easily/reliably
things that are more concrete and practical than what people usually think of as “insights”
E.g., better estimates for some parameter
(ETA: I’ve now copied Ajeya’s answer to these questions as an answer to Max’s post.)