My quick take:
I agree with other answers that in terms of “discrete” insights, there probably wasn’t anything that qualifies as “major” and “novel” according to the above definitions.
I’d say the following were the three major broader developments, though it’s unclear to what extent they were caused by macrostrategy research narrowly construed:
Patient philanthropy: significant development of the theoretical foundations and some practical steps (e.g. the Founders Pledge research report on potentially setting up a long-term fund).
Though the idea and some of the basic arguments probably aren’t novel, see this comment thread below.
Reduced emphasis on a very small list of “top cause areas”. (Visible e.g. here and here, though of course there must have been significant research and discussion prior to such conclusions.)
Diversification of AI risk concerns: less focus on “superintelligent AI kills everyone after rapid takeoff because of poorly specified values” and more research into other sources of AI risk.
I used to think there was less actual (as opposed to publicly visible) change, and that to the extent there was change, less of it was due to new research. But it seems that a perception of significant change is more common.
In previous personal discussions, I think people have made fair points that my bar may be generally unreasonable: it’s the default for any research field that major insights don’t appear out of nowhere, and it’s almost always possible to find similar previous ideas. In other words, research progress is the cumulative effect of many small new ideas and refinements of them.
I think this is largely correct, but that it’s still correct to update negatively on the value of research if past progress has been less good in terms of how major and novel it was. However, overall I’m now most interested in the sort of question asked here in order to better understand what kind of progress we’re aiming for, rather than to assess the total value of a field.
FWIW, here are some suggestions for potential “major and novel” insights that others have made in personal communication (not necessarily with a strong claim by the source that they meet the bar; also, in some discussions I may have phrased my question a bit differently):
Nanotech / atomically precise manufacturing / grey goo isn’t a major x-risk
[NB I’m not sure that I agree with APM not being a major x-risk, though ‘grey goo’ specifically may be a distraction. I do have the vague sense that some people in, say, the 90s or until the early 2010s were more concerned about APM than the typical longtermist is now.]
My comments were:
“Hmm, maybe though not sure. Particularly uncertain whether this was because new /insights/ were found or just due to broadly social effects and things like AI becoming more prominent?”
“Also, to what extent did people ever believe this? Maybe this one FHI survey where nanotech was quite high up the x-risk list was just a fluke due to a weird sample?”
Brian Tomasik pointed out: “I think the nanotech-risk orgs from the 2000s were mainly focused on non-grey goo stuff: http://www.crnano.org/dangers.htm”
Climate change is an x-risk factor
My comment was: “Agree it’s important, but is it sufficiently non-obvious and new? My prediction (60%) is that if I asked Brian [Tomasik] when he first realized that this claim is true (even if perhaps not using that terminology) he’d point to a year before 2014.”
We should build an AI policy field
My comment was: “[snarky] This is just extremely obvious unless you have unreasonably high credence in certain rapid-takeoff views, or are otherwise blinded by obviously insane strawman rationalist memes (‘politics is the mind-killer’ [aware that this referred to a quite different dynamic originally], policy work can’t be heavy-tailed [cf. the recent Ben Pace vs. Richard Ngo thing]). [/snarky]
I agree that this was an important development within the distribution of EA opinions, and has affected EA resource allocation quite dramatically. But it doesn’t seem like an insight that was found by research narrowly construed, more like a strategic insight of the kind business CEOs will sometimes have, and like a reasonably obvious meme that has successfully propagated through the community.”
Surrogate goals research is important
My comment was: “Okay, maaybe. But again 70% that if I asked Eliezer when he first realized that surrogate goals are a thing, he’d give a year prior to 2014.”
Acausal trade, acausal threats, MSR, probable environment hacking
My comment was: “Aren’t the basic ideas here much older than 5 years, and haven’t they specifically appeared in older writings by Paul Almond and been part of ‘LessWrong folklore’ for a while? Possible that there’s a more recent crisp insight around probable environment hacking; I don’t really know what that is.”
Importance of the offense-defense balance and security
My comment was: “Interesting candidate, thanks! Haven’t sufficiently looked at this stuff to have a sense of whether it’s really major/important. I am reasonably confident it’s new.”
[Actually, I’m now a bit puzzled about why I wrote the last thing. Seems new at most in the sense of “popular/widely known within EA”?]
Internal optimizers
My comment was: “Also an interesting candidate. My impression is to put it more in the ‘refinement’ box, but that might be seriously wrong, because I think I understand very little about this stuff beyond probably a strawman of the basic concern.”
Bargaining/coordination failures being important
My comment was: “This seems much older [...]? Or are you pointing to things that are very different from e.g. the Racing to the Precipice paper?”
Two-step approaches to AI alignment
My comment was: “This seems kind of plausible, thanks! It’s also in some ways related to the thing that seems most like a counterexample to me so far, which is the idea of a ‘Long Reflection’. (Where my main reservation is whether this actually makes sense / is desirable [...].)”
More ‘elite focus’
My comment was: “Seems more like a business-CEO kind of insight, but maybe there’s macrostrategy research it is based on which I’m not aware of?”
Interesting thoughts, thanks :)
In previous personal discussions, I think people have made fair points that my bar may be generally unreasonable: it’s the default for any research field that major insights don’t appear out of nowhere, and it’s almost always possible to find similar previous ideas [...]
I think this is largely correct, but that it’s still correct to update negatively on the value of research if past progress has been less good in terms of how major and novel it was.
I don’t understand the last sentence there. In particular, I’m not sure what you mean by “less good” in comparison to.
Do you mean if past progress has been less major and novel than expected? If so, then I’d agree that it’s correct to update negatively if that’s the case.
But given the point about “the default for any research field”, it seems unclear to me whether it’s actually been less major and novel than expected. Perhaps instead we’ve had roughly the sort and amount of progress that people would’ve expected in ~2015, when thinking that more money and people should flow towards doing longtermist macrostrategy/GPR?
So here are three things I think you might mean:
“Longtermist macrostrategy/GPR’s insights have been even less major and novel than one would typically expect. So we should update negatively about the value of more work in this field in particular (including relative to work in other fields) - perhaps it’s unusually intractable.”
“People who advocated, funded, or did longtermist macrostrategy/GPR had failed to recognise that research fields rarely have major insights out of nowhere, and thus overestimated the value of more research in general. So we should update negatively about the value of more research in general, including in relation to this field.”
“People who advocated, funded, or did longtermist macrostrategy/GPR mistakenly thought that that field would be an exception to the general pattern of fields rarely having major, novel insights. Now that we have evidence that they were probably mistaken, we should update negatively about this field in particular (moving towards thinking it’s more like other fields).”
Is one of those an accurate description of your view?
Yes, I meant “less than expected”.
Among your three points, I believe something like 1 (for an appropriate reference class to determine “typical”, probably something closer to ‘early-stage fields’ than ‘all fields’). Though not by a lot, and I also haven’t thought that much about how much to expect, and could relatively easily be convinced that I expected too much.
I don’t think I believe 2 or 3. I don’t have much specific information about assumptions made by people who advocated for or funded macrostrategy research, but a priori I’d find it surprising if they had made these mistakes to a strong extent.
I also haven’t thought much about how much progress one should typically expect from a random field, how that expectation should increase or decrease for this field over the last 5 years given how many people and dollars it got (compared to other fields), or how what this field produced in the last 5 years compares to that.
But one thing that strikes me is that longtermist macrostrategy/GPR researchers over the past 5 years have probably had substantially less training and experience than researchers in most academic fields we’d probably compare this to. (I haven’t really checked this, but I’d guess it’s true.)
So maybe, if the insights from this field were less novel or less major than we should typically expect from a field with the same number of people and dollars, that could be explained by the researchers having less human capital, rather than by the field being intrinsically harder to make progress on?
(It could also perhaps be explained if the unusual approaches that are decently often taken in this field tend to be less effective—e.g., more generalist/shallow work rather than deeper dives into narrower topics, and more blog post style work.)