I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.
Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel. For instance:
The recent work on patient longtermism seems highly relevant and plausibly meets the bar for being “major”. This isn’t novel (Robin Hanson wrote about it in 2011, and Benjamin Franklin arguably implemented the idea in 1790), but I still think that it’s a significant contribution. (There is a big difference between an idea being mentioned somewhere, possibly in very “hidden” places, and that idea being sufficiently widespread in the community to have a real impact.)
Effective altruists are now considering a much wider variety of causes than in 2015 (see e.g. here). Perhaps none of those meet your bar for being “major”, but I think that the “discovery” (scare quotes because probably none of those is the first mention) of causes such as Reducing long-term risks from malevolent actors, invertebrate welfare, or space governance constitutes significant progress. S-risks have also gained more traction, although again the basic idea is from before 2015.
Views on the future of artificial intelligence have become much more nuanced and diverse, compared to the relatively narrow focus on the “Bostrom-Yudkowsky view” that was more prevalent in 2015. I think this does meet the bar for “major”, although it is arguably not a single insight: relevant factors include takeoff speeds, whether AI is best thought of as a unified agent, and the likelihood of successful alignment by default. (And many critiques of the Bostrom-Yudkowsky view were written pre-2015, so it also isn’t really novel.)
The ideas behind patient altruism have received substantial discussion in academia:
The basic theory of optimal consumption was developed by Frank Ramsey in 1928 and there is a lot of relevant literature.
The concept of using low discount rates when making present vs. future tradeoffs was used in an applied context at least as long ago as 2007, in the Stern Review of climate change.
The idea of postponing spending was discussed in “Discounting and the Paradox of the Infinitely Postponed Splurge” and other related literature.
But this literature doesn’t seem well-known among EAs. I personally didn’t know about any of it until Phil Trammell cited some of it in his paper on patient philanthropy. Trammell also argued that most people use too high a discount rate, so patient philanthropists should compensate by not donating any money now and instead investing to give later; as far as I know, this is a novel argument.
This has been much discussed since before the beginning of EA, with Robin Hanson being a particularly devoted proponent.
Hanson has advocated for investing for future giving, and I don’t doubt he had this intuition in mind. But I’m actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries’ pure time preference. I only know that he’s said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?
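To spell out the condition I have in mind, here is a minimal sketch in standard Ramsey notation (the symbols δ, η, and g are mine for illustration, not taken from Hanson’s posts):

```latex
% delta = beneficiaries' pure time preference, eta = elasticity of marginal utility,
% g = growth rate of beneficiaries' consumption, r = market interest rate.
r = \delta + \eta g
% A zero-time-preference donor values a marginal dollar given at time t at e^{-\eta g t}
% (marginal utility falls as consumption grows), while an invested dollar grows to e^{r t}.
% So investing to give later beats giving now iff
e^{(r - \eta g)t} = e^{\delta t} > 1 \iff \delta > 0
% i.e. iff the interest rate incorporates positive pure time preference.
```

In this notation, the conditions I’ve seen Hanson state are r > 0 or r > g, neither of which pins down the sign of δ.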
Also, who made the “pure time preference in the interest rate means patient philanthropists should invest” point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea; I just want to know whom to read/cite!)
I don’t know the provenance of the idea, but I recall Paul Christiano making the point about pure time preference during the debate on giving now vs. later at the 2014(?) GWWC weekend away.
My recollection is that back in 2008–12, discussions would often cite the Stern Review (which used a pure time preference of only 0.1% per year and thus concluded that massive climate investments would pay off), the critiques of it (noting that it would by the same token call for immense savings rates, 97.5% according to Dasgupta 2006), and the defenses by Stern and various philosophers that a pure time preference of 0 was philosophically appropriate.
In private discussions and correspondence it was used to make the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. I cited it for this in this 2012 blog post. People also discussed how this would go away if sufficient investment were applied patiently (whether for altruistic or other reasons), ending the era of dreamtime finance by driving pure time preference towards zero.
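For concreteness, here is a rough back-of-the-envelope reconstruction of where a savings figure like Dasgupta’s 97.5% can come from; this is my own illustrative sketch assuming a simple AK model with a 4% return on capital, not necessarily Dasgupta’s exact setup:

```latex
% AK model: Y = AK, C = (1 - s)Y, so consumption grows at g = sA.
% Euler equation with CRRA utility: g = (r - \delta)/\eta, with r = A.
% Equating the two gives the optimal savings rate:
s^{*} = \frac{A - \delta}{\eta A}
% With Stern-style parameters \delta = 0.001, \eta = 1, and an assumed A = 0.04:
s^{*} = \frac{0.04 - 0.001}{1 \times 0.04} \approx 0.975
```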
Sorry—maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?
The Stern discussion.
The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.
In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )
“The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save.”
That was explicitly discussed at the time. I cited the blog post as a historical reference illustrating that such considerations were in mind, not as a comprehensive publication of everything people discussed at the time (in fact, there wasn’t one). That’s one reason, in addition to your novel contributions, that I’m so happy about your work! GPI also has a big hopper of projects adding a lot of value by further developing and explicating ideas that are not radically novel, so that they have more impact and get more improvement and critical feedback.
If you would like to do further recorded discussions about your research, I’m happy to do so anytime.
Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming along.
It seems you’re right. I did a little searching and found Hanson making that argument here: https://www.overcomingbias.com/2013/04/more-now-means-less-later.html
That post just makes the claim that “all we really need are positive interest rates”. My own point, which you were referring to in the original comment, is that, at least in the context of poverty alleviation (/increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.
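To illustrate that claim numerically, here’s a minimal sketch; the parameter values are made up for illustration, and it just applies the Ramsey identity r = δ + ηg:

```python
import math

def value_of_waiting(delta, eta, g, years=10):
    """Utility value, relative to giving $1 now, of investing $1 for `years` and then
    giving it, when the interest rate follows the Ramsey rule r = delta + eta * g and
    beneficiaries' marginal utility of consumption falls at rate eta * g."""
    r = delta + eta * g                            # market interest rate
    money_later = math.exp(r * years)              # dollars after compounding
    marginal_utility = math.exp(-eta * g * years)  # value of a marginal dollar then vs. now
    return money_later * marginal_utility          # equals exp(delta * years)

# Positive interest rate but zero pure time preference: waiting gains nothing.
print(value_of_waiting(delta=0.00, eta=1.0, g=0.02))   # r = 2% > 0, value = 1.0

# Negative interest rate but positive pure time preference: waiting still wins.
print(value_of_waiting(delta=0.01, eta=1.0, g=-0.03))  # r = -2% < 0, value ≈ 1.105
```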
Hanson’s post then says something which sounds kind of like my point, namely that we can infer that it’s better for us as philanthropists to invest than to spend if we see our beneficiaries doing some of both. But I could never figure out what he was saying exactly, or how it was compatible with the point he was trying to make that all we really need are positive interest rates.
Could you elaborate?
I liked this answer.
One thing I’d add: My guess is that part of why Max asked about novel insights is that he’s wondering what the marginal value of longtermist macrostrategy or global priorities research has been since 2015, as one input into predictions about the marginal value of more such research. Or at least, that’s a big part of why I find this question interesting.
So another interesting question is what is required for us to have “many smaller insights” and “the refinement and diffusion of ideas that aren’t strictly speaking novel”? E.g., does that require orgs like FHI and CLR? Or could we do that without paid full-time researchers, just via a bunch of people blogging in their spare time?
I don’t know about generating many smaller insights or refining ideas. But I’d guess that mere “diffusion” probably doesn’t require full-time researchers, just good and well-respected communicators.
But I’d also guess that there’s another thing that happened: Active critique and screening of a large set of potentially important insights, to identify those that are actually important and correct (or sufficiently likely to be correct to warrant major shifts in decisions). And that process seems likely to benefit substantially from having orgs like FHI and CLR. Both because the set of potentially important insights might be very large, and because effectively screening them might be something most people can’t easily do.
And I’d guess that ideas tend to diffuse more and more as they do better in the screening process.
But I only got involved in EA in 2018, and only got inside peeks into some EA orgs this year, so a lot of the above is guesswork.
I think that’s a very interesting question, and one I’ve sometimes wondered about.
Oversimplifying a bit, my answer is: We need neither just bloggers nor just orgs like FHI and CLR. Instead, we need to move from a model where epistemic progress is achieved by individuals to one where it is achieved by a system characterized by a diversification of epistemic tasks, specialization, and division of labor. (So in many ways I think: we need to become more like academia.)
Very roughly, it seems to me that early intellectual progress in EA often happened via distinct and actionable insights found by individuals. E.g. “AI alignment is super important” or “donating to the best as opposed to typical charities is really important” or “current charity evaluators don’t help with finding impactful charities” or “wow, if I donate 10% of my income I can save many lives over my lifetime” or “oh wait, there are orders of magnitude more wild animals than farmed animals, so we need to consider the impact of farmed animal advocacy on wild animals”.
(Of course, it’s a spectrum. Discussion and collaboration were still important, my claim is just that there were significantly more “insights within individuals” than later.)
But it seems to me that most low-hanging fruits have been plucked. So it can be useful to look at other more mature epistemic endeavours. And if I reflect on those it strikes me that in some sense most of the important cognition isn’t located in any single mind. E.g. for complex questions about the world, it’s the system of science that delivers answers via irreducible properties like “scientific consensus”. And while in hindsight it’s often possible to summarize epistemic progress in a way that can be understood by individuals, and looks like it could have been achieved by them, the actual progress was distributed across many minds.
(Similarly, the political system doesn’t deliver good policies because there’s a superintelligent policymaker but because of checks and balances etc.; the justice system doesn’t deliver good settlement of disputes because there’s a super-Solomonic judge but because of the rules governing court cases that have different roles such as attorneys, the prosecution, judges, etc.)
This also explains why, I think correctly, discussions on how to improve science usually focus on systemic properties like funding, incentives, and institutions. As opposed to, say, how to improve the IQ or rationality of individual scientists.
And similarly, I think we need to focus less on how to improve individuals and more on how to set up a system that can deliver epistemic progress across larger time scales and larger numbers of people less selected by who happens to know whom.
This is really interesting and I’d like to hear more. Feel free to just answer the easiest questions:
Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia?
What kinds of specialisation do you think we’d want—subject knowledge? Along different subject lines to academia?
Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?
What do you see as the current route into longtermist research? It seems like entry-level research roles are relatively rare, and generally need research experience. Do you think this is a good model?
[Off the top of my head. I don’t feel like my thoughts on this are very developed, so I’d probably say different things after thinking about it for 1-10 more hours.]
[ETA: On a second reading, I think some of the claims below are unhelpfully flippant and, depending on how one reads them, uncharitable. I don’t want to spend the significant time required for editing, but want to flag that I think my dispassionate views are not super well represented below.]
Do you have any thoughts on how to set up a better system for EA research, and how it should be more like academia?

Things that immediately come to mind, not necessarily the most important levers:
Identify skills or bodies of knowledge that seem relevant for longtermist research, and where necessary design curricula for deliberate practice of these. I think our norms of single-dimensional evaluations of people (I feel like I hear much more often that someone is “promising” or “impressive” than that they’re “good at <ability or skill>”), in addition to having other downsides, are evidence of a harmful laziness that helps entrench the status quo.
Possibly something like a double-blind within-EA peer review system for some publications could be good.
More publicly accessible and easily searchable content, ideally collected or indexed by central hubs. This does not necessarily mean more standard academic publications. I think that e.g. some content that currently only exists in nonpublic Google docs isn’t published solely because of (i) exaggerated worries about info hazards or (ii) exaggerated worries that non-polished content might reflect badly on the author. (Though in other cases I think there are valid reasons not to publish.) If there was a place where it was culturally OK to publish rough drafts, this could help.
This is more fuzzy, but I think it would be valuable to have a more output-oriented culture. (At the margin—I definitely agree that too much emphasis on producing output can be harmful in some situations or if taken too far.)
Culturally, but also when making e.g. concrete hiring decisions, we should put less emphasis on “does this person seem smart?” and more on “does this person have a track record of achievements?”. (Again, this is at the margin, and there are exceptions.) Cf. how this changes over the progression of a career in academia—to get into a good university as an undergraduate you need to have good grades, which is closer to “does this person seem smart?”, but to get tenure you need to have publications, which is closer to “does this person have a track record of achievements?” [I say this as someone with a conspicuous dearth of achievements but an ability to project, and some evidence of, smartness, i.e. someone who has benefitted from the status quo.]
We should evaluate research less by asking “how immediately action-relevant or impactful is this?” and more by asking “has this isolated a plausibly relevant question, and does it do a good job at answering it?”.
What kinds of specialisation do you think we’d want—subject knowledge? Along different subject lines to academia?

Subject knowledge
Methods (e.g. it regularly happens to me that someone I’m mentoring has a question that is essentially just about statistics, but I can’t answer it, nor do I know anyone easily available in my network who can; it seems like a bit of a travesty to be in a situation where a lot of people worship Bayes’s Rule but very few have the knowledge of even a one-semester course in applied statistics)
I expect that some of the resulting specialists would have a natural home in existing academic disciplines and others wouldn’t, but I can’t immediately think of examples.
Do you think EA should primarily use existing academia for training new researchers, or should there be lots of RSP-type things?

I think in principle it’d be great if there were more RSP-type things, but I’m not sure if I think they’re good to expand at the margin given opportunity costs.
However, I expect that for most people the best training setup would not be RSP-type things but a combination of:
Full-time work/study in academia or at some “elite organization” with good mentoring and short feedback loops.
EA-focused “enrichment interventions” that essentially don’t substitute for conventional full-time work/study (e.g. weekend seminars, or fellowships during term breaks or work sabbaticals). Participants would be selected for EA motivation, there would be opportunities for interaction with EA researchers and other people working at EA orgs, the content would be focused on core EA issues, etc.
This is because I do agree there are important components of “EA/rationalist mindware and knowledge” without which I expect even super smart and extremely skilled people to have little impact. But I’m really skeptical that the best way to transmit these is to have people hang out for years in insular low-stimulation environments. I think we can transmit them in much less time, and in a way that doesn’t compete as much with robustly useful skill acquisition, and if not then we can figure out how to do this.
I expect RSP-type things to be targeted at people in more exceptional circumstances, e.g. they have good plans that don’t fit into existing institutions or they need time to “switch fields”.
Interesting, thanks for sharing!
We should evaluate research less by asking “how immediately action-relevant or impactful is this?” and more by asking “has this isolated a plausibly relevant question, and does it do a good job at answering it?”.

Could you say more about why you think that that shift at the margin would be good?
Several reasons:
In many cases, doing thorough work on a narrow question and providing immediately impactful findings is simply too hard. This used to work well in the early days of EA when more low-hanging fruit was available, but rarely works any more.
Instead of having 10 shallow takes on immediately actionable question X, I’d rather have 10 thorough takes on different subquestions Y_1, …, Y_10, even if it’s not immediately obvious how exactly they help with answering X (there should be some plausible relation, however). Maybe I expect 8 of these 10 takes to be useless, but unlike adding more shallow takes on X, the thorough takes on the 2 remaining subquestions enable incremental and distributed intellectual progress:
It may allow us to identify new subquestions we weren’t able to find through doing shallow takes on X.
Someone else can build on the work, and e.g. do a thorough take on another subquestion that helps illuminate how it relates to X, what else we need to know to use the thorough findings to make progress on Y, etc.
The expected benefit from unknown unknowns is larger. A random example: the economic historians who assembled data on historical GDP growth presumably didn’t anticipate that their data would feature in outside-view arguments on the plausibility of AGI takeoff this century. (Though if you had asked them, they probably would have been able to see that this is a plausible use—there probably are other examples where the delayed use/benefit is more surprising.)
It’s often more instrumentally useful because it better fulfills non-EA criteria for excellence or credibility.
I think this is especially important when trying to build bridges between EA research and academia, with the vision of making more academic research that is helpful to EA happen.
It’s also important because non-EA actors often have different criteria for when they’re willing to act on research findings. I think EAs tend to be unusually willing to act on epistemic states like “this seems 30% likely to me, even if I can’t fully defend this or even say why exactly I believe this” (I think this is good), but if they want to convince some other actor (e.g. a government or big firm) to act, they’ll need more legible arguments.
One recent example that’s salient to me, and that illustrates what strikes me as a bit off here, is the discussion on Leopold Aschenbrenner’s paper on x-risk and growth in the comments to this post. A lot of the discussion seemed to be motivated by the question “How much should this paper update our all-things-considered view on whether it’s net good to accelerate economic growth?”. It strikes me that this is very different from the questions I’d ask about that paper, and also quite far removed from why, as I said, I think this paper was a great contribution.
These reasons are more like:
As best as I can tell (significantly because of reactions by other people with more domain expertise), the paper is quite impressive to academic economists, and so could have large instrumental benefits for building bridges.
While it didn’t even occur to me to update my all-things-considered take much on whether it’d be good to accelerate growth, I think the paper does a really thorough job at modeling one aspect that’s relevant to this question. Once we have 10 to 100 papers like it, I think I’ll have learned a lot and will be in a great position to update my all-things-considered take. But, crucially, the paper is one clear step in this direction in a way in which an EA Forum post with the bottom line “I spent 40 hours researching whether accelerating economic growth is net good, and here is what I think” simply is not.
Interesting, thanks. I’m not sure whether I overall agree, but I think this glimpse of your models on this topic will be useful to me.
One clarifying question:
My first thought was “But wait, wouldn’t 10 thorough takes take more time than 10 shallow takes, making this comparison unfair?” But now I think maybe you meant both sets of investigations to take a similar amount of time, but the former to be “shallow” in relation to the larger topic—i.e., the “shallow takes” involve the same amount of total analysis as the “thorough takes”, but they’re analysing such a big topic that they can only provide a shallow look at each component. Is that right?
Yes, that’s what I had in mind. Thanks for clarifying!
I’m confused—did you make this comment in the wrong place?
No, but there was a copy and paste error that made the comment unintelligible. Edited now. Thanks for flagging!
Thanks, that’s helpful for thinking about my career (and thanks for asking that question Michael!)
Edit: helpful for thinking about my career because I’m thinking about getting economics training, which seems useful for answering specific sub-questions in detail (‘Existential Risk and Economic Growth’ being the perfect example of this), but one economic model alone is very unlikely to resolve a big question.
Glad it’s helpful!
I think you’re very likely doing this anyway, but I’d recommend to get a range of perspectives on these questions. As I said, my own views here don’t feel that resilient, and I also know that several epistemic peers disagree with me on some of the above.