Above, I wrote "In many (but not all) cases, EAs could provide better answers than the existing work has on the same questions that the existing work aims to answer." Here are some further points on that:
I'm not sure if this would be true in most cases, and I don't think it'd be especially useful to figure out how often this is true.
I think it's best if people just don't assume by default that this can never happen or that it would always happen, and instead look at specific features of a specific case to decide how likely this is in that case.
Previous work on epistemic deference could help with that process.
I think that even in some cases where EAs could provide better answers than the existing work has, doing so wouldn't be the best use of those EAs' time; they may be able to "beat the status quo" by more, or faster, in some other areas.
One key reason why EAs could often provide better answers is that some questions that have received some non-EA attention, and that sit within mainstream topics, are still very neglected compared to their importance.[1]
For example, a substantial fraction of all work on Tetlock-style probabilistic forecasting was funded due to the efforts of Jason Matheny (who has worked at FHI and has various other signals of strong EA alignment), was funded by Open Philanthropy, and/or is being done by EAs.
A related point is that, although rational non-EA decision-makers would have incentives to reduce most types of existential risk, these incentives may be far weaker than we'd like, because existential risk is a transgenerational global public good, might primarily be bad for non-humans, and might occur after these decision-makers' lives or political terms are over anyway.
Another key reason why EAs could often provide better answers is that it seems many EAs have "better epistemics" than members of other communities, as discussed in section 1c.
The claims in this section are not necessarily more arrogant, insular, or epistemically immodest than saying that EAs can in some cases have positive counterfactual impact via things like taking a job in government and simply doing better than alternative candidates would at important tasks.
(That said, much or most of the impact of EAs taking such jobs may instead come from them doing different tasks, or doing them in more EA-aligned ways, rather than simply better ways.)
These claims are also arguably lent some credibility by empirical evidence of things like EAs often finding mainstream success in areas such as academia or founding companies (see also this comment thread).
[1] On the other hand, I think low levels of mainstream attention provide some outside-view evidence that the topic actually isn't especially important. This applies especially in cases where the reasons EAs think the topics matter would also, if true, mean the topics matter according to more common "worldviews" (see the Appendix). But I think this outside-view evidence is often fairly weak, for reasons I won't go into here (see epistemic deference for previous discussion of similar matters).