The case for building more and better epistemic institutions in the effective altruism community
As a community, we should think more about how to create and improve our collective epistemic institutions. By that, I mean the formalized ways of creating and organizing knowledge in the community beyond individual blogs and organizations. Examples are online platforms like the EA Forum and Metaculus, events like EA Global and the Leaders Forum, and surveys like the EA survey and the survey at the Leaders Forum. This strikes me as a neglected form of community-building that might be particularly high-leverage.
The case for building more and better epistemic institutions
Epistemic progress is crucial for the success of this community.
Effective altruism is about finding out how to do the most good and then doing it. For that, epistemic progress is important. Will MacAskill has even referred to effective altruism as a “research project.” Since people in this community have substantially changed their views about how to do the most good over the last ten years, we should expect that we’re still wrong about many things.
Some institutions facilitate or significantly accelerate epistemic progress.
People in this community are probably more aware than most of the research showing this. Ironically, we even recommend working on improving the decision-making of other organizations or communities. Aggregative forecasting is talked about most often, and it seems to have solid evidence behind it. Still, it has limitations. For instance, it cannot help us with conceptual work, with improving our reasoning and arguments directly, or with inherently vague concepts. There is some evidence on other instruments like certain forms of expert elicitation or structured analytic techniques (e.g., devil’s advocate), but the evidence base seems less sound. It might still be worth experimenting with them. Peer review seems to be another valuable institution facilitating epistemic progress. I’m not sure if this has ever been investigated properly, but it has a lot of prima facie plausibility to it.
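To make the idea of aggregative forecasting concrete, here is a minimal sketch of one common pooling method: averaging forecasters’ log-odds and optionally extremizing the result. This is an illustration of the general technique, not a description of how any particular platform (e.g., Metaculus) computes its aggregate.

```python
import math

def aggregate_forecasts(probs, extremize=1.0):
    """Pool individual probability forecasts by averaging their log-odds.

    `extremize` > 1 pushes the pooled forecast away from 0.5, a common
    adjustment in the forecasting literature; 1.0 gives a plain pool.
    """
    # Convert each probability to log-odds, average, then map back.
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = extremize * sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-pooled))

# Three forecasters on the same binary question:
aggregate_forecasts([0.6, 0.7, 0.8])  # roughly 0.71
```

Even this toy version shows why aggregation helps (individual noise partly cancels) and why it has the limitations mentioned above: it only combines numbers, so it cannot sharpen the underlying concepts or arguments.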
I don’t want to argue that we already know all the institutions that facilitate epistemic progress, but there are at least some that do. If we think this is sufficiently important, and if there are more such institutions still to be designed, then experimenting and expanding the research base might be among the most important things we could do.
We are not close to the perfect institutional setup.
I don’t want to overstate the case. We have already built a number of great institutions in this regard, probably much better than other communities. Again, forecasting has probably seen the most attention (e.g., Metaculus, Foretold). The other examples I mentioned at the top, however, are also important and many have improved over the last few years.
Still, I’m confident we can do better. Starting from the evidence base I sketched out above, we might start experimenting with the following institutions:
Institutionalizing devil’s advocates: So far, we have had to rely on the initiative and courage of individuals to come forward with criticism of cause areas or certain paradigms within them (e.g., here, here). Perhaps there are ways to incentivize or institutionalize such work even more or even earlier. For instance, we could set up prizes for the best critique of apparently common assumptions or priorities.
Expert surveys/elicitation: Grace et al. (2017) did one for AI timelines. The Leaders Forum survey is focused on EA-related questions. If possible, we could experiment with validating the participants or systematizing participant selection in other ways. We could also just explore many more questions this way in order to get a sense of what the most knowledgeable people in a particular cause believe.
Peer review: We could simply subject more of our ideas to peer review. The fact that the Global Priorities Institute is doing so is a great step in my opinion. We could also experiment with peer review internal to the community. In addition to regular posts and shortform posts on the EA Forum, we could introduce a research category where posts have to pass the review of people in the field of that particular post (Saulius recently suggested a change along these lines). For what it’s worth, the voting system captures some of the value of this already.
Below I sketch some more ideas for epistemic institutions, which are admittedly more speculative since they have not been investigated as rigorously:
Institutionalized adversarial collaborations: Adversarial collaborations are collaborations between people with opposing viewpoints. Scott Alexander has already experimented with a format that might also work for this community specifically (2019, 2018).
Literature reviews: It’s hard to keep track of all the advances in a particular field. Literature reviews could address this. The annual AI Alignment Literature Review and Charity Comparison is a good example of this. We would probably benefit from such work in other areas as well, judging from the response to this post every year. The LessWrong Review is how this could work for effective altruism as a field.
IPCC-analogues for cause areas: Reports on the scale of the IPCC reports are not feasible or desirable at this point. It could still be very important that we keep track of the state of knowledge in a particular field. What do the experts believe? What is that based on? Where are we most uncertain?
Effective altruism wiki: Intuitively, this makes a lot of sense as a means of organizing the knowledge of a particular community. Also, if the US Intelligence Community is doing it, it has to be good. I know that there have been attempts at this (e.g., Arbital, priority.wiki, EAWiki). Unfortunately, these didn’t catch on as much as would be necessary to create a lot of value. Perhaps there are still ways of pulling this off, though. See here and here for recent discussions.
Some of these probably work better for epistemic progress in particular fields or causes. Others work better for organizing or advancing knowledge on global priorities.
We can build or improve such institutions.
This will depend a lot on the specific institution. The fact that we have a running forecasting platform, global conferences, a forum with prizes and an intricate voting system, and all of the other things I listed makes me hopeful. Not everything will work out, but this community seems generally capable of doing such things.
There are still a number of problems we need to overcome. Some will depend on the specific institution, but we can also make some general observations:
Some institutions require the time and effort of experts in a particular field. Participating in such institutions might not be the best use of their time. We could find ways of minimizing the needed effort, offer to compensate them, or find other ways of making it worth it for them.
As a community, we might suffer from the diffusion of responsibility or authority. Nobody feels called upon or vested with the required authority to set up such institutions. I am not sure to what extent this is the case. Incubators like Charity Entrepreneurship might be able to help here. CEA could also take on an even more active role in shaping such institutions.
There might be coordination problems around platforms such as a wiki. It’s only worth it for an individual to participate if enough other people participate. Prizes, participation by respected members, and consistently making the case for the institution might help here.
How important is this compared to other things we could be doing?
Building such institutions is a form of community-building. Arguably, community-building is one of the most important ways of making a difference since it offers a lot of leverage; it came second in the Leaders Forum survey. Building institutions is not the only form of community-building, however. How does it compare to other things in this area? Below I sketch a few considerations.
Growth
The most common form of community-building is growing the size of the community. Improving institutions strikes me as more neglected but perhaps less tractable. The importance of both depends on the size of the community. On the one hand, coordinating around and debating the merits of such institutions will only become harder with increasing size. Since it’s plausible that they also prevent the drift of the community, they might be especially important to set up early. On the other hand, institutions might only be feasible once the community reaches a certain size. Before that point, they will not be very efficient. For instance, a forecasting platform or wiki with five people does not add a lot of value. Similarly, there might not be enough experts to warrant institutions like literature reviews. Overall, I lean toward thinking additional work on institutions is more valuable at the current margin.
Epistemic norms and folkways
Norms and folkways are less formalized ways of doing things. The difference between them and institutions is one of degree, but still meaningful. Applauding others for posting criticism or making probabilistic estimates are expressions of norms or folkways; prizes for in-depth critiques and forecasting platforms are institutions. I find it really hard to compare these since norms and folkways are pretty fuzzy and I’m not sure what dedicated work on them would look like. The most insightful thing I can say is the following: since institutions are less malleable than norms, you only want to set them up once you have become sufficiently certain that they are a good idea. This will differ from institution to institution and speaks in favor of experimentation.
Non-epistemic institutions
These might be institutions to improve preference aggregation (e.g., voting), community retention and coherence, and so on. Since this is a very broad basket of things, making a comparison is difficult. I would definitely welcome more people thinking about this.
Conclusion
Overall, this type of work strikes me as a valuable form of community-building that we currently underinvest in, despite quite a few resources going into both community-building more generally and the cause of improving institutional decision-making. It would be great if these two groups could join forces more.
Acknowledgements
Thanks to Tobias Baumann and Jesse Clifton for comments on an earlier draft of this post.