We’ve already been experimenting with this project over the last six months. People we’ve provided advice for include: entrepreneurs who have taken the Founders’ Pledge and exited; private major donors who contacted us as a result of reading Doing Good Better; former Prime Minister Gordon Brown, for his International Commission on Financing Global Education Opportunity; and Alwaleed Philanthropies, a $30 billion foundation focused on global humanitarianism. This project is still very much in its infancy and we’ll assess its development on an ongoing basis.
Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki? If I remember correctly, GiveWell was originally “The Clear Fund”, and their comparative advantage was supposed to be making the research behind their grants public, instead of keeping research to themselves like most foundations. Making research public lets people criticize it, or base their giving off of it even if they didn’t request it. See also. There are certainly reasons to stay quiet in some cases, and I could understand why donors might not want their names announced, but it feels like the bias should be towards publishing.
I’d also challenge you to think about what CEA’s “secret sauce” is for doing this research for donors in a way that’s superior to whatever other group they would consult with in order to have it done. I’m not saying that you won’t do a better job, I’m just saying it seems worth thinking about.
We think that policy is an important area for effective altruism to develop into
Some people have argued against this. I’m also skeptical. My sense is that this is an area where it plausibly does make sense to use a non-CEA label, since as soon as you step into the political arena, you are inviting people to throw mud at you.
The highest-leverage interventions may be at the meta-level. For example, the creation of a website whose discussion culture can stay friendly and level-headed even with many participants—I suggested how this might be done at the end of this essay. Or here’s a proposal for fighting filter bubbles.
I’m generally skeptical that the intuitions which have worked for EA thus far will transfer well to the political arena. It seems like a much different animal. Again, I’d challenge you to think about whether this is your comparative advantage. The main advantage that comes to mind is that CEA has a lot of brand capital to spend, but doing political stuff is a good way to accidentally spend a lot of brand capital very quickly if mud is thrown. As a flagship organization of the EA movement, there’s also a sense in which CEA draws from a pool of brand capital that belongs to the community at large. If CEA does something to discredit itself (e.g. publicly recommends a controversial policy), it’s possible for other EA organizations, or people who have identified publicly as EAs, to catch flak.
As a broad question: I understand it’s commonly advised in the business world to focus on a few “core competencies” and outsource most other functions. I’m curious whether this also makes sense in the nonprofit world.
Thanks so much for this comment!
Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki?
Yes, the default will be that everything we produce is published openly.
I’d also challenge you to think about what CEA’s “secret sauce” is for doing this research for donors in a way that’s superior to whatever other group they would consult with in order to have it done.
In most cases so far, the counterfactual is little research, rather than using some other consultancy. And in the wider landscape, there seems to be just very little in the direction of what we’d call EA charity recommendations. There’s GiveWell / Open Phil, there’s philanthropic advising that’s very heavily about understanding the preferences of the donor and finding charities that ‘fit’ those preferences, and there seems to us to be a very significant gap in the middle.
Some people have argued against this. I’m also skeptical.
In response to the linked-to article and notes:

1. I’m intuitively also very wary of EA engaging in partisan politics. Indeed, when I think of EA as applied to politics, I think of it as almost being defined by being non-partisan, opposed to tribal politics: where you come to views on policy on a case-by-case basis, weighing all the best evidence, deeply understanding all the various viewpoints (to the point of passing ideological Turing tests), and being highly self-sceptical and looking out for ideological bias.
2. It’s also a major issue that whether certain policies are even good or bad can be incredibly difficult to know. E.g. when I think about AI policy, I can think of things where I know the magnitude of the impact of the policy would be very great indeed, but have no idea about the sign of the impact. Or e.g. being pro EU immigration to the UK 10 years ago (surely good!) ultimately leads to the unintended consequence of Brexit (oh no, wait, I hadn’t thought about political equilibrium effects).
If that means we should abandon policy and politics as a whole, however, I think that would be wrong. Politics is a huge lever in the world, perhaps the single biggest lever, and to dismiss that whole method of making the world better from the outset would be to narrow down our options far too quickly.
This is an area where it plausibly does make sense to use a non-CEA label.
I agree that we need to think very carefully about what labels we use, and we should be very concerned with how the term ‘effective altruism’ might come to lose its meaning and value, or become the victim of malicious PR.
As a broad question: I understand it’s commonly advised in the business world to focus on a few “core competencies” and outsource most other functions. I’m curious whether this also makes sense in the nonprofit world.
Because of this general principle, I stress a lot about how many different things CEA is doing. I’m not sure whether the general principle is right and we’re the exception to it, whether the principle just isn’t right for the sort of organisation we are, or whether we’re being irrational. My current instinct is that we should be aiming to focus more than we have done, and that we’ve just taken a good step in that direction.
In most cases so far, the counterfactual is little research, rather than using some other consultancy. And in the wider landscape, there seems to be just very little in the direction of what we’d call EA charity recommendations. There’s GiveWell / Open Phil, there’s philanthropic advising that’s very heavily about understanding the preferences of the donor and finding charities that ‘fit’ those preferences, and there seems to us to be a very significant gap in the middle.
Seems pretty convincing. This work also seems somewhat well suited to CEA, since you’re a natural point of contact for people interested in giving better, and large donors will be more impressed by recommendations made by an Oxford-affiliated organization.
If that means we should abandon policy and politics as a whole, however, I think that would be wrong. Politics is a huge lever in the world, perhaps the single biggest lever, and to dismiss that whole method of making the world better from the outset would be to narrow down our options far too quickly.
I agree that it seems like a big, important lever, but I’m less certain that it’s a good fit for the profile of strengths the EA movement has currently built up. If someone were to create an app that made running ideological Turing tests easy, and EAs in charge of policymaking were passing them at a much higher rate than matched controls with comparable education and ability, that’s the kind of thing that might convince me that policy was a comparative advantage. (Same for winning bets about the results of particular policies with matched controls.) So far, I’ve seen much more focus on e.g. creating people with high-earning careers than creating people who score well according to these criteria. (Although that’s not the only conceivable approach—one could imagine the EA movement pushing for the legalization of prediction markets to outsource the work of making accurate predictions, for instance.)
I’m intuitively also very wary of EA engaging in partisan politics. … Or e.g. being pro EU immigration to the UK 10 years ago (surely good!) ultimately leads to the unintended consequence of Brexit (oh no, wait, I hadn’t thought about political equilibrium effects).
It seems unlikely that CEA could engage in politics in a non-partisan fashion if you can’t even write a paragraph about being skeptical of partisan politics without resorting to partisan politics.
Being pro EU immigration, as opposed to pro EU generally: that’s still deciding on a policy-by-policy basis.
The true underlying objection to partisan politics isn’t that it involves political parties; it’s the tribal effects, which occur equally with immigration or Brexit.
I don’t know how much you know about policy work by EA organizations besides CEA/GPP, so I thought I’d fill you in. There’s a lot going on.
We think that policy is an important area for effective altruism to develop into
Some people have argued against this. I’m also skeptical.
The Strategic Artificial Intelligence Research Centre (SAIRC)
The Open Philanthropy Project (Open Phil)
Sentience Politics
Stiftung für Effektiven Altruismus/Effective Altruism Foundation (SEA/EAF)
Effective Altruism Policy Analytics (EAPA)
are all doing policy work. That’s five different organizations closely associated with effective altruism working on policy in three different countries (United States; United Kingdom; Switzerland). Even if we discount SAIRC’s association with EA, that’s still at least four organizations. I don’t know how much support policy work has in the EA community at large, outside of all these organizations, but I’m assuming it’s enough that the sentiment won’t go away soon. It seems the effective altruism movement will be interested in policy work even if CEA itself isn’t.
I doubt there’s currently much value to be had in coordinating policy efforts between different countries. Within the EA community, though, solidarity in working on policy internationally, and sharing resources, research, and talent between organizations, might be valuable.
You said CEA has a lot of brand capital it would be sad to see blown on political projects which don’t bear fruit, and may hurt CEA’s and effective altruism’s reputation. I think CEA has more brand capital than these other organizations, except perhaps Open Phil. Of course, Open Phil is in the (non-profit) business of grantmaking, so their influence on policy will be through other organizations. This may distance them from controversy or blowback for programs run by their grantees, which are probably more experienced in navigating potential pitfalls of policy work anyway.
As a flagship organization of the EA movement, there’s also a sense in which CEA draws from a pool of brand capital that belongs to the community at large. If CEA does something to discredit itself (e.g. publicly recommends a controversial policy), it’s possible for other EA organizations, or people who have identified publicly as EAs, to catch flak
Sentience Politics and SEA/EAF seem likely to escalate rather than de-escalate policy work in the near future. If either of them discredits itself, it might only hurt the EA brand in the German-speaking world and Scandinavia, or perhaps continental Europe. However, the work SEA/EAF has done to spread and grow effective altruism in Europe, and the projects this has enabled, seems to me one of the most promising initiatives in the whole community. So, they hold much of EA’s potential in their hands.
Anyone of the opinion that effective altruism should be warier of entering the field of policy needs to keep these considerations in mind, not just what CEA does.
CEA is getting good at policy now. They have some experience with advising, and some contacts in the major parties, and can cause some changes in where major amounts of funds go. Obviously there are massive amounts of movable funds in the public sector, and it’s hardly a matter of lobbying in direct opposition to major established interests, but rather of choosing important issues like aid effectiveness or risky tech that political ideology is more neutral on. And you can certainly advise on such topics while remaining above the political fray. Whether to be drawn into ideological arguments in exchange for additional short-term policy gains is a somewhat separate question.
So it doesn’t make sense at all that you’d be sceptical about political intervention by CEA.
I agree. As the flagship organisation, CEA stepping into politics is unnecessarily risky. Why not let other smaller organisations experiment with this first?
Do you have plans to publish summaries of the research you do, e.g. on Wikipedia
Wikipedia’s policies forbid original research. Publishing the research on the organization’s website and then citing it on Wikipedia would also be discouraged, because of exclusive reliance on primary sources. (And the close connection to the subject would raise eyebrows.)
I think this is worth mentioning because I’ve seen some embarrassing violations of Wikipedia policy on EA-related articles recently.
If someone at CEA reads a bunch of studies on a particular topic, and writes several well-cited paragraphs that summarize the literature, this would be appropriate for Wikipedia, no? (I agree other ways of interpreting “research” might not be.)
This might be alright. See these guidelines though: https://en.wikipedia.org/wiki/Wikipedia:No_original_research#Synthesis_of_published_material
Exciting stuff!