> Do you have plans to publish summaries of the research you do, e.g. on Wikipedia or the EA Wiki?
Yes, the default will be that everything we produce is published openly.
> I’d also challenge you to think about what CEA’s “secret sauce” is for doing this research for donors in a way that’s superior to whatever other group they would consult with in order to have it done.
In most cases so far, the counterfactual is little research, rather than using some other consultancy. And in the wider landscape, there seems to be just very little in the direction of what we’d call EA charity recommendations. There’s GiveWell / Open Phil, there’s philanthropic advising that’s very heavily about understanding the preferences of the donor and finding charities that ‘fit’ those preferences, and there seems to us to be a very significant gap in the middle.
> Some people have argued against this. I’m also skeptical.
In response to the linked-to article and notes:

1. I’m intuitively also very wary of EA engaging in partisan politics. Indeed, when I think of EA as applied to politics, I think of it as almost being defined by being non-partisan, opposed to tribal politics: where you come to views on policy on a case-by-case basis, weighing all the best evidence, deeply understanding all the various viewpoints (to the point of passing ideological Turing tests), being highly self-sceptical and looking out for ideological bias.

2. It’s also a major issue that whether certain policies are even good or bad can be incredibly difficult to know. E.g. when I think about AI policy, I can think of things where I know the magnitude of the impact of the policy would be very great indeed, but have no idea about the sign of the impact. Or e.g. being pro EU immigration to the UK 10 years ago (surely good!) ultimately leads to the unintended consequence of Brexit (oh no, wait, I hadn’t thought about political equilibrium effects).
If that means we should abandon policy and politics as a whole, however, I think that would be wrong. Politics is a huge lever in the world, perhaps the single biggest lever, and to dismiss that whole method of making the world better from the outset would be to narrow down our options far too quickly.
> This is an area where it plausibly does make sense to use a non-CEA label.
I agree that we need to think very carefully about what labels we use, and we should be very concerned with how the term ‘effective altruism’ might come to lose its meaning and value, or become the victim of malicious PR.
> As a broad question: I understand it’s commonly advised in the business world to focus on a few “core competencies” and outsource most other functions. I’m curious whether this also makes sense in the nonprofit world.
Because of this general principle, I stress a lot about how many different things CEA is doing. I’m not sure whether the general principle is right and we’re the exception to it, whether the principle just isn’t right for the sort of organisation we are, or whether we’re being irrational. My current instinct is that we should be aiming to focus more than we have done, and that we’ve just taken a good step in that direction.
> In most cases so far, the counterfactual is little research, rather than using some other consultancy. And in the wider landscape, there seems to be just very little in the direction of what we’d call EA charity recommendations. There’s GiveWell / Open Phil, there’s philanthropic advising that’s very heavily about understanding the preferences of the donor and finding charities that ‘fit’ those preferences, and there seems to us to be a very significant gap in the middle.
Seems pretty convincing. This work also seems somewhat well suited to CEA, since you’re a natural point of contact for people interested in giving better, and large donors will be more impressed by recommendations made by an Oxford-affiliated organization.
> If that means we should abandon policy and politics as a whole, however, I think that would be wrong. Politics is a huge lever in the world, perhaps the single biggest lever, and to dismiss that whole method of making the world better from the outset would be to narrow down our options far too quickly.
I agree that it seems like a big, important lever, but I’m less certain that it’s a good fit for the profile of strengths the EA movement has currently built up. If someone were to create an app that made running ideological Turing tests easy, and EAs in charge of policymaking were passing them at a much higher rate than matched controls with comparable education and ability, that’s the kind of thing that might convince me that policy was a comparative advantage. (Same for winning bets about the results of particular policies with matched controls.) So far, I’ve seen much more focus on e.g. creating people with high-earning careers than creating people who score well according to these criteria. (Although that’s not the only conceivable approach: one could imagine the EA movement pushing for the legalization of prediction markets to outsource the work of making accurate predictions, for instance.)
> I’m intuitively also very wary of EA engaging in partisan politics. … Or e.g. being pro EU immigration to the UK 10 years ago (surely good!) ultimately leads to the unintended consequence of Brexit (oh no, wait, I hadn’t thought about political equilibrium effects).
It seems unlikely that CEA could engage in politics in a non-partisan fashion if you can’t even write a paragraph about being skeptical of partisan politics without resorting to partisan politics.
The true underlying objection to partisan politics isn’t that it involves political parties; it’s the tribal effects, which occur equally with immigration or Brexit.
Thanks so much for this comment!
> It seems unlikely that CEA could engage in politics in a non-partisan fashion if you can’t even write a paragraph about being skeptical of partisan politics without resorting to partisan politics.
Being pro EU immigration, as opposed to pro EU, is still taking a position on a policy-by-policy basis.