to rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy’s Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.
In the proposals discussed, was the idea that non-AI-related causes would decrease the share of support they received from current levels? Or would, e.g., the EAG replacement process be offset by making one of the others non-AI focused (or by increasing the amount of support those causes received in some other way)?
There was significant disagreement about whether 80k (which was chosen as a concrete example to shed light on a more general question that many meta-orgs run into) should be more explicit about its focus on longtermism/existential risk.
I have to say, this really worries me. It seems like it should be self-evidently good after FTX and all the subsequent focus on honesty and virtue that EA organisations should be as transparent as possible about their motivations. Do we know what the rationale of the people who disagreed was?
Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests existential risks are the most pressing issues. The reason we give, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.
Here are some of the places we talk about this:
1. Our problem profiles page (one of our most popular pages) explicitly says we rank existential risks as most pressing (ranking AI first) and explains why, both at the very top of the page (“We aim to list issues where each additional person can have the most positive impact. So we focus on problems that others neglect, which are solvable, and which are unusually big in scale, often because they could affect many future generations — such as existential risks. This makes our list different from those you might find elsewhere.”) and in more depth in the FAQ, as well as in the problem profiles themselves.
2. We say at the top of our “priority paths” list that these are aimed at people who “want to help tackle the global problems we think are most pressing”, linking back to the problems ranking.
3. We also have in-depth discussions of our views on longtermism and the importance of existential risk in our advanced series.
So we are aiming to be honest about our motivations and problem prioritization, and I think we succeed. For what it’s worth I don’t often come across cases of people who have misconceptions about what issues we think are most pressing (though if you know of any such people please let me know!).
That said, I basically agree we could make these views more obvious! E.g. we don’t talk about them much on the front page of the site, in our ‘start here’ essay, or at the beginning of the career guide. I’m open to thinking we should.
One way of interpreting the call to make our longtermist perspective more “explicit”: I think some people think we should pitch our career advice exclusively at longtermists, or people who already want to work on x-risk. We could definitely move further in this direction, but I think we have some good reasons not to, including:
We think we offer a lot of value by introducing the ideas of longtermism and x-risk mitigation to people who aren’t familiar with these ideas already, and making the case that they are important – so narrowly targeting an audience that already shares these priorities (a very small number of people!) would mean leaving this source of impact on the table.
We have a lot of materials that can be useful to people who want to do good in their careers but won’t necessarily adopt a longtermist perspective. And insofar as having EA be “big tent” is a good thing (which I tend to think it is though am not that confident), I’m happy 80k introduces a lot of people who will take different perspectives to EA.
We are cause neutral[1] – we prioritise x-risk reduction because we think it’s most pressing, but it’s possible we could learn more that would make us change our priorities. Since we’re open to that, it seems reasonable not to fully tie our brand to longtermism or existential risk. It might even be misleading to open with x-risk, since it’d fail to communicate that we are prioritising it because of our views about the pressingness of existential risk reduction. And since the value proposition of our site for readers is in part to help them have more impact, I think they want to know which issues we think are most pressing.
[1] Contrast with being unopinionated about causes. Cause neutrality, in this usage, means being open to prioritising whatever causes you think will allow you to help others the most, which you may well already have an opinion on.
“E.g. we don’t talk about them much on the front page of the site, in our ‘start here’ essay, or at the beginning of the career guide. I’m open to thinking we should.”
I agree with this, and feel the most transparent approach might be to put your headline findings on the front page more prominently, because, as you say, you do have to dig a surprising amount to find them.
Something like (forgive the average wording)
“We think that working on longtermist causes is the best way to do good, so check these out here...”
Then maybe even a caveat somewhere (blatant near-termist plug): “Some people believe near-termist causes are the most important, and others, due to their skills or life stage, may be in a better position to work on near-term causes. If you’re interested in learning more about high-impact near-termist causes, check these out here...”
Obviously as a web manager you could do far better with the wording but you get my drift!
I can’t speak for other people who filled out the survey but: I agree that orgs should be transparent about their motivations.
The question asks (basically) “should 80k be more transparent [than it currently is]”, and I think I gave a “probably not” type answer, because I think 80k is already fairly transparent about this (e.g. it’s pretty clear when you look at their problem profiles).
Copying from my comment above:
Update: we’ve now added some copy on this to our ‘about us’ page, the front page (where we talk about a ‘list of the world’s most pressing problems’), our ‘start here’ page, and the introduction to our career guide.
Thanks :) we might workshop a few ways of getting something about this earlier in the user experience.