I worry you’ve missed the most important part of the analysis. If we think about what it means for a “new cause to be accepted by the effective altruism movement”, it would probably mean either:
1. It becomes a cause area touted by EA organisations like GiveWell, CEA, or GWWC. In practice, this involves convincing the leadership of those organisations. If you want to get a new cause in via this route, that’s the end goal you need to achieve; writing good arguments is a means to that end.
2. You convince individual EAs to change what they do. To a large extent, this also depends on convincing EA-org leadership, because that’s who people look to for confirmation that a new cause has been vetted. It isn’t necessarily stupid for individual EAs to defer to expert judgement here: they might think, “Oh, well, if so-and-so aren’t convinced about X, there’s probably a reason for it”.
This seems as good a time as any to re-plug the stuff I’ve done. I think these mostly meet your criteria, but fail in some key ways.
I first posted about mental health and happiness 18 months ago and explained why poverty is a less effective cause than most people think and mental health a more effective one. I think I was, at the time, lacking a particular charity recommendation, though (I now think Basic Needs and Strong Minds look like reasonable picks); I agree it’s important that new cause suggestions have a ‘shovel-ready’ project.
I argued you, whoever you are, probably don’t want to donate to the Against Malaria Foundation. I explained why it’s probably a mistake for EAs to focus too much on ‘saving lives’ at the expense of either ‘improving lives’ or ‘saving humanity’.
Back in August I explained why drug policy reform should be taken seriously as a new cause. I agree that it lacks a shovel-ready project too, but, if anything, I think there was too much depth and rigour there. I’m still waiting for anyone to tell me where my EV calcs have gone wrong and why drug policy reform wouldn’t be more cost-effective than anything in GiveWell’s repertoire.
For what it’s worth, we had some back & forth regarding modeling assumptions around drug policy reform cost-effectiveness:
http://effective-altruism.com/ea/1em/costeffectiveness_analysis_drug_liberalization/bx1
I remember. I don’t think we quite got to the bottom of the issue, however, and we couldn’t agree on what the right counterfactual was.
Sure, but I don’t think the right summary here is “no one has told me how my EV calc is wrong.”
A better summary probably includes something like “EV calcs are complicated and their outputs are very sensitive to the modeling assumptions used.”
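To illustrate that in very rough terms, here’s a toy sketch (every figure and the counterfactual-discount parameter are placeholders I’ve made up for illustration, not numbers from your model or anyone else’s):

```python
# Toy expected-value calculation for a hypothetical policy-advocacy intervention.
# All numbers are illustrative placeholders, not estimates from any real DPR model.

def expected_value(p_success, benefit_if_success, counterfactual_discount, cost):
    """Expected benefit per dollar, after discounting for what would have
    happened anyway (the counterfactual)."""
    return p_success * benefit_if_success * (1 - counterfactual_discount) / cost

cost = 1_000_000      # dollars spent on advocacy
benefit = 500_000     # well-being units gained if the reform passes
p_success = 0.05      # chance the campaign tips the outcome

# The same inputs under two different counterfactual assumptions:
optimistic = expected_value(p_success, benefit, counterfactual_discount=0.2, cost=cost)
pessimistic = expected_value(p_success, benefit, counterfactual_discount=0.98, cost=cost)

print(f"optimistic:  {optimistic:.4f} units per dollar")
print(f"pessimistic: {pessimistic:.4f} units per dollar")
# The two outputs differ by a factor of 40, driven entirely by one disputed assumption.
```

The point isn’t the specific numbers; it’s that the output can move by more than an order of magnitude purely because of one contested modeling choice, like the counterfactual.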
Yes. I think I was over-selling my point and that was a mistake. Our back and forth was useful and I’ll have to think it through again when I next look at DPR.
By way of explanation, I think I was venting my frustration at the ratio of “time I spent researching and writing about drug policy reform : serious interest it received”.
I think you’re right that having “an organization” talking about X is necessary for X to reach “full legitimacy”, but it’s worth pointing out that many pioneers in new areas within EA just started their own orgs (ACE, MIRI etc.) rather than trying to persuade others to support them.
Having even a nominal “project” allows you to collaborate more easily with others and starts to build credibility that isn’t just linked to you. I think perhaps you should just start MH&HR.
Interesting thoughts, actually...
What does the R stand for?
“Mental Health and Happiness Research”. Coin your own meaningless acronym if you don’t like it :)
One thing I’d note here is that the rigor of GiveWell’s analyses and of your EV calcs is very different. There are other EV calcs out there with similar rigor that promise significantly more good stuff per dollar, such as most work in the far-future cause space.
I’d also note that GiveWell replied to your argument here: https://blog.givewell.org/2016/12/12/amf-population-ethics/
Sort of a side question, but could you say what sort of thing you had in mind? I.e., the particular sense in which GW’s calcs are rigorous. I ask because I find their assumptions odd/pretty dissatisfying and think they leave out loads of stuff. I mean to write about this when I find time.
This isn’t to say my calculations are more rigorous than theirs. GW have loads more detail.
I think I’d broadly model rigor in a framework like this as the standard deviation of the cost-effectiveness estimate within an X% credibility interval (where X% is consistent across all compared intervals). Models with lower standard deviations can be said to be more rigorous, as there are fewer (known) sources of uncertainty.
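One way to operationalise this is a Monte Carlo comparison. Here’s a rough hypothetical sketch (the input distributions are invented placeholders, not drawn from GiveWell’s or anyone else’s model) comparing two cost-effectiveness models by the spread of their estimates inside a 90% credibility interval:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def interval_spread(samples, credibility=0.90):
    """Standard deviation of the cost-effectiveness samples that fall
    inside the central `credibility` interval."""
    lo, hi = np.quantile(samples, [(1 - credibility) / 2, 1 - (1 - credibility) / 2])
    inside = samples[(samples >= lo) & (samples <= hi)]
    return inside.std()

# Model A: well-studied intervention, narrow input distributions (placeholder numbers).
cost_a = rng.normal(5.0, 0.5, N)      # $ per unit of benefit delivered
effect_a = rng.normal(1.0, 0.1, N)    # benefit per unit
ce_a = effect_a / cost_a

# Model B: speculative intervention, wide input distributions (placeholder numbers).
cost_b = rng.lognormal(np.log(5.0), 1.0, N)
effect_b = rng.lognormal(0.0, 1.0, N)
ce_b = effect_b / cost_b

print("spread A (more rigorous on this metric):", round(interval_spread(ce_a), 3))
print("spread B (less rigorous on this metric):", round(interval_spread(ce_b), 3))
```

On this metric, the model whose estimate is spread more widely within the same credibility interval counts as less rigorous, regardless of which one has the higher point estimate.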
So I think we agree on some things and disagree on others. I think that getting large EA organizations to adopt a cause definitely helps, but is not necessary. Animal rights as a whole, for example, is not mentioned at all on GiveWell or GWWC, and it’s listed as a second-tier area by 80,000 Hours (bit.ly/2DdxCqQ), but it is still pretty clearly endorsed by EA as a whole. If by EA orgs you mean EA orgs of any size, I do think that most cause areas accepted by the EA movement will get organizations started in them in time. I think causes like wild animal suffering and positive psychology are decent examples of causes that have gotten some traction without major pre-existing organizations endorsing them. It might also come down to disagreements about definitions of “in EA”.
I almost put your blogs into this post as a positive example of what I wish people would do, but I wanted to keep the post short. In general, I think your efforts on mental health have updated more than a few EAs in a positive direction towards it, including myself. There has been some related external content and research on this topic in part because of your posts, and I would put a nontrivial chance on some EAs in the next 1-5 years focusing exclusively on this cause area and starting something in it. In general, I would expect adoption of new causes to be fairly slow, starting with small numbers of people and maybe one organization before expanding onto the standard go-to EA list.
If I were to guess what is holding back mental health / positive psych as a cause area, it would be the lack of a really strong concrete charity to donate to. By strong charity, I mean a strong cost-effectiveness analysis, but also a focus on a narrow set of interventions, a decent evidence base/track record, strong M&E, and decent investigation by an external EA party (which would not have to be an org; it could be an individual). Something like Strong Minds might be a good fit for this.
I made this same point in the ‘Effective Altruism’ Facebook group a while ago, if anyone wants to follow the other public conversation on the topic. I wonder if it would be a good idea to write a post on the EA Forum summarizing these kinds of points and requesting evaluations or reviews of charities that rigorously implement effective positive psychology interventions.
I think this is missing some prior steps in how a cause can be built up in the effective altruism movement. For example, a focus on risks of astronomical future suffering (“s-risks”) and on reducing wild animal suffering (RWAS), both largely inspired in EA by Brian Tomasik’s work, have found success in the German-speaking world and, increasingly, globally throughout the movement. These are causes which have both largely circumvented attention from either the Open Philanthropy Project (Open Phil) or the Centre for Effective Altruism (CEA) and its satellite projects (e.g., GWWC, 80,000 Hours, etc.).
Since the beginning of effective altruism, global poverty alleviation and global health have been the biggest focus areas. As the movement grew, I witnessed causes being developed through a mix of online coordination at the global level, via social networks like Facebook, mailing lists, and fora like LessWrong, and locally or regionally through non-profit organizations focused on outreach and research. This was the case for both AI safety and farm animal welfare, which proportionally didn’t have nearly the representation in EA five years ago that they have now.
Certainly smaller focus areas like s-risk reduction and RWAS receive much less attention than others in EA. However, the fact that each of those causes is funded at between $100k and $1 million USD across multiple organizations, largely by individual effective altruists, is proof of concept that a cause can be built up without being touted by CEA or Open Phil. And it’s not as if the trajectory of these causes looks bleak: they’ve been building growth momentum for years and show no signs of slowing. How much success they achieve in the near future will provide more data about what’s possible in getting a new cause into EA. What’s more, RWAS at least is a cause that’s on Open Phil’s radar, so it’s not as if grants or endorsements of these causes from Open Phil or CEA couldn’t happen in the future.
In general, I think developing a cause within the effective altruism community often precedes more focus on it from the movement’s flagship organizations, and that the process of development often follows the kinds of steps Joey outlined above. Obviously there could be more to the process than just that. I’m working on a post to introduce a project which builds on the kinds of steps Joey pointed out, and that you’ve already taken, to organize and coordinate causes in effective altruism.
The people at 80kh etc. probably have their hands full. So even though your post making the case for mental health was laudable, I can well imagine it might not result in short-term action on their part because of heavy prioritization.
If one wants to make a substantive case and a roadmap of possible actions for MH, it might make sense to take the initiative and do it oneself, or together with a group of interested people. Given enough credence for the case, this effort might lead to the formation of a new EA-aligned MH organization. I, for one, might be interested in helping out with making the case for MH.