I don't think mental health has comparably good interventions to either of these, even given the caveats you note. Cost per QALY or similar for treatment looks to have central estimates much higher than these, and we should probably guess mental health interventions in poor countries have more regression to the mean to go.
Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD.
"I don't think mental health has comparably good ... [c]ost per QALY or similar. Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD."
I think the core argument for mental health as a new cause area is that (1) yes, current mental health interventions are pretty bad on average, but (2) there could be low-hanging fruit locked away behind things that look "too weird to try", and (3) EA may be in a position to signal-boost the weird things ("pull the ropes sideways") that have a plausible chance of working.
Using psilocybin as an adjunct to therapy seems like a reasonable example of some low-hanging fruit that's effective, yet hasn't been Really Tried, since it is weird. And this definitely does not exhaust the set of weird & plausible interventions.
I'd also like to signal-boost @MichaelPlant's notion that "A more general worry is that effective altruists focus too much on saving lives rather than improving lives." At some point, we'll hit hard diminishing returns on how many lives we can "save" (delay the passing of) at reasonable cost or without significant negative externalities. We may be at that point now. If we're serious about "doing the most good we can do", I think it's reasonable to explore a pivot to improving lives, and mental health is a pretty key component of this.
Points 1-3 look general, and can in essence be claimed to apply to any putative cause area not currently thought to be a good candidate. E.g.:
1) Current anti-aging interventions are pretty bad on average.
2) There could be low-hanging fruit behind things that look "too weird to try".
3) EA may be in a position to signal-boost weird things that have a plausible chance of working.
Mutatis mutandis for criminal justice reform, improving empathy, human enhancement, and so on. One could adjudicate between these competing areas by evidence that some really do have this low-hanging fruit. Yet it remains unclear that (for example) the psilocybin data give more of a boost than (say) cryonics. Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a "new cause area" based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts marginal returns similar to the "big three".
(Given GCR/x-risks, I think the "opportunities" for saving quite a lot of lives (everyone's) are increasing. Setting that aside (which one shouldn't), I agree it seems likely that status quo progress will exhaust preventable mortality faster than preventable ill-health. Yet I don't think we are there yet.)
I worry that you're also using a fully-general argument here, one that would also apply to established EA cause areas.
This stands out to me in particular:
"Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts marginal returns similar to the 'big three'."
There's a lot here that I'd challenge. E.g.: (1) I think you're implicitly overstating how good the marginal returns on the "big three" actually are; (2) you seem to be doubling down on the notion that "saving lives is better than improving lives", or that "the current calculus of EA does and should lean toward reduction of mortality, not improving well-being", which I challenged above; (3) I don't think your analogy between cryonics (which, for the record, I'm skeptical of as an EA cause area) and, e.g., Enthea's collation of research on psilocybin is very solid.
I would also push back on how dismissive "Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights" sounds. Enthusiasts are the ones who create new cause areas. We wouldn't have any cause areas, save for those "silly enthusiasts". Perhaps I'm misreading your intended tone, however.
Respectfully, I take "challenging P" to require offering considerations for ¬P. Remarks like "I worry you're using a fully-general argument" (without describing what it is or how my remarks produce it) or "I don't think your analogy is very solid" (without offering dis-analogies) don't carry much more information than simply "I disagree".
1) I'd suggest astronomical-stakes considerations imply that at least one of the "big three" does have extremely large marginal returns. If one prefers something much more concrete, I'd point to the humane reforms improving quality of life for millions of animals.
2) I don't think the primacy of the big three depends in any important way on recondite issues of disability weights or population ethics. Conditional on a strict person-affecting view (which denies the badness of death), I would still think the current margin of global health interventions offers better yields. I base this on current best estimates of disability weights in things like the GCPP, and on the lack of robust evidence for something better in mental health (we should expect, for example, Enthea's results to regress significantly, perhaps all the way back to the null).
On the general point: I am dismissive of mental health as a cause area insofar as I don't believe it to be a good direction for EA energy relative to the other major ones (and especially my own "best bet" of x-risk). I don't want it to be a cause area, as it will plausibly compete for time/attention/etc. with other things I deem more important. I'm no EA leader, but I don't think we need to impute some "anti-weirdness bias" (which I think is facially implausible given the early embrace of AI stuff, etc.) to explain why they might think the same.
Naturally, I may be wrong in this determination, and if I am wrong, I want to know about it. Thus having enthusiasts go into more speculative things outside the currently recognised cause areas improves the likelihood of the movement self-correcting and realising mental health should be on a par with (e.g.) animal welfare as a valuable use of EA energy.
Yet anointing mental health as a cause area before this case has been persuasively made would be a bad approach. There are many other candidates for "cause area no. n+1" which (as I suggested above) have about the same plausibility as mental health. Making them all recognised "cause areas" seems the wrong approach. Thus the threshold should be higher.
Just to chip in. I agree that, if you care about the far future, mental health (along with poverty, physical health, and pretty much anything apart from x-risk-focused interventions) will at least look like a waste of time. Further analysis may reveal this to be a bit more complicated, but this isn't the time for such complicated, further analysis.
"I don't want it to be a cause area as it will plausibly compete for time/attention/etc."
I think this probably isn't true, just because those interested in current-human vs far-future stuff are two different audiences. It's more a question of whether, inasmuch as people are going to focus on current stuff, they would do more good by focusing on mental health over poverty. There's a comment about moral trade to be made here.
I also find the apparent underlying attitude here unsettling. It's sort of an "I think your views are stupid and I'm confident I know best, so I just want to shut them out of the conversation rather than let others make up their own minds" approach. On a personal level, I find this thinking (which, unless I'm paranoid, I've encountered in the EA world before) really annoying. I say some stuff in this area in this post on moral inclusivity.
I also think both of you are being too hypothetical about mental health. Halstead and Snowden have a new report where they reckon StrongMinds is $225/DALY, which is comparable to AMF if you think AMF's life-saving is equivalent to 40 years of life-improving treatments.
Drug policy reform I consider to be less at the "this might be a good idea but we have no reason to think so" stage and more at the "oh wow, if this is true it's really promising and we should look into it to find out if it is true" stage. I'm unclear what the bar is to be anointed an "official cause", or who we should allow to be in charge of such censorious judgements.
Hi Gregory,

We have never interacted before this, at least to my knowledge, and I worry that you may be bringing some external baggage into this interaction (perhaps some poor experience with a cryonics enthusiast...). I find your "let's shut this down before it competes for resources" attitude very puzzling and aggressive, especially since you show zero evidence that you understand what I'm actually attempting to do or gather support for on the object level. Very possibly we'd disagree on that too, which is fine, but I'm reading your responses as preemptively closed and uncharitable (perhaps veering toward "aggressively hostile") toward anything that might "rock the EA boat" as you see it.
I don't think this is good for EA, and I don't think it's working off a reasonable model of the expected value of a new cause area. I.e., you seem to be implying the expected value of a new cause area would be at best zero, but more probably negative, due to zero-sum dynamics. On the other hand, I think a successful new cause area would more realistically draw in or internally generate at least as many resources as it would consume, and probably much more. My intuition is that at the upper bound we may be looking at something as synergistic as a factorial relationship (with three causes, the total "EA pie" might be 3 × 2 × 1 = 6; with four causes, 4 × 3 × 2 × 1 = 24). More realistically, perhaps 4+3+2+1 instead of 3+2+1. This could be, and probably is, very wrong, but at the same time I think it's more accurate than a zero-sum model.
At any rate, I'm skeptical that we can turn this discussion into something that will generate value for either of us or for EA, so unless you have any specific things you'd like to discuss or clarify, I'm going to leave things here. Feel free to PM me questions.
I prefer to keep discussion on the object level, rather than offering adverse impressions of one another's behaviour (e.g. uncharitable, aggressive, censorious, etc.)[1] with speculative diagnoses as to the root cause of these ("perhaps some poor experience with a cryonics enthusiast").
To recall the dialectical context: the implication upthread was a worry that the EA community (or EA leadership) is improperly neglecting the mental health cause area, perhaps due to (in practice) some anti-weirdness bias. To which my counter-suggestion was that maybe EA generally (and leaders thereof) have instead made their best guess that this area isn't more promising than those cause areas they already attend to.
I accept that conditional on some recondite moral and empirical matters, mental health interventions look promising. Yet that does not distinguish mental health from many other candidate cause areas, e.g.:
Life extension/cryonics
Pro-life advocacy/natural embryo loss mitigation
Immigration reform
Improving scientific norms
etc.
All generally have potentially large scale, and are sometimes neglected, but have less persuasive tractability. In terms of some hypothetical disaggregated EA resource (e.g. people, money), I'd prefer it to go into one of the "big three" than into any of these other areas, as my impression is the marginal returns for any of the three are greater. In other senses there may not be such zero-sum dynamics (i.e. conditional on Alice only wanting to work in mental health, better that she work in EA-style mental health), yet I aver this doesn't really apply to which topics the movement gives relative prominence to (after all, one might hope that people switch from lower- to higher-impact cause areas, as I have attempted to do).
Of course, there remains value in exploration: if in fact EA writ large is undervaluing mental health, it would want to know about it and change tack. What I hope would happen, if I am wrong in my determination of mental health, is that public discussion of the merits would persuade more and more people of this approach (perhaps I'm incorrigible; hopefully third parties are not), so that it gains momentum from a large enough crowd of interested people and becomes its own thing, with similar size and esteem to areas "within the movement". Inferring from the fact that this has not yet happened that the EA community is not giving it a fair hearing is not necessarily wise.
[1]: I take particular exception to the accusations of censoriousness (from Plant) and of wanting to "shut down discussion" (from Plant and yourself). In what possible world is arguing publicly on the internet a censorious act? I don't plot to "run the mental health guys out of the EA movement", I don't work behind the scenes to talk to moderators to get rid of your contributions, I don't downvote remarks or posts on mental health, and so on and so forth for any remotely plausible "shutting down discussion" behaviour. I leave the adverse remarks I could make to this apophasis.
I'm not seeing object-level arguments against mental health as an EA cause area. We have made some object-level arguments for it, and I'm working on a longer-form description of what QRI plans in this space. Look for more object-level work and meta-level organizing over the coming months.
I'd welcome object-level feedback on our approaches. It didn't seem like your comments above were feedback-focused; rather, they seemed motivated by a belief that this was not "a good direction for EA energy to go relative to the other major ones." I can't rule that out at this point. But I don't like seeing a community member just dishing out relatively content-free dismissiveness at people at a relatively early stage of trying to build something new. If you don't see any good interventions here, and don't think we'll figure out any good interventions, it seems much better to just let us fail rather than actively try to pour cold water on us. If we're on the verge of using lots of community resources on something that you know to be unworkable, please pour the cold water. But if your argument boils down to "this seems like a bad idea, but I can't give any object-level reasons, but I really want people to know I think this is a bad idea", then I'm not sure what value this interaction can produce.
But, that said, I'd also like to apologize if I've come on too strong in this back-and-forth, or if you feel I've maligned your motives. I think you seem smart, honest, invested in doing good as you see it, and obviously willing to speak your mind. I would love to channel this into making our ideas better! In trying to do something new, there's approximately a 100% chance we'll make a lot of mistakes. I'd like to enlist your help in figuring out where the mistakes are and what the better alternatives might be. Or, if you'd rather preemptively write off mental health as a cause area, that's your prerogative. But we're in this tent together, and although all the evidence I have suggests we have significantly different (perhaps downright dissonant) cognitive styles, perhaps we can still find some moral trade.

Best wishes, Mike