Hi Gregory,
We have never interacted before this, at least to my knowledge, and I worry that you may be bringing some external baggage into this interaction (perhaps some poor experience with some cryonics enthusiast...). I find your “let’s shut this down before it competes for resources” attitude very puzzling and aggressive, especially since you show zero evidence that you understand what I’m actually attempting to do or gather support for on the object-level. Very possibly we’d disagree on that too, which is fine, but I’m reading your responses as preemptively closed and uncharitable (perhaps veering toward ‘aggressively hostile’) toward anything that might ‘rock the EA boat’ as you see it.
I don’t think this is good for EA, and I don’t think it’s working off a reasonable model of the expected value of a new cause area. I.e., you seem to be implying the expected value of a new cause area would be at best zero, but more probably negative, due to zero-sum dynamics. On the other hand, I think a successful new cause area would more realistically draw in or internally generate at least as many resources as it would consume, and probably much more—my intuition is that at the upper bound we may be looking at something as synergistic as a factorial relationship (with three causes, the total ‘EA pie’ might be 3×2×1=6; with four causes the total ‘EA pie’ might be 4×3×2×1=24). More realistically, perhaps 4+3+2+1 instead of 3+2+1. This could be and probably is very wrong—but at the same time I think it’s more accurate than a zero-sum model.
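To make those toy models concrete, here is a minimal sketch in Python. The function names and the fixed pie size of 6 are my own illustrative assumptions, mirroring the numbers above; none of this is a forecast of actual EA resources:

```python
from math import factorial

# Three toy models of how the total 'EA pie' T(n) might scale
# with n cause areas. Constants are purely illustrative.

def pie_zero_sum(n: int, fixed_total: int = 6) -> int:
    """Zero-sum model: the pie is fixed; a new cause only redistributes it."""
    return fixed_total

def pie_additive(n: int) -> int:
    """Additive model: each cause brings in its own resources (3+2+1 -> 4+3+2+1)."""
    return sum(range(1, n + 1))

def pie_factorial(n: int) -> int:
    """Synergistic upper bound: causes multiply each other's reach (n!)."""
    return factorial(n)

for n in (3, 4):
    print(n, pie_zero_sum(n), pie_additive(n), pie_factorial(n))
# prints: 3 6 6 6
#         4 6 10 24
```

The only point is the shape of the three curves: under zero-sum a fourth cause adds nothing, under the additive model it grows the pie modestly, and under the factorial model it multiplies it.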
At any rate, I’m skeptical that we can turn this discussion into something that will generate value to either of us or to EA, so unless you have any specific things you’d like to discuss or clarify, I’m going to leave things here. Feel free to PM me questions.
I prefer to keep discussion on the object level, rather than offering adverse impressions of one another’s behaviour (e.g. uncharitable, aggressive, censorious, etc.)[1] with speculative diagnoses as to the root cause of these (“perhaps some poor experience with a cryonics enthusiast”).
To recall the dialectical context: the implication upthread was a worry that the EA community (or EA leadership) are improperly neglecting the mental health cause area, perhaps due to (in practice) some anti-weirdness bias. To which my counter-suggestion was that maybe EA generally, or its leaders, have instead made their best guess that this area isn’t more promising than those cause areas they already attend to.
I accept that conditional on some recondite moral and empirical matters, mental health interventions look promising. Yet that does not distinguish mental health from many other candidate cause areas, e.g.:
Life extension/cryonics
Pro-life advocacy/natural embryo loss mitigation
Immigration reform
Improving scientific norms, etc.
All generally have potentially large scale and are sometimes neglected, but have less persuasive tractability. In terms of some hypothetical disaggregated EA resource (e.g. people, money), I’d prefer it to go into one of the ‘big three’ rather than any of these other areas, as my impression is that the marginal returns for any of the big three are greater than for any of the others. In other senses there may not be such zero-sum dynamics (i.e. conditional on Alice only wanting to work in mental health, better that she work in EA-style mental health), yet I aver this doesn’t really apply to which topics the movement gives relative prominence to (after all, one might hope that people switch from lower- to higher-impact cause areas, as I have attempted to do).
Of course, there remains value in exploration: if in fact EA writ large is undervaluing mental health, it would want to know about it and change tack. What I hope would happen, if I am wrong in my assessment of mental health, is that public discussion would persuade more and more people of the merits of this approach (perhaps I’m incorrigible; hopefully third parties are not), and that it gains momentum from a large enough crowd of interested people that it becomes its own thing, with similar size and esteem to areas ‘within the movement’. Inferring from the fact that this has not yet happened that the EA community is not giving it a fair hearing is not necessarily wise.
[1]: I take particular exception to the accusations of censoriousness (from Plant) and wanting to ‘shut down discussion’ [from Plant and yourself]. In what possible world is arguing publicly on the internet a censorious act? I don’t plot to ‘run the mental health guys out of the EA movement’, I don’t work behind the scenes to talk to moderators to get rid of your contributions, I don’t downvote remarks or posts on mental health, and so on and so forth for any remotely plausible ‘shutting down discussion’ behaviour. I leave adverse remarks I could make to this apophasis.
I’m not seeing object-level arguments against mental health as an EA cause area. We have made some object-level arguments for, and I’m working on a longer-form description of what QRI plans in this space. Look for more object-level work and meta-level organizing over the coming months.
I’d welcome object-level feedback on our approaches. It didn’t seem like your comments above were feedback-focused, but rather they seemed motivated by a belief that this was not “a good direction for EA energy to go relative to the other major ones.” I can’t rule that out at this point. But I don’t like seeing a community member just dishing out relatively content-free dismissiveness on people at a relatively early stage in trying to build something new. If you don’t see any good interventions here, and don’t think we’ll figure out any good interventions, it seems much better to just let us fail, rather than actively try to pour cold water on us. If we’re on the verge of using lots of community resources on something that you know to be unworkable, please pour the cold water. But if your argument boils down to “this seems like a bad idea, but I can’t give any object-level reasons, but I really want people to know I think this is a bad idea” then I’m not sure what value this interaction can produce.
But, that said, I’d also like to apologize if I’ve come on too strong in this back-and-forth, or if you feel I’ve maligned your motives. I think you seem smart, honest, invested in doing good as you see it, and are obviously willing to speak your mind. I would love to channel this into making our ideas better! In trying to do something new, there’s approximately a 100% chance we’ll make a lot of mistakes. I’d like to enlist your help in figuring out where the mistakes are and better alternatives. Or, if you’d rather preemptively write off mental health as a cause area, that’s your prerogative. But we’re in this tent together, and although all the evidence I have suggests we have significantly different (perhaps downright dissonant) cognitive styles, perhaps we can still find some moral trade.
Best wishes, Mike