What a great goal, and the early steps you’ve made here feel like they address many of my concerns. I think it’s great you did an initial survey and will be doing a pilot of executive function coaching—as I read, I had the thought that I have no idea how easy it is to do executive function coaching well, or how easy it is to make things a lot worse, so piloting it seems exactly right.
With love and support for the project, I’ll note a couple remaining concerns:
Minipoint about price: I have never paid for things in the realm of {coaching, therapy, tutoring} that were ≤ $60 an hour. As a tutor I have charged both 1/3 of that and an order of magnitude more than that. Do you feel confident those things will be available at that price, or is the difference in money there not very important?
First concern: Helping with perverse retention
Perverse retention. Part of the problem is about retention: not losing people from the community just because they’re unlucky. But there should be graceful routes for people to decide EA isn’t for them too. A non-EA mental health worker is one of these routes. Mitigation: But EA support people can easily do it too, by just not being naive fanatics.
I’m not sure what role not being a naive fanatic (or the linked post, the importance of taking care of one’s self) is playing here. Is the idea that thoughtful EA support people will float the hypothesis “maybe EA is not for you?” when appropriate?
The concerns about perverse retention seem like:
1. People won’t leave EA even if they should because that’s where their mental and financial support is coming from
2. People won’t leave EA even if they should because their support person is an EA and they’re so nice and great and you don’t want to disappoint them
3. People won’t leave EA even if they should because their support person is trying to do everything to keep them there
But then also, if I were an EA support person, while I would be totally comfortable suggesting that EA isn’t the right choice, I would feel in a weird position if someone still wanted my time. I would probably want to give that time! But I would have signed up because I want to support people who share my values, or who I think are likely to make the world better, or because I want my community to be a good community. Helping other lovely people is good, but not what I signed up for.
So I’m not sure which of these the mitigation addresses, and I don’t really know how to mitigate any of these, except that 3 is very bad and hopefully no one like that will end up being a support person, and 1 and 2 can maybe be mitigated by gentle support?
Second concern: Parts Model
Which is why executive dysfunction should not be treated by default as a difficulty that needs to be overcome. Instead it can also be a signal from one or more of your parts that the path you’re on is not the right one for you, and that you might benefit from searching for other, better roads, or even goals.
This reads to me as though the parts model is uncontroversial, real, and the right way to think about oneself, without any flag that this Off-Road experiment is taking a very particular view on the matter. Maybe this post is that flag? It does spend the first half laying out its view on how to help and why that’s the right intervention point for Off-Road, so maybe I’m totally off base here, but while “executive function is the right intervention” felt like it got labelled as a hypothesis and explained, this parts thing didn’t. Is it so uncontroversially true (and a useful lens) that it doesn’t need to be flagged? Is this cordoning off a certain section of the space, with other people expected to do non-parts-based work, or is this an attractor in the space of trying to help?
A therapist friend of mine says that IFS done badly can be extremely harmful (probably not a concern here since “therapy” won’t be the main dynamic) and a CFAR person we know thinks that thinking of yourself in parts is one of the more potentially dangerous things CFAR teaches.
But also parts of it seem very commonsensical, and certainly lots of people find it useful. (I find it semi-useful, and also worry it gives people a superweapon with which to describe me in ways I can’t push back on).
Anyway, I bumped on it.
Potential worries: people will update too strongly that this is the consensus or universal view, or that help of a different kind will be harder to find because the lacuna won’t be obvious.
Third concern: Getting rid of shoulds
I don’t have a ton of evidence in this area, and (everyone − 1) I know who has read Replacing Guilt has liked it (including me, though I don’t seem to have liked it as much as other people), so there’s a lot of reason to dismiss what I’m saying here. Also, there are elements of Nate’s thinking that I have incorporated into my own life and my teaching of high school students for years, including before I read his work; there was a synchrony of thinking.
Nonetheless, getting rid of shoulds entirely feels like a very strong intervention, and I have some Chesterton’s fence intuitions about it. I also personally don’t really like the idea of EA being a place where moral obligations don’t matter; it seems like that takes away a really beautiful and core element, though the proof will be in the pudding if the world is made better that way.
So a project that again seems framed as general help, while taking this particular view without a big flag saying where it stands (again, this post could be that flag, but I suspect it would need to be a bigger one), worries me.
Same worries as above: people will update too strongly that this is the consensus or universal view, or that help of a different kind will be harder to find because the lacuna won’t be obvious.
Conclusion:
None of this makes me think the project is bad or net negative, and I erred on the side of stating rather than not stating concerns, so take them on board if they’re useful and not if they’re not. Fwiw, the first one is the biggest concern to me.
Gavin covers the rest of it, so to talk about the “parts” thing: in this context I’m using it more as a semantic handle on what it means to have internal conflict, and not explicitly as an IFS thing. Psychotherapists have been talking about individuals as being made up of “parts” from the very beginning (Freud’s Id, Ego, Superego), and with all due respect to our mutual CFAR friend, if there’s any other way to describe and interface with the experience of internal conflict that works as well, I have yet to hear it :)
In other words, I’ve written “a signal from one or more of your parts” as basically equivalent to “a signal that you aren’t fully convinced.” I think the latter is a lower-resolution way of saying the former, but could be convinced it’s better if people largely expect the coaching to center around IFS-type things.
As for “shoulds,” I think we can get rid of the way they exist as harmful things without eliminating what you call “moral obligations,” which I agree are good things (and sort of important to the “Altruist” part of Effective Altruism!). Basically I consider the two phrases to be pointing at very different phenomena in general; I think “shoulds” come from an external source, even if they’ve been internalized, while moral obligations are the result of internal generators, and aren’t the sort of thing that would respond to the sorts of questions and interventions that tend to dissolve shoulds.
The thing about parts not being necessarily about IFS specifically should have occurred to me, thank you!
Thanks Chana! I appreciate you thinking hard about this, and hope it’ll make us more careful and good.
Price. My EA coach is $60 an hour (with student discount), which is my only datum. Happy to amend given more data.
Retention. Yeah, you capture what I was thinking about with (3): not being a naive optimiser, not squeezing as many people into EA as you can despite their misery and lack of fit. The self-care link is pointing at the same vague spirit: don’t routinely crush feelings (in that case, your own). Both my and Damon’s instincts run pretty heavily against indoctrination, so we should be able to spot it in others. I don’t think we’ll set any policy about continuing to help people after they leave EA; that’s clearly a matter of conscience and context.
I take (1) and (2) pretty seriously, but Free Support Booking, the current leading idea, is designed to mitigate them ~completely: the idea is we book “external” (non-EA) support people. I just forgot to say this at any point. Only trouble is the money.
Parts. I’ll let Damon respond in full, but my take is: I don’t think that sentence is meant as a strong claim or a mission statement. Parts stuff is a mental model: often useful, always extremely unclear metaphysically. Taken metaphorically (“as if I had several subagents, several utility functions, internal conflict”), it seems fine. We haven’t designed the coaching yet, but it won’t involve intense IFS or whatnot.
I find it hard to think about the baseline risk of all psychological intervention (all intervention), which is what I take your concerned friends to be pointing at. Going to a standard psychodynamic therapist seems similarly risky to me (i.e. not very).
Shoulds. Happy to flag it. (I personally get a lot out of shoulds, so we’re not the anti-should movement.)
Thank you! This makes sense to me.