Thanks to both of you for writing this! Very valuable resource to have on hand, and a great review of different aspects of therapeutic modes of thought/self-help processes.
Great breakdown of the skills and concrete steps, thanks for writing this! I can already tell I’ll be linking people to this fairly often :)
I’ve been speaking to a number of people in university organizing groups who have been aware of these issues, and almost across the board the major obstacle they feel is that it seems too conflict-generating and guilt-inducing to essentially tell their friends and peers at their own or other universities something like “Hey, I think the thing you’re doing is actually causing a lot of harm.”

I would be very in favor of helping find ways to facilitate better communication between these groups that specifically targets ways they can improve in non-blaming, pro-social, and supportive ways.
Hi Dave, emailing you now :)
This is an enormously valuable project, thank you and the others so much for continuing to work on making sure it can meet the community’s needs!
Woops, good point. Fixed!
Gavin covers the rest of it, so to talk about the “parts” thing: in this context I’m using it more as a semantic handle on what it means to have internal conflict, and not explicitly as an IFS thing. Psychotherapists have been talking about individuals as being made up of “parts” from the very beginning (Freud’s Id, Ego, Superego), and with all due respect to our mutual CFAR friend, if there’s any other way to describe and interface with the experience of internal conflict as well, I have yet to hear it :) In other words, I’ve written “a signal from one or more of your parts” as basically equivalent to “a signal that you aren’t fully convinced.” I think the latter is a lower-resolution way of saying the former, but could be convinced it’s better if people largely expect the coaching to center around IFS-type things.

As for “shoulds,” I think we can get rid of the way they exist as harmful things without eliminating what you call “moral obligations,” which I agree are good things (and sort of important to the “Altruist” part of Effective Altruism!). Basically I consider the two phrases to be pointing at very different phenomena in general; I think “shoulds” come from an external source, even if they’ve been internalized, while moral obligations are the result of internal generators, and aren’t the sort of thing that would respond to the sorts of questions and interventions that tend to dissolve shoulds.
Agreed, more public examples of people who found something meaningful and impactful that wasn’t what they initially thought they would/should work on would help with that :)
This is great to hear and an interesting read, thank you for sharing!
Ah, yeah that wasn’t intended as my meaning. Will edit :)
Hey, thanks for the comment! Just to clarify, because I may be too sleep-deprived to track what you’re saying… I originally read that as proportional by percentage but not by absolute numbers, right?
So if roughly 900 new people per year are considered engaged enough to count as part of the community, ~20% of that and ~20% of 650 would still leave a growing number of people in the community working EA jobs, and even a ~30% or ~40% increase in jobs would still leave a growing absolute number of people in the community not working EA jobs.

(Again, not to say that this is necessarily bad, and as you noted there are also people who were funded by grants or doing research or similar)
Yeah, this seems like a hard problem to do well and safely from an organizational standpoint. I’m very sympathetic to the idea that it is an onerous cost on the organization’s side; what I’m uncertain about is whether it ends up being more beneficial to the community on net.
It’s been an ongoing discussion at SPARC and ESPR to try to decide how much or how little exposure to EA (as opposed to “EA ideas”) we want to make explicit during the camps. So far the answer has been mostly “fairly little,” though of course we do focus quite a lot on frank conversations about the upsides and downsides. But it’s definitely difficult to pass down wisdom rather than just knowledge, and some of the questions have no genuine or easy answer. Thinking on this is certainly something that keeps me up at night every so often.
There’s a subreddit called /r/rational which discusses and shares “rational” and “rationalist” fiction. Many of these include EA themes, both explicitly and implicitly.
Some I’d recommend, along with the ones others here have already shared: Worth the Candle, an original story about a teenager who gets transported into a fantasy world of his own creation and has to overcome both personal challenges like grief and societal ones like complex coordination problems. Animorphs: The Reckoning, a fanfic that re-imagines the alien-body-snatchers story of the original through a much more serious and thoughtful lens. And my own fanfic, Pokemon: The Origin of Species, which explores the Pokemon world through a more rational/EA lens while also teaching some psychology and therapy.
There are also comics like Strong Female Protagonist, which is about a superheroine who quits fighting crime and goes to college because she realizes she doesn’t know how to “actually” save the world.