It would be worth cross posting each blog post here!
Suggestions for getting retiree / second career folks interested in AI Safety?
Fair point. Is there a consensus within EA that EA should focus only on the causes that are most effective at increasing total utility, versus there being space to optimize impact within non-optimal causes?
My personal interests aside, it seems there would be a case for addressing this: many people outside the current EA movement are not particularly interested in maxing utils in the abstract, but are instead guided by personal priorities, so improving the efficacy of their donations within those priorities would have value. And to my knowledge, there is a vacuum in analyzing the impact of charities outside of top EA cause areas. I would imagine that, on net, it's a loss to divert non-trivial resources to this from higher-impact cause areas… arguably, asking people to share the information they currently have in low-effort ways would be positive on net, though I can see why one would want to promote conversational norms that discourage this.
Maybe I’ll take this to LessWrong, where I’ll hit many folks with the same knowledge base, but without violating the norms you put forth?
Thank you!
Suggestions for effective homelessness donations in San Francisco?
Great advice! I recommend Lori Gottlieb’s “Marry Him” for more on what standards are appropriate (it’s aimed at hetero women but I found it useful as a hetero man), and Logan Ury’s “How Not to Die Alone” for more on a number of these topics.
Thank you for doing this!
My questions:
- Where did the time come from? What activities did you have to give up? How did that feel, emotionally?
- How did this change in going from one kid to two?
(I say this as someone who:
- can't really imagine working less and still being reasonably successful in my current line of work
- needs a certain amount of sleep to be productive and happy
- has a life full of other things that bring me joy and feel important to me
At the same time, I have a strong felt sense that I would like to have a child. So I am currently betting that I will find the time mainly by cutting out most of my time with friends / other leisure activities, and that the meaning and joy of raising a child will make this worthwhile. But I worry that I will feel resentful / inclined to prioritize my needs over my child’s fullest flourishing.)
Siebe, thanks for this, and sorry to hear you’re suffering from long COVID! Would you be open to posting a link to this on LessWrong? I think the analysis would be of personal interest to many there, independent of its merits as a cause area.
I’ve been thinking a lot lately about whether I want to have kids given I also want to have a life beyond my kids, and this was very helpful!
As someone who is not very empathetic by nature, I found Authentic Relating practice (check out, for example, www.authrev.org) very helpful for cultivating empathy, as it literally focuses on and trains “getting someone else’s world.” It also trains awareness of and ability to share your own emotional and somatic experience, which is central to emotional intelligence more broadly. I liked it because it was fun—it felt very connecting (I would leave events with a feeling similar to having cuddled with people, even when no cuddling had taken place—oxytocin something something), and I found exploring my own and others’ experience to be rich and interesting.
Nonviolent Communication (NVC) (check out the book by Marshall Rosenberg, and also the online pay-what-you-want Communication Dojo classes by Newt Bailey) is also very helpful for both empathy and emotional intelligence, as it systematically cultivates empathy for your own and others’ feelings and the needs behind those feelings. It was less rewarding as a recreational activity than Authentic Relating, but Newt is hilarious and lovely, so his classes are really fun (and you can attend them on a one-off basis without committing to a series).
Both those practices have had a major impact on my ability to navigate relationship challenges (romantic and familial) with less anger and irritation and more success, as well as my own well-being outside of a relational context.
Lastly, I’ve read in passing (https://docs.google.com/document/d/1nrHi6vTRJI_MELW_gtTiEaaYwK8I82ytMIpHbmM0KNY/edit?usp=drivesdk) that metta (loving kindness) meditation improves empathy. In my experience, it does cultivate a feeling of warm friendliness and care towards other people, but is less helpful in providing insight, compared to the other practices I mentioned. My experience with it is more limited, though. A neat bonus is that once you get some experience with it, it’s like a “muscle” you can use in situations where you might otherwise feel anxious or irritable—e.g. I’ve silently wished people loving kindness while going through an airport, listening to a crying baby on a plane, in an annoying or frustrating meeting at work, and before parties where I didn’t know too many people, and it really improved the way I felt. It can also feel more immediately rewarding than some other types of meditation.
All these sound super hippie, I grant you, and you may have to hold your nose initially if you’re allergic to that sort of thing (I did), but they’re well worth it.
You may want to look at Teach for America and Venture for America as potential models.
This was really great. As someone who has been lurking around LW/EA Forum for a few years but has never found reading the Sequences the highest-return investment compared to other things I could be doing, I very much appreciate your writing it.
A thought on something which is probably not core to your post but worth considering:
You said:
The dream behind the Bayesian mindset is that I could choose some set of values that I can really stand behind (e.g., putting a lot of value on helping people, and none on things like “feeling good about myself”), and focus only on that. Then the parts of myself driven by “bad” values would have to either quiet down, or start giving non-sincere probabilities. But over time, I could watch how accurate my probabilities are, and learn to listen to the parts of myself that make better predictions.
I think it’s perhaps not feasible, or has long-term side effects, to think that if you currently care about feeling good about yourself, you can just decide you don’t and jump immediately to ignoring that need of yours. I would predict that taking that approach is likely to result in resistance to making accurate predictions or doing the things you endorse valuing, and/or mysterious unhappy emotions because your need to feel good about yourself is not being met.
It seems to me that it would be better to use some method like Internal Double Crux to dialogue between the part of you that wants to generate good feelings by generating skewed predictions and the part of you that wants to help people, and find a way to meet the former part’s needs that doesn’t require making skewed predictions. An example of such an approach could be feeling good about yourself for cultivating more accurate predictions. I imagine that’s implicit in the approach you describe, but it may be more effective to hold explicit space for the part that wants to feel good about itself, rather than making it wrong for generating skewed predictions.
Makes sense, thanks! It may be worth highlighting that more proactively when you do outreach within EA (and there may be nuanced ways to communicate that even generally).
“Our approach has similarities with that followed by charity analysis organisations like GiveWell and Founders Pledge.”
To put it bluntly, why should someone go to (work for, consult the recommendations of, support) SoGive vs other leading organizations you mention? Does your org fill a neglected niche, or take a better approach somehow, or do you think it’s just valuable having multiple independent perspectives on the same issue?
There are non-profit consultancies like FSG, Bridgespan, Dalberg, and the tiny Redstone Strategy Group which do this sort of work. I believe they themselves are for-profit and so charge significant fees. I’m not familiar with anything within EA, but then I am somewhat on the periphery of EA, so there could well be something that exists. Agree that this seems like an intriguing place for organizations with EA expertise to add value!
Here’s a direct link to the form for people who don’t want to hunt through the Twitter thread: https://docs.google.com/forms/d/e/1FAIpQLSfitym3vRQKDjEMNaK3j5D7SCYVbIhBruIMClUaK0DkP9uO-g/viewform
Thank you for this post and the context on the credibility and impact of this effort!
I like Designing Your Life by Bill Burnett—one of the best books I’ve read on the topic.
Given the recent post on the marketability of EA (which went so far as to suggest excluding MIRI from the EA tent to make EA more marketable—or maybe that was a comment in response to the post; I don’t remember), a brief reaction from someone who is excited about Effective Altruism but has various reservations. (My main reservation, so you have a feel for where I’m coming from, is that my goal in life is not to maximize the world’s utility, but, roughly speaking, to maximize my own utility and end-of-life satisfaction, and therefore I find it hard to get excited about theoretically utility-maximizing causes rather than donating to things I viscerally care about. I know this will strike most people here as incredibly squishy, but I’d bet that much of the public outside the EA community has a similar reaction, though few would actually come out and say it.)
I like your idea about high-leverage values spreading.
The ideas about Happy Animal Farm / Promoting Universal Eudaimonia seem nuts to me, so much so that I actually reread the post to see if this was a parody. If it gains widespread popularity among the EA movement, I will move from being intrigued about EA and excited to bring it up in conversation to never raising it in conversations with all but the most open-minded / rationalist people, or raising it in the tone of “yeah these guys are crazy but this one idea they have about applying data analysis to charity has some merit… Earning to Give is intriguing too....” I could be wrong, but I’d strongly suspect that most people who are not thoroughgoing utilitarians find it incredibly silly to argue that creating more beings who experience utility is a good cause, and this would quickly push EA away from being taken seriously in the mainstream.
The humane insecticides idea doesn’t seem AS silly as those two above, but it places EA in the same mental category as my most extreme caricature of PETA (and I’m someone who eats mostly vegan, and some Certified Humane, because I care about animal welfare). I don’t think insects are a very popular cause.
Just my 2 cents.
Thanks for the thoughtful reply. That makes sense. I don’t consider myself an EA, and read EA Forum 80% out of intellectual interest, 20% out of altruistic motives, so I’ll leave my end of the conversation here (and perhaps subscribe to your blog!), but from the upvotes on your suggestion of a blog update, seems like it met with significant interest among EA Forum readers, so I’d encourage you to do that!