Thanks for writing :)
I see the “narcissism of small differences” dynamics already coming up subtly between EA sub-groups. I see some resentment toward the Bay Area rationalists and similar circles.
Also, I found the tech firm example helpful, and wouldn’t be surprised if other social movements became increasingly guarded against or dismissive of EA’s aims because its philosophy is so captivating and its outreach to top students is so aggressive.
I wonder how you imagine EA outreach looking different. Do you think it should be slower?
I’m not sure exactly what I think, but I want it to be the case (and have the intuition) that it’s best for us to be teaching students everything their university should have taught them. Part of that is how to make a difference in the world using an “EA mindset”, but it’s also emotional intelligence, how to collaborate without hierarchy, how to hold multiple mindsets usefully, and how to understand and work with oneself.
I have not!
But I would guess that about the closest you can get is doing user interviews (or surveys, though I don’t think you could get many people to fill them out) several months out, and just asking people how they think the event affected them and how counterfactual they think that impact was. I think people have good enough insight here for this to get you most of the valuable information. My first EAG was the difference between me working at an EA org and being a software engineer. My most recent EAG did almost nothing for me, on reflection, even though I made new connections and rated it very highly.
I think just asking this directly probably gets us closer than trying to assign each particular event its portion of the impact, even though I agree that in reality the picture is much more complicated than this.
And if anyone has ideas on how to do better impact analysis on events than this, PLEASE tell me. But I think this is already a huge improvement on my sense of what the default impact analysis for EA events is, and anything more complicated won’t give us too much more information.
I totally agree here if we are talking about giving people the best experience. A lot of what we want to do is facilitate friendships that will support people long-term in their motivation and in making big, potentially impactful decisions about their career or life.
I also worry about feedback loops here, and how it’s easiest to optimize for people giving you good reviews at the end of your event, which means optimizing for people’s happiness over everything else.
I’d be very excited about events and retreats that more consistently do follow-ups 1-12 months after the event so we can see what really impacted and supported people. I’m guessing a lot of it is vibes, but it could be a lot less than I currently think (my position is currently similar to yours). There are big impactful wins to be had that optimizing for people’s well-being will likely not get us to.
For more on this you can check out similar thoughts from this forum post on why CFAR didn’t go as well as planned, or Andy Matuschak’s thoughts on “enabling environments”.
Just want to say that I really appreciate this post and keep coming back to it :)
Learning by writing in groups
Thank you for writing this! This helped me understand my negative feelings towards long-termist arguments so much better.
In talking to many EA university students and organizers, I’ve found that many of them have serious reservations about long-termism as a philosophy, but not as a practical project, because long-termism as a practical project usually means “don’t die in the next 100 years”, which is something we can pretty clearly make progress on (and that matters, since the usual objection is that maybe we can’t influence the long-term future).
I’ve been frustrated that in the intro fellowship and in EA conversations we must take such a strange path to something so intuitive: let’s try to avoid billions of people dying this century.
The case for working on animal welfare over AI / X-risk
https://docs.google.com/document/d/1gk2vVgp6NJf15rGr9R_H68DGwpKgIPUvhkk7DCqUbL4/edit?usp=sharing
Sorry about that and thanks for pointing this out :)
Akash will update this soon!
Thanks for writing this post. :)
I like how you accept that a low-commitment reading group is sometimes the best option. I think one of the ways reading groups go wrong is when you don’t put in the intentional effort or accountability to get everyone to actually read, but you still expect them to, even though you’re unsurprised when they don’t. But then, because you wish they had read, you still run the discussion as if they’re prepared. You end up in the awkward situation you talked about, where people don’t speak because they don’t want to blatantly reveal they haven’t read.
I love and appreciate these suggestions! I’ll be stealing the idea of copying readings into Google Docs and am super excited about it.
Thanks for writing this post :)
It seems like one of the main factors leading to your mistakes was the way ideas get twisted as they echo through the community, and the way epistemic humility turns into deference to experts. I especially resonated with this:
I did plenty of things just because they were ‘EA’ without actually evaluating how much impact I would be having or how much I would learn.
As a university organizer, I see that nearly all of my experience with EA so far is not “doing” EA, but only learning about it. Not making impact estimates myself and then comparing to experts, but being anchored to experts’ answers from the start. It’s very much like university. You learn the common arguments and “right” answers, and even though you’re encouraged to discuss and disagree, everyone pretty much knows what the teacher or facilitator wants you to say.
I like your plans to further consider what you think about how to help others best and your own cause prioritization. That’s what I’m trying to do right now too :)
But I’m curious why neither of us did this earlier. EAs often say they want you to figure things out for yourself, but there is also so much deference and respect toward the experts that I think it becomes scary to say what you actually think, when everyone has a pretty good idea of what you’re supposed to think and how epistemically humble you’re supposed to be.
Do you have any thoughts on how to better encourage people to build their own views in EA? Or what would have made your past self do that?
[Question] Do you have an example impact calculation for a high-impact career?
Hey Ozzie, I’ve thought about this a little before and wrote about it here if you’re interested! :)
This is really exciting! You could try reaching out to Coinbase to get listed as an organization on this page: https://www.coinbase.com/learn/crypto-basics/how-to-donate-crypto
Just wanted to let you know that this is extraordinarily helpful for me right now as I plan my first retreat. Thanks, Jessica!
That’s great! I have to decide by Thursday, so I’ll let you know what we’re working on :).
Definitely nothing larger than a few gigabytes, I would say. I’m pretty new to data science and we’re using simple methods in this project, so I’m guessing we’ll want to do a relatively simple regression or classification analysis on a fairly simple (and maybe small) dataset.
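To give a rough sense of what I mean by “relatively simple”, here’s a minimal sketch assuming a small tabular dataset. The column names and values are made up for illustration, and scikit-learn’s LinearRegression just stands in for whatever method we actually end up using:

```python
# Minimal sketch of a simple regression on a small tabular dataset.
# The columns and values below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# In practice this would be loaded from a CSV; here it's a tiny made-up table.
df = pd.DataFrame({
    "hours_engaged": [2, 5, 1, 8, 3, 6, 4, 7],
    "prior_involvement": [0, 1, 0, 1, 1, 0, 1, 1],
    "outcome_score": [1.2, 3.8, 0.9, 6.1, 2.7, 4.0, 3.1, 5.5],
})

X = df[["hours_engaged", "prior_involvement"]]
y = df["outcome_score"]

# Hold out a small test set so we get some sense of generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```

Something at roughly this level of complexity, just on whatever dataset we end up with.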
[Question] EA datasets
The Place of Stories Within EA: “The Egg”
Thank you for sending this Sofia, I’m glad you decided to inappropriately answer my question!
This is great, I really appreciate you writing it. I just took a couple of months of vacation and basically did what Alice said. Any readers, feel free to DM me if you’d like to discuss these feelings and what you might do about them :))