Cool! For a brief second, I thought this post was going to be an extremely long list of the trillions of sentient beings my actions could influence, but this is much more digestible.
Quick take:
Yes, the sample I looked at did have some very expensive retreats, and I think you can run much cheaper ones. Note though that EAGx events / conferences can also be much cheaper so you should adjust on both sides (I think I had a cost-inflated sample due to high spending on community building in 2022)
I still think outcomes-per-person at retreats often don’t seem that different from those at larger events, so returns to scale are often real, i.e. focus on cost-per-attendee. If your theory of change involves helping lots of people you don’t know well find careers in EA/AIS, I think going bigger is usually a good move.
Retreats are definitely a useful intervention, especially when you have a smaller group whose needs/goals you know well (e.g. more involved community members looking to go deeper on topics).
I had a reminder to check back on this. After a quick scan, I don’t think this happened. Joe’s post probably meets the bar, and does suggest it’s still a contentious issue, but I can’t find 9+ more, so it’s not as contentious as you predicted :)
Awesome line-up, nice work!
Starting afresh seems like the right move here, and I think it’s super commendable to share that you’re re-committing.
I have the same problem when it comes to end of year donations, and that prompted me to move to monthly donations (even if the idealized version of me would save accordingly and then make bigger donations more thoughtfully at the end of the year).
Also:
In total, I’ve given about half of what I pledged since 2016.
This is still a lot of money, and a lot of good. Giving 5% of your income to charity for almost 10 years is a hugely generous and selfless thing to do :)
Cool! FYI when I open your home page on a large monitor, the “Log In” and “Blog” buttons overlap with each other (fine on small screens).
You might still want more on the margin, but I think this already happens a fair amount in EA:
Fellowships are very common—the amount of freedom seems to vary but many (e.g., GovAI, historically FHI) involve giving fellows several months or up to a year to explore topics with limited oversight.
EA funders often give unrestricted funding, frequently to individuals to pursue projects (see e.g. EA Funds grants, ACX grants, Manifund).
Compared to other social/intellectual movements, EA is (still) well-funded. I expect many other non-profits / activist groups / academic institutions would be amazed at how many people in EA are paid well to think and write about relevant topics with a lot of freedom.
Once you register for the event, you’ll be invited to Swapcard, our event platform which has all of this information :) DM me if you have any issues.
+1. Bumping my quick take listing where some of the people who participated in my EA uni group while I was there ended up (this is now considerably out of date, but AFAIK these people remain on paths that seem super impactful)
It looks like the post in question is now tagged ‘Community’.
That quote seems taken out of context. I don’t know the passage (stagnation chapter?), but I don’t think Will was making that point in relation to what kind of skillset the EA community needs.
This is a great post!
> ITN estimates sometimes consider broad versions of the problem when estimating importance and narrow versions when estimating total investment for the neglectedness factor (or otherwise exaggerate neglectedness), which inflates the overall results

I really like this framing. It isn’t an ITN estimate, but a related claim I think I’ve seen a few times in EA spaces is:
“billions/trillions of dollars are being invested in AI development, but very few people are working on AI safety”
I think this claim:
Seems to ignore large swathes of work geared towards safety-adjacent things like robustness and reliability.
Discounts other types of AI safety “investments” (e.g., public support, regulatory efforts).
Smuggles in a version of “AI safety” that actually means something like “technical research focused on catastrophic risks motivated by a fairly specific worldview”.
I still think technical AI safety research is probably neglected, and I expect there’s an argument here that does hold up. I’d love to see a more thorough ITN on this.
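To make the inflation mechanism concrete, here’s a rough sketch of the standard importance × tractability × neglectedness decomposition (the 80,000 Hours-style framing, as I understand it; the 100× figure below is a purely illustrative assumption, not something from the post):

$$
\frac{\text{good done}}{\text{extra \$}}
\;\approx\;
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{neglectedness}\;\approx\;1/\text{current resources}}
$$

If importance is scored on the broad problem (“make AI go well”) while the resource count in the neglectedness term only covers the narrow one (x-risk-motivated technical research), and the broad problem attracts, say, 100× the resources of the narrow one, the product ends up inflated by roughly that factor.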
By my count, barring Trajan House, it now appears that EA has officially been annexed from Oxford
Do you mean Oxford University? That could be right (though a little strong, I’m sure it has its sympathisers). Noting that Oxford is still one of the cities (towns?) with the highest density of EAs in the world. People here are also very engaged (i.e. probably work in the space).
I assumed the main reason for doing something like that is to get people engaged and actually thinking about ideas
I don’t know what motivations people usually have, but I also feel skeptical of this vague “activation” theory of change. If session leads don’t know what actions they want session participants to take, I’m not optimistic about attendees generating useful actions themselves by discussing the topic for 10 minutes in a casual no-stakes, no-rigour, no-guidance setting. I’m more optimistic if the ask is “open a doc and write things that you could do”.
I would do a meeting of people filtered for being high context and having relevant thoughts, which is much more likely to work.
Yep, the thing you’ve described here sounds promising for the reasons Alex covered :) I realise I was thinking of the conference setting in my critique here (and probably should’ve made that explicit), but I’m much more optimistic about brainstorming in small groups of people with shared context, shared goals and using something like the format you’ve described.
It’s not clear that EA funding relies on Facebook/Meta much anymore. The original tweet is deleted, and this post is 3 years old, but Holden wrote of Cari and Dustin’s wealth:
I also note that META stock is not as large a part of their portfolio as some seem to assume
You could argue Facebook/Meta is what made Dustin wealthy originally, but it’s probably not correct to say that EA funding “deeply relies” on Meta today.
Yep, I think this is right, but we don’t totally rely on these kinds of surveys!
We also conduct follow-up surveys to check what actually happens a few months after each event, and unsurprisingly, you do see intentions and projects dissipate (as well as many materialising). A problem we face is that these surveys have much lower response rates.
Other, more reliable evidence about the impact of EAG comes from surveys that ask people how they found impactful work (e.g., the EA Survey, Open Phil’s surveys), in which EAG is cited a lot. We’ll usually turn to this kind of evidence to think about our impact. End-of-event feedback surveys are still useful for feedback about content, venue, catering, attendee interactions, etc., and you can also do things like discounting reported impact in end-of-event surveys using follow-up survey data.
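As a toy illustration of that last point, here’s a minimal sketch of what discounting could look like (all numbers and variable names are hypothetical, not our actual pipeline):

```python
# Toy sketch: discount end-of-event reported impact using follow-up survey data.
# All figures are made up for illustration.

end_of_event_reports = 200   # attendees reporting a valuable connection/plan at the event
follow_up_responses = 60     # attendees who answered the follow-up survey months later
follow_up_confirmed = 36     # of those, how many said the outcome actually materialised

# Realisation rate estimated from the follow-up sample
realisation_rate = follow_up_confirmed / follow_up_responses  # 0.6 here

# Naive adjustment: assume follow-up respondents are representative.
# (Low response rates mean they probably aren't, so treat this as a rough estimate.)
discounted_outcomes = end_of_event_reports * realisation_rate

print(f"Discounted estimate: ~{discounted_outcomes:.0f} of {end_of_event_reports} reported outcomes")
```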
I’m reading “OK” as “morally permissible” rather than “not harmful”. E.g., I think it’s also “OK” to eat meat, even though I think it’s causing harm.
(Not saying you should clarify the poll, it’s clear enough and will probably produce interesting results either way!)
I thought this was a great post, thanks for sharing! I think you’re unusually productive at identifying important insights in ethics and philosophy, please keep it up!
I think most answers here are missing what seems like the most likely explanation to me: the people who are motivated by EA principles to engage with politics are not public about their motivations or affiliations with EA. This isn’t just because the EA brand is disliked by some political groups; it also seems generally wise to avoid strong ideological identities in politics beyond motivations like “do better for my constituents”.