Founded Giving What We Can Oxford (now EA Oxford) with Max D. (2010)
Founding team of CEA (now Effective Ventures) (2011)
Ran The Life You Can Save with Peter Singer (2012)
Graduated (2013)
Ventured out of the EA bubble (2014-2017)
Ran EA London with David Nash (2018)
Had some fun with Daoism (2019-2020)
Supporting EAs in various ways (2021+)
Holly Morgan
With apologies for not managing to be quite as eloquent/professional as the others: I have nothing but love, respect and gratitude for you, Nick; you’ve always been so warm, insightful and supportive. I may always think of you primarily as one of the three founding pillars of CEA/EV, but I’m excited to see what you do next :-)
I like this post so much that I’m buying you a sandwich (check your email).
Thank you for sharing! I love hearing “origin stories” from like-minded people and I found this post both clear and inspiring :-)
There’s also an EA for Christians group—if you haven’t already come across them, might be worth checking out!
Thanks for adding a bio, Wes, and welcome!
Feel free to reach out to me for any “help with the 8-week course on 80,000 hours” :-)
Thanks Toby—so, so exciting to see this work progressing!
One quibble:
The value of advancements and speed-ups depends crucially on whether they also bring forward the end of humanity. When they do, they have negative value
...when the area under the graph is mostly above the horizontal axis?
Even if you assign a vanishingly small probability to future trajectories in which the cumulative value of humanity/sentientkind is below zero, I imagine many of the intended users of this framework will at least sometimes want to model the impact of interventions in worlds where the default trajectory is negative (e.g. when probing the improbable)?
Maybe this is another ‘further development’ consciously left to others, and I don’t know how much the adjustment would meaningfully change things anyway—I admit I’ve only skimmed the chapter! But I find it interesting that, for example, when you include the possibility of negative default trajectories, the more something looks like a ‘speed-up with an endogenous end time’ the less robustly bad it is (as you note), whereas the more it looks like a ‘gain’ the more robustly good it is.
I also hope that some of the (what I perceive to be) silent majority will chime in and demonstrate that we’re here and don’t want to see EA splintered, rebranded, or otherwise demoted in favor of some other label.
🙋‍♀️
This is one of my favourite posts on this forum and I imagine the large majority of EAs I know IRL would largely agree with it (although there’s definitely a selection bias there). Thank you! I feel like there have been several moments in the past year or so where I’ve been like, “Man, EA NYC seems really cool.”
Re “best EA win,” I couldn’t pick a favourite but here’s one I learnt a few hours ago: Eitan Fischer—who I remember from early CEA days when he founded Animal Charity Evaluators—now runs the cultivated meat company Mission Barns. The Guardian says, “[A] handful of outlets have agreed to stock its products once they are approved for sale.” 🥳
All done :-) (already had a solar/crank charger+radio). Thank you!
Huh, maybe not.
Might be worth buying a physical copy of The Knowledge too (I just have).
And if anyone’s looking for a big project...
If we take catastrophic risks seriously and want humanity to recover from a devastating shock as far and fast as possible, producing such a guide before it’s too late might be one of the higher-impact projects someone could take on.
That was my first thought, but I expect many other individuals/institutions have already made large efforts to preserve such info, whereas this is probably the only effort to preserve core EA ideas (at least in one place)? And it looks like the third folder—“Non-EA stuff for the post-apocalypse”—contains at least some of the elementary resources you have in mind here.
But yeah, I’m much more keen to preserve arguments for radical empathy, scout mindset, moral uncertainty etc. than, say, a write-up of the research behind HLI’s charity recommendations. Maybe it would also be good to have an even smaller folder within “Main content (3GB)” with just the core ideas; the “EA Handbook” (39MB) sub-folder could perhaps serve such a purpose in the meantime.
Anyway, cool project! I’ve downloaded :)
Asking for a friend—will email now :)
So exciting, thank you!! And what a team!
Quick question: Do you know if you can provide funding for studies e.g. PhDs?[1]
- ^ The website sounds promising: It says you’ve already provided funding for a “PhD salary supplement” and also “We support regrants to registered charities and individuals. For-profit organizations may also be eligible, pending due diligence. As a US-registered 501c3, we do not permit donations to political campaigns.” But I think that funding tuition fees can sometimes be a bit trickier...
And Claim (46) seems plausible but uninteresting, given that “Scholars of the American movement find that [nonhuman animal rights] activists are overwhelmingly women at about 80 per cent (Gaarder 2011).”
Some thoughts from me (as a big fan of MoreGood):
I really don’t like the name MoreGood. It’s a direct callback to LessWrong. I don’t want to have to endorse LW to endorse EAF, or EA more generally, or the causes we care about, and this name change would signal that. Yes, there’s some shared intellectual history, but I don’t think LW-rationalism is inherent to or necessary for EA.
I don’t think it would signal this to many people.
People new to or interested in EA will probably search for “EA” or “Effective Altruism”, so after a rebrand they wouldn’t find the forum unless there were a way to preserve the old name for SEO.
To me this is a feature, not a bug. I personally think having a slightly higher barrier to entry (you have to be engaged enough to have found the forum via other means than the first page of Google results) would do this forum good overall.
I think EA Forum is fine, and it is the major place for EA discussion online at the moment. I don’t think it’s that representative of EA?
I think having a very descriptive name is probably not worth the increase in the number of times this forum gets quoted with more apparent authority than it actually has. [Edit: This is quite theoretical. These are the only actual examples I can think of right now and they’re basically fine.]
Any other online forum will also be skewed towards those online or ‘extremely online’. I think EA Twitter is much worse for this than the Forum.
Agreed. It’s still a downside to me that a less clear name means that there’ll be more fairly engaged EAs who end up with just Twitter etc. to discuss EA online.
In the spirit of do-ocracy, there’s no reason that other people can’t set up an alternative forum with a different focus/set of norms, though it will probably suffer from the network effects that make it difficult to challenge social media incumbents.
Sure, but the name change would make people feel more empowered to? (And I’m undecided on whether more forums would be good or bad.)
I definitely think there’s a “generational” thing here. For those of us who’ve been around long enough to see how everything came from nothing but people doing things they thought needed to be done, it’s perfectly obvious. But I can very much see how if you join the community today it looks like there are these serious, important organizations who are In Charge. But I do think it’s still not really true.
+1.
I was slow to realise that, over the period of just a few years of growth, this bunch of uncertain, scrappy, loosely coordinated students had come to be seen as a powerful established authority and treated accordingly. I think many others have been rather slow to notice this too and that that’s been a big source of confusion and tension as of late.
Oh I didn’t read Will as proposing multiple forums (although what he says is compatible with that proposal).
I thought he was saying that the name should better reflect how representative the forum is of EA thought at large. (The ‘decentralisation’ aspect being moving from the impression of ‘This forum is the main hub of all EA thought’ to ‘This forum is the main hub of Extremely Online EA thought’.)
This is honestly the best idea I’ve heard in a long time!
“It might be helpful for there to be a summary post outlining the different investigations/projects that are aiming to ‘implement reforms at EA organizations.’”
Joris P mentions this in another comment: https://forum.effectivealtruism.org/posts/KTsaZ69Ctkuw6n4tu/overview-reflection-projects-on-community-reform
HTH
I think it would be nice to know the marginal value of my personal spending, of increasing my reserves, and of donating.
I found this discussion of how much to save vs donate helpful when I was reviewing my finances recently.
I like CEA’s timely addition last summer of collaborative spirit to the other three values you have here (which they called impartial altruism, prioritization, and open truthseeking).