There are many ways to reduce existential risk. I don’t see any good reason to think that reducing small chances of extinction events has better EV than reducing higher chances of smaller catastrophes, or even just building human capacity in preferentially non-destructive ways. The arguments that we should focus on extinction have always boiled down to ‘it’s simpler to think about’.
I think we just agree. Don’t donate to politics unless you’re going to be smart about it.
Thanks a lot for the good vibes as always!
The part with the environmental advocate was totally unexpected. Ronny and I talked about EA at length but I guess they decided to cut it and present “the other side” as well.
I’m working on something similar. See https://impactlist.vercel.app/ for a very early demo. Don’t take the effectiveness ratings that seriously yet—I’ve just done very shallow research using LLMs so far. The aim is not to measure how wealthy people would be if they never donated to charity, but how much good billionaires have done with their charitable donations.
I originally posted about it on this forum a couple years ago (https://forum.effectivealtruism.org/posts/LCJa4AAi7YBcyro2H/proposal-impact-list-like-the-forbes-list-except-for-impact) but didn’t start working on it seriously until this month.
Currently looking for volunteers (researchers and React devs). Here’s the discord: https://discord.gg/6GNre8U2ta.
You’re right to flag the risks of introducing pay gates. I agree it would be a mistake to charge for things that are currently core to how people first engage, especially given how many people first get involved in their 20s when finances are tight.
I think the case for a supporter membership model rests on keeping those core engagement paths free (intro courses, certain events, 1-1 advice, etc.), while offering membership as an optional way for people to express support, get modest perks, and help fund infrastructure.
I also think the contrast you draw between the two (mountaineering clubs = self-benefit, EA = other-benefit) is too simplistic. Most people who get involved in EA do so because they want to become more effective at helping others. That’s a deeply personal goal. They benefit from gaining clarity, support, and a community aligned with their values. EA resources serve them, not just the ultimate beneficiaries.
Likewise, mountaineering clubs aren’t purely self-serving either — they invest in safety standards, trail access, training, and other mountaineering public goods that benefit non-members and future members.
In both cases, people pay to be part of something they value, which helps them grow and contribute more, and then the thing they value ends up growing as well.
Yes, but if at some point you find out, for example, that your model of morality leads to the conclusion that one should kill all humans, you’d probably conclude that your model is wrong rather than actually go through with it.
It’s an extreme example, but at bottom every model is an approximation stemming from our internal moral intuitions. Be it that life is better than death, happiness better than pain, satisfying desires better than frustrating them, or that following god’s commands is better than ignoring them, etc.
Thanks for this comment, Jason; you raise a valid point. I do think this will be challenging — part of the motivation behind this bounty was that my friends and I didn’t think we could facilitate large-scale corporate donation matching through our own connections. However, there is some precedent for companies matching non-staff funds, and I can imagine a few theories of success:
Last year, Walmart launched a campaign to match up to $2.5M in customer donations to the Red Cross. Amazon, Google, Facebook and others have run similar matching programs for specific nonprofits that are open to the public. PayPal has run public Giving Tuesday donation matching campaigns similar to Facebook’s old Giving Tuesday format.
There is a law in India requiring certain companies to donate at least 2% of net profits to registered nonprofits, and many US companies set aside a pot of resources (in tech, often 1% of profits) for giving. So in some cases, CSR leadership merely decides how to allocate existing resources for corporate philanthropy (though in other cases, encouraging a company to give could grow the total amount it donates). With low confidence, I would guess that causes are often chosen by a single passionate senior leader at a company or by an internal champion persuading a small group of senior decision-makers.
I’m unsure about the extent to which retail donors’ matching funds incentivize companies to give, but I would guess that it helps at least somewhat. Executives might be excited by a $10M company-branded initiative that only “costs” the company $5M. I also think there are ways to frame retail donor co-funding as coming from community members or customers, rather than from strangers.
I don’t have a view on whether it’s more tractable to persuade companies to pledge a pot of matching funds or to change their employee donation matching policy. I could imagine that an initiative allowing employees to lend their unused donation match to an org-wide giving event would be meaningful to many staff members, and could lead to greater org-wide morale gains than a “use-it-or-lose-it” donation match policy. I could also imagine that certain early-stage startups may be open-minded about designing their donation match to include a public matching component.
Ultimately, I’m unsure about how realistic this idea is or how likely the bounty is to get claimed. But we wanted to offer this bounty in case it might cause someone reading this post to work some magic and grow the set of resources going to great nonprofits!
I quite liked this! I thought the part where the environmental advocate was like “well actually I do think animal suffering is important” was kind of hilarious + wholesome, and also I admire them for being willing to agree here despite their other reservations about EA. <3
Nice job @Andres Jimenez Zorrilla 🔸 and all! Proud to be a part of the “we look at numbers + care about shrimp” club :)
The EA movement is chock-full of people who are good at programming. What about open-sourcing the EA Forum source code and outsourcing development of new features to volunteer members who want to contribute?
I find searching for in-depth content on the EA Forum vastly better than on Reddit, and not just for EA topics. There are a few academic-ish subreddits that I like and will search when I’m interested in what the amateur experts think on a given topic. Finding relevant posts is about the same on Reddit, but finding in-depth comments and related posts is very hard; I usually have to do some Google magic to make that happen.
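For illustration, the kind of “Google magic” I mean is a site-restricted search; the subreddit and keywords below are just hypothetical examples:

```
site:reddit.com/r/AskHistorians "primary sources" inurl:comments
```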
Also, on rare occasion I end up liking a person’s writing style or thinking methods and want to deep-dive into what else they’ve written about. On the EA Forum, nearly 100% of what I find will be at least tangentially related to things I care about. On Reddit, it’s more likely I’ll have to sift through lots of hobbyist content like sports, since it’s more of a “bring your whole self” platform.
If your AI work doesn’t ground out in reducing the risk of extinction, I think animal welfare work quickly becomes more impactful than anything in AI. X-risk reduction can run through more indirect channels, of course, though indirectness generally increases the speculativeness of the x-risk story.
Another disadvantage of moving to Reddit is that it would give the existing material on the EA Forum (which includes a lot of good stuff) less visibility (even though it would presumably stay online).
Overall I’d prefer the EA Forum to continue to exist.
Hi Nick,
Thanks for reaching out and for your interest in these grants. We’re currently working on grant write-ups that we plan to publish soon, but unfortunately, we don’t have anything available to share just yet.
Quick thoughts:
I appreciate the write-up and transparency.
I’m a big fan of engineering work. At the same time, I realize it’s expensive, and it seems like we don’t have much money to work with these days. I think this makes it tricky to find situations where it’s clearly a good fit with the existing donors.
Bigger-picture, I imagine many readers here would have little idea of what “new engineering work” would really look like. It’s tough to do a lot with a tiny team, as you point out. I could imagine some features helping the forum, but would also expect many changes to be experimental.
“Everyone moving to Reddit at once” seems doomed to me, as you point out. But I’d feel better about a more gradual approach. Maybe we could have someone try moderating an EA subreddit for a few months and see if we can make it any better first. “Transitioning the EA Forum” could come very late, only if we’re able to show good success on a smaller scale.
That said, I’m skeptical of Reddit as a primary forum. I don’t know of other smart academic-aligned groups that have really made it their official infrastructure. It seems to me like subreddits are often branches of the overall Reddit community, which is quite separate from the EA community, so it will be difficult to carve out the slice we want. I’d feel better about other paid forum providers, if we go the route of shutting down the EA Forum.
I think that the EA Discords/Slacks could use more support. Perhaps we shouldn’t try to have “One True Platform”, but instead a variety of platforms that work for different sets of people.
As I think about it more, it seems quite possible that many of the obvious technical improvements for the EA Forum, at this point, won’t translate nicely into user growth. It’s just very hard to make user growth happen, especially after a few years of tech improvements.
I think the EA Forum has major problems with scaling, and that this is a hard tech problem. It’s hard to cleanly split the community into sub-communities (I know there have been some attempts here). So right now we have the issue that we can only really have one internet community (to some extent), and this scares a bunch of people away.
Personally, what feels most missing to me around EA online is leadership/communication about the big issues, some smart + effective moderation (this is really tough), and experimentation on online infrastructure outside the EA Forum (see Discords, online courses, online meetups, maybe new online platforms, etc.). I think there’s a lot of work to do here, but would flag that it’s likely pretty hit-or-miss, which may make it a more difficult ask for funders.
Anyway, this was just my quick take. Your team obviously has a lot more context.
I’m overall appreciative of the team and of the funders who have supported it for this long.
But we should probably be mostly sad that the ideas have largely not seeped into the public consciousness over the last 14 years.
I kinda like that we’re back (so back?) to “a new movement called effective altruism”.
Of course they’re going for the easy jokes. It’s a comedy show. I’m glad EA is getting more widespread, mainstream exposure.
I’ve now spoken to ~1,400 people as an advisor with 80,000 Hours, and if there’s one quick thing I think is worth more people doing, it’s a short reflection exercise on one’s current situation.
Below are some (cluster of) questions I often ask in an advising call to facilitate this. I’m often surprised by how much purchase one can get simply from this—noticing one’s own motivations, weighing one’s personal needs against a yearning for impact, identifying blind spots in current plans that could be triaged and easily addressed, etc.
A long list of semi-useful questions I often ask in an advising call
Your context:
What’s your current job like? (or like, for the roles you’ve had in the last few years…)
The role
The tasks and activities
Does it involve management?
What skills do you use? Which ones are you learning?
Is there something in your current job that you want to change, that you don’t like?
Default plan and tactics
What is your default plan?
How soon are you planning to move? How urgently do you need to get a job?
Have you been applying? Getting interviews, offers? Which roles? Why those roles?
Have you been networking? How? What is your current network?
Have you been doing any learning, upskilling? How have you been finding it?
How much time can you find to work toward a job change? Have you considered e.g. a sabbatical or going down to a 3- or 4-day week?
What are you feeling blocked/bottlenecked by?
What are your preferences and/or constraints?
Money
Location
What kinds of tasks/skills would you want to use? (writing, speaking, project management, coding, math, your existing skills, etc.)
What skills do you want to develop?
Are you interested in leadership, management, or individual contribution?
Do you want to shoot for impact? How important is it compared to your other preferences?
How much certainty do you want to have wrt your impact?
If you could picture your perfect job – the perfect combination of the above – which of these would you relax first in order to consider a role?
Reflecting more on your values:
What is your moral circle?
Do future people matter?
How do you compare problems?
Do you buy this x-risk stuff?
How do you feel about expected impact vs certain impact?
For any domain of research you’re interested in:
What’s your answer to the Hamming question (“What are the most important problems in your field, and why aren’t you working on them?”)? Why?
If possible, I’d recommend trying to answer these questions out loud with another person listening (just like in an advising call!); they might be able to notice confusions, tensions, and places worth exploring further. Some follow-up prompts that might be applicable to many of the questions above:
How do you feel about that?
Why is that? Why do you believe that?
What would make you change your mind about that?
What assumptions is that built on? What would change if you changed those assumptions?
Have you tried to work on that? What have you tried? What went well, what went poorly, and what did you learn?
Is there anyone you can ask about that? Is there someone you could cold-email about that?
Good luck!
Hi! Half of the time is spent on MechInterp, the other half on other topics (RL and paper replication).
Not sure what the disagree votes are about, but I agree that it would be nice to have more open-source contributors! 😊 The Forum codebase is already open source, and we do occasionally get contributions. We also have a (disorganized) list of issues that people can work on. IMO it’s not the easiest codebase to dive into, and we don’t have much capacity to help people get set up, but now that LLM tools are much better, I could imagine it being not too onerous to contribute.
If anyone wants to help, I’m happy to suggest issues for you! 🙂 Feel free to reach out to me.