I’ve noticed that a lot of the research papers related to artificial intelligence that I see folks citing are not peer reviewed. They tend to be research papers posted to arXiv, papers produced by a company/organization, or otherwise papers that haven’t been reviewed and published in respected/mainstream academic journals.
Is this a concern? I know that there are plenty of problems with the system of academic publishing, but are non-peer reviewed papers fine?
Reasons my gut feeling might be wrong here:
maybe I’m operating with a sort of status quo bias, overly concerned about anything different from the standard system.
maybe experts in the area find these papers to be of acceptable quality.
maybe the handful of papers I’ve seen outside of traditional peer review aren’t representative, suggesting a sort of availability bias, and actually the vast majority of new AI-relevant papers that people care about are really published in top journals. I’m just browsing the internet, so maybe if I were a researcher in this area speaking with other researchers I would have a better sense of what is actually meaningful.
maybe artificial intelligence is an area where peer review doesn’t matter as much, as results can be easily replicated (unlike, say, a history paper, where maybe you didn’t have access to the same archive or field site as the paper’s author did).
I work in AI. Most papers, in peer reviewed venues or not, are awful. Some, in both categories, are good. Knowing whether a work is peer reviewed or not is weak evidence of quality, since so many good researchers think peer review is dumb and don’t bother (especially in safety). E.g. I would generally consider “comes from a reputable industry lab” to be somewhat stronger evidence. Imo the reason “was it peer reviewed” is a useful signal in some fields is largely because the best researchers there try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That’s not the case in AI.
So, it’s an issue, but in the same way that all citations are problematic if you can’t check them yourself/trust the authors to do due diligence
My understanding is that peer review is somewhat less common in computer science fields because research is often published in conference proceedings without extensive peer review. Of course, you could say that the conference itself is doing the vetting here, and computer science often has the advantage of easy replication by running the supplied code. This applies to some of the papers people are providing… but certainly not all of them.
Peer review is far from perfect, but if something isn’t peer reviewed I won’t fully trust it unless it’s gone through an equivalent amount of vetting by other means. I mean, I won’t fully trust a paper that has gone through external peer review, so I certainly won’t immediately trust something that has gone through nothing.
I’m working on an article about this, but I consider the lack of sufficient vetting to be one of the biggest epistemological problems in EA.
Actually, computer science conferences are peer reviewed. They play a similar role as journals in other fields. I think it’s just a historical curiosity that it’s conferences rather than journals that are the prestigious places to publish in CS!
Of course, this doesn’t change the overall picture of some AI work and much AI safety work not being peer reviewed.
Is there a world where 30% tariffs on Chinese goods going into America are net positive for the world?
Could the tariffs reduce consumption and carbon emissions a little in the USA, while China focuses more on selling goods to lower-income countries? Could this perhaps result in a tiny boost in growth in low-income countries?
Could the improved wellbeing/welfare stemming from growth in low-income countries + reduced American consumption offset the harms caused by economic slowdown in America/China?
Probably not—I’m like 75% sure the answer is no, but thought the question might be worth asking...
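For what it’s worth, here is the bare structure of that comparison as a back-of-envelope, with entirely made-up numbers. It uses a log-utility weighting (a dollar counts in proportion to 1/income), which is one common way to make the trade-off explicit; it only shows what would need to be true, not an actual estimate.

```python
# Purely illustrative structure of the question above: does the sum of
# income-weighted welfare changes across regions come out positive?
# All populations, incomes, and income changes are made-up placeholders.
regions = {
    # name: (population, per-capita income in USD, assumed per-capita income change in USD)
    "USA": (330e6, 70_000, -300),
    "China": (1.4e9, 12_000, -100),
    "low-income countries": (700e6, 2_000, +20),
}

net = 0.0
for name, (pop, income, change) in regions.items():
    # Log-utility approximation: welfare change per person ~ change / income
    contribution = pop * change / income
    net += contribution
    print(f"{name}: weighted welfare change ≈ {contribution:+.3e}")

print(f"Net (illustrative numbers only): {net:+.3e}")
```

With these particular made-up numbers the net comes out negative, which matches the "probably not" intuition, but the sign flips easily depending on what you assume about how much growth actually shifts to low-income countries.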
Rutger Bregman is taking the world by storm at the moment, promoting his book and concept “Moral Ambition”. Yesterday he was on The Daily Show! It might be the biggest wave of publicity of largely EA ideas since FTX? Most of what he says is an attractive repackaging, or perhaps an evolution, of largely EA ideas. He’s striking a chord with the mainstream media in a way that I’m not sure effective altruism ever really has (but I wasn’t there in the early days). I would also hazard a guess that his approach might resonate especially well with left-leaning people.
I was wondering if there’s anything EAs could be DOING at the moment to take advantage of/leverage this unexpected wave of EA-adjacent publicity. Things like...
1. Helping with funding advertising, or anything else he might need to ride the wave—these opportunities don’t come often. He may well not need money though...
2. Using his videos and ideas as “ins” or advertising for university EA groups or other outreach. I know he’s going to talk at Harvard uni soon—what is the response of the group there?
3. Incorporating some of his language and ideas into how EA presents itself. Phrases like “moral ambition” and the “Bermuda triangle of talent” seem like great ones to adopt into our “lexicon”, as it were.
Worth noting: Peter Singer and Rutger Bregman’s School for Moral Ambition are co-hosting a Profit for Good conference in Amsterdam on 11 June—a concrete EA-adjacent collaboration that channels Bregman’s “moral ambition” into effective-charity business models. Another good touchpoint for anyone looking to ride this wave.
I think the fellowships look great, but as paid internships, I would have thought they’d only be the best way to collaborate with them for a pretty small number of people?
I think many people can/should apply, but of course I expect only a few will get in.
I also don’t know if “paid internship” is a good description; I think it’s probably closer to Ambitious Impact programs than to a typical internship (the “founding to give” program is made in collaboration with AIM).
Yes it’s quite exciting! One quick thought: I don’t think SMA/Bregman would be very pleased if EAs regularly started using SMA’s videos and ideas in their outreach (because they want to keep themselves quite separate). Maybe he’s changed his view on that (they’re organising an event with Singer and GWWC shortly), but it was certainly the case a year or so ago. If anyone’s considering doing this on a significant scale, I’d suggest checking with them first.
I think there’s merit in discussing and collaborating, and even in keeping something separate. I do think, though, that even if they do manage to gather a significant “movement” or “community” around SMA, it will end up overlapping/melding with the EA community in significant ways. The concepts are just so aligned that it would be hard to keep the communities separate. The percentage overlap will be high, especially after a few years.
Perhaps in his homeland, the Netherlands, this might be possible though, as most likely there will be more uptake there.
Also, he’s praising AMF, collaborating with AIM, and doing events with Singer and GWWC, so it would be a little odd for them to use what EA has generated to big up themselves while not wanting EA to do the same the other way around at all. It seems unlikely they would want this, but maybe I’m missing something.
Appreciate that, brother. Personally I don’t mind disagree votes—there’s plenty there that could reasonably be wrong/disagreed with. It’s the karma downvoting that surprises me more :D. In saying that, I’ve been downvoted for more benign statements ;).
I recently attended his book launch in London, where he was asked about EA. I was surprised by how positive his response was. His main criticism was that EA feels “nerdy,” and that these ideas deserve a much wider audience. I got the impression he sees SMA as at least somewhat aligned with EA, but aimed at a broader audience.
He mentioned Ambitious Impact twice during the talk and profiles them in a chapter in the book. He also shouted out Rob Mather (who was in attendance), and includes at least two chapters on founding and running the Against Malaria Foundation in the book. I haven’t seen other interviews, but it already seems to me like he’s promoting certain EA areas.
RUTGER: I see myself as a pluralist. It’s fine to rely on the full spectrum of human emotions and motivations. Humans are a mixed bag, right? So, we are partially motivated sometimes by things such as compassion, empathy, and altruism, which is wonderful. But we can’t solely rely on that to make this world a wildly better place.
Peter, you’re obviously the founder of the Effective Altruism movement, a movement that I admire. At the same time, though, I feel it’s a bit limited in its reach because many of the effective altruists I’ve spoken to are a bit strange and weird. They’re mainly motivated by this yearning to do good and help others. They are born altruists. A lot of them became vegan when they were very young. Many of them reacted instantly when they read your essay, Famine, Affluence and Morality, and I think what happened in the years around 2010 is that these people discovered one another on social media, and they realised, “Hey, I’m not alone.” But they’ve always been quite weird, which is fine, don’t get me wrong. I’m happy for them to do their work, but at the same time, I thought, perhaps there’s also a place for a broader movement for more “neurotypical people” that relies on other sources of motivation.
Yes, he references quite a few EA case studies in the book and in his talks. From chats I’ve had with people involved, I think they’re being thoughtful about how they relate to the EA brand—trying to reach a broader audience without getting pulled into existing perceptions.
So that’s why I say if you’re thinking of using their work in EA outreach at a significant scale, I’d suggest checking in with them first.
I wonder what can be done to make people more comfortable praising powerful people in EA without feeling like sycophants.
A while ago I saw Dustin Moskovitz commenting on the EA Forum. I thought about expressing my positive impressions of his presence and how incredible it was that he even engaged. I didn’t do that because it felt like sycophancy. The next day he deleted his account. I don’t think my comment would have changed anything in that instance, but I still regretted not commenting.
In general, writing criticism feels more virtuous than writing praise. I used to avoid praising people who had power over me, but now that attitude seems misguided to me. While I’m glad that EA provided an environment where I could feel comfortable criticising the leadership, I’m unhappy about ending up in a situation where occupying leadership positions in EA feels like a curse to potential candidates.
Many community members agree that there is a leadership vacuum in EA. That should lead us to believe people in leadership positions should be rewarded more than they currently are. Part of that reward could be encouragement and I am personally committing to comment on things I like about EA more often.
Besides the point: Dustin Moskovitz deleting his account seems somewhat important. Any idea what is going on there? Of course he is a free person and has every right to do that.
In general, writing criticism feels more virtuous than writing praise.
FWIW it feels the opposite to me. Writing praise feels good; writing criticism feels bad.
(I guess you could say that it’s virtuous to push through those bad feelings and write the criticism anyway? I don’t get any positive feelings or self-image from following that supposed virtue, though.)
Quickly:
1. I agree that this is tricky! I think it can be quite tough to be critical, but as you point out, it can also be quite tough to be positive.
2. One challenge with being positive to those in power is that people can have a hard time believing it. Like, you might just be wanting to be liked. Of course, I assume most people would still recommend being honest; it just can be hard for others to know how to trust it. Also, the situation obviously changes when you’re complimenting people without power (i.e. emerging/local leaders).
Hey! I’m requesting some help with “Actions for Impact”, a Notion page with activities people can get involved in that take less than 30 minutes and can contribute to EA cause areas. This includes signing petitions, emailing MPs, voting for effective charities in competitions, responding to ‘calls for evidence’, or sharing something online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved
It should serve as a hub to leverage the size of the EA community when it’s needed.
I’m excited about the idea and I thought I’d have enough time to keep it updated and share it with organisations and people, but I really don’t. If the idea sounds exciting and you have an hour or two per week spare please DM me, I’d really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you have involvement in EA community building as I don’t at all).
I’ve now spoken to ~1,400 people as an advisor with 80,000 Hours, and if there’s a quick thing I think is worth more people doing, it’s doing a short reflection exercise about one’s current situation.
Below are some (clusters of) questions I often ask in an advising call to facilitate this. I’m often surprised by how much purchase one can get simply from this—noticing one’s own motivations, weighing one’s personal needs against a yearning for impact, identifying blind spots in current plans that could be triaged and easily addressed, etc.
A long list of semi-useful questions I often ask in an advising call
Your context:
What’s your current job like? (or like, for the roles you’ve had in the last few years…)
The role
The tasks and activities
Does it involve management?
What skills do you use? Which ones are you learning?
Is there something in your current job that you want to change, that you don’t like?
Default plan and tactics
What is your default plan?
How soon are you planning to move? How urgently do you need to get a job?
Have you been applying? Getting interviews, offers? Which roles? Why those roles?
Have you been networking? How? What is your current network?
Have you been doing any learning, upskilling? How have you been finding it?
How much time can you find to do things to make a job change? Have you considered e.g. a sabbatical or going down to a 3/4-day week?
What are you feeling blocked/bottlenecked by?
What are your preferences and/or constraints?
Money
Location
What kinds of tasks/skills would you want to use? (writing, speaking, project management, coding, math, your existing skills, etc.)
What skills do you want to develop?
Are you interested in leadership, management, or individual contribution?
Do you want to shoot for impact? How important is it compared to your other preferences?
How much certainty do you want to have wrt your impact?
If you could picture your perfect job – the perfect combination of the above – which ones would you relax first in order to consider a role?
Reflecting more on your values:
What is your moral circle?
Do future people matter?
How do you compare problems?
Do you buy this x-risk stuff?
How do you feel about expected impact vs certain impact?
For any domain of research you’re interested in:
What’s your answer to the Hamming question? Why?
If possible, I’d recommend trying to answer these questions out loud with another person listening (just like in an advising call!); they might be able to notice confusions, tensions, and places worth exploring further. Some follow up prompts that might be applicable to many of the questions above:
How do you feel about that?
Why is that? Why do you believe that?
What would make you change your mind about that?
What assumptions is that built on? What would change if you changed those assumptions?
Have you tried to work on that? What have you tried? What went well, what went poorly, and what did you learn?
Is there anyone you can ask about that? Is there someone you could cold-email about that?
Good luck!
https://economics.mit.edu/news/assuring-accurate-research-record
A really important paper on how AI speeds up R&D discovery was withdrawn and the PhD student who wrote it is no longer at MIT.
I have $20 in unused RunPod.io credit (a cloud GPU service) that I can’t refund. 😢 I’d love to donate it to someone working on anything useful—whether it’s for running models, processing data, or prototyping.
Feel free to message me if you want it.
I know that folks in EA often favor donating to more effective things rather than less effective things. With that in mind, I have mixed feelings knowing that many Harvard faculty are donating 10%, and that they are donating to the best funded and most prestigious university in the world.
On the one hand, it is really nice to know that they are willing to put their money where their mouth is when their institution is under attack. I get some warm fuzzy feelings from the idea of defending an educational institution against political attacks. On the other hand, Harvard University’s endowment is already very large, and Harvard earns a lot of money each year. It is like a very tailored version of a giving pledge: giving to Harvard, giving for one year. Will such a relatively small amount given toward such a relatively large institution do much good? I do wonder what the impact would be if these fairly well-known and well-respected academics announced they were donating 10% to clean water, or to deworming, or to reducing animal suffering. I wonder how much their donations will do for Harvard.
I’ll include a few graphs to illustrate Harvard’s financial strength.
Some notes about the graphs:
These are from a project I did several months ago using data from the Common Data Set, from College Scorecard, from their Form 990 tax filings, and some data from the colleges’ websites.
The selection of the non-Harvard schools is fairly arbitrary. For that particular project I just wanted to select a few different types of schools (small liberal arts, more technical focused, etc.) rather than comparing Harvard to other ‘hyper elite’ schools.
I left the endowment graph non-logarithmic just to illustrate the ludicrous difference. Yes, I know it is bad design practice and that it obscures the numbers for the non-Harvard schools.
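If anyone wants to recreate that kind of chart, here is a minimal matplotlib sketch of the comparison. The dollar figures are illustrative placeholders (Harvard’s endowment is on the order of $50B; “School A/B/C” are hypothetical stand-ins), not the Common Data Set / Form 990 numbers from the original project.

```python
# Illustrative sketch only: a linear-scale bar chart of endowments, with
# made-up figures for the non-Harvard schools.
import matplotlib.pyplot as plt

schools = ["Harvard", "School A", "School B", "School C"]
endowment_billions = [50.0, 2.0, 1.2, 0.4]  # illustrative values, USD billions

fig, ax = plt.subplots()
ax.bar(schools, endowment_billions)
ax.set_ylabel("Endowment (USD, billions)")
ax.set_title("Endowment comparison (illustrative values)")
# Kept on a linear scale on purpose: the point is the size of the gap.
# Uncomment the next line to make the smaller bars readable instead.
# ax.set_yscale("log")
plt.tight_layout()
plt.show()
```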
As a group organiser I was wildly miscalibrated about the acceptance rate for EAGs! I spoke to the EAG team, and here are the actual figures:
The overall acceptance rate for undergraduate students is about ¾! (2024)
For undergraduate first-timers, it’s about ½ (Bay Area 2025)
If that’s piqued your interest, EAG London 2025 applications close soon—apply here!
Jemima
Ah that’s great info! Would be useful to get similar numbers for EAGx events. I know the overall acceptance rate is quite high, but don’t know how it is for students who are applying for their regional EAGx.
EAGx undergraduate acceptance rate across 2024 and 2025 = ~82%
EAGx first-timer undergraduate acceptance rate across 2024 and 2025 = ~76%
Obvious caveat that if we tell lots of people that the acceptance rate is high, we might attract more people without any context on EA and the rate would go down.
(I’ve not closely checked the data)
I feel like EAs might be sleeping a bit on digital meetups/conferences.
My impression is that many people prefer in-person events to online ones. But at the same time, a lot of people hate needing to be in the Bay Area / London or having to travel to events.
There was one EAG online during the pandemic (I believe the others were EAGxs), and I had a pretty good experience there. Some downsides, but some strong upsides. It seemed very promising to me.
I’m particularly excited about VR. I have a Quest 3, and have been impressed by the experience of chatting to people in VRChat. The main downside is that there aren’t any professional events in VR that would interest me. Quest 3s are expensive ($500), but far cheaper than housing and office space in Berkeley or London.
I’d also flag:
1. I think that video calls can be dramatically improved with better microphone and camera setups. These can cost $200 to $2k or so, but make a major difference.
2. I’ve been doing some digging into platforms similar to GatherTown. I found GatherTown fairly ugly, off-putting, and limited. SpatialChat seems promising, though it’s more expensive. Zoom seems to be experimenting in the space with products like Zoom Huddles (for coworking in small groups), but these are new.
3. I like Focusmate, but think we could have better spaces for EAs/community members.
4. I think that people above the age of 25 or so find VR weird for what I’d describe as mostly status quo bias. Younger people seem to be far more willing and excited to hang out in VR.
5. I obviously think this is a larger business question. It seems like there was a wave of enthusiasm for remote work during COVID, and this has mostly dried up. However, there are still a ton of remote workers. My guess is that businesses are making a major mistake by not investing enough in better remote software and setups.
6. Organizing community is hard, even if it’s online. I’d like to see more attempts to pay people to organize online coworking spaces and meetups.
7. I think that online events/conferences have become associated with the most junior talent. This seems like a pity to me.
8. I expect that different online events should come with different communities and different restrictions. A lot of existing online events/conferences are open to everyone, but then this means that they will be optimized for the most junior people. I think that we want a mix here.
9. Personally, I abhor the idea that I need to couple the place where I physically live with the friends and colleagues I have. I’d very much prefer optimizing for these two separately.
10. I think our community would generally be better off if remote work were easier to do. I’d expect this would help on multiple fronts—better talent, happier talent, lower expenses, more resilience from national politics, etc. This is extra relevant given the current US political climate, which makes it tougher to recommend that others immigrate to the US or even visit (and the situation might get worse).
11. I’d definitely admit that remote work has a lot of downsides right now, especially with the current tech. So I’m not recommending that all orgs go remote. Just that we work on improving our remote/online infrastructure.
Have you checked out the EA Gather? It’s been languishing a bit for want of more input from me, but I still find it a really pleasant place for coworking, and it’s had several events run or part-run on there—though you’d have to check in with the organisers to see how successful they were.
I assumed it’s been mostly dead for a while (haven’t heard about it for a few months). I’m very supportive of it, would like to see it (and more) do well.
It’s still in use, but it has the basic problem of EA services that unless there’s something to announce, there’s not really any socially acceptable way of advertising it.
Similar to “Greenwashing” and “Safetywashing”, I’ve been thinking about “Intellectual Washing.”
The pattern works like this: “Find someone who seems like an intellectual who somewhat aligns with your position. Then claim you have strong intellectual (and by extension, logical) support for your views.”
This is easiest to see on sides that you disagree with.
For example, MAGA gets intellectual cred from “The dark enlightenment” / Curtis Yarvin / Peter Thiel / etc. But I’m sure Trump never listened to any of these people, and was likely barely influenced by them. [1]
Hitler famously claimed alignment with Nietzsche, and had support from Heidegger. Note that Nietzsche didn’t agree with this. And I’d expect Hitler engaged very little with Heidegger’s ideas.
There’s a structural risk for intellectuals: their work can be appropriated not as a nuanced set of ideas to be understood, but as legitimizing tokens for powerful interests.
The dynamics that enable this include:
- The difficulty of making a living or gaining attention as a serious thinker
- Public resource/interest constraints around complex topics
- The ready opportunity to be used as a simple token of support for pre-existing agendas
Note: There’s a long list of types of “X-washing.” There’s an interesting discussion to be had about the best terminology for this area, but I suspect most readers won’t find that particularly interesting. One related concept is “selling out”, as when an artist with street cred pairs up with a large brand/label or similar.
[1] While JD Vance might represent some genuine intellectual influence, and Thiel may have achieved specific narrow technical implementations, these appear relatively minor in the broader context of policy influence.
What can ordinary people do to reduce AI risk? People who don’t have expertise in AI research / decision theory / policy / etc.
Some ideas:
Donate to orgs that are working to reduce AI risk (which ones, though?)
Write letters to policy-makers expressing your concerns
Be public about your concerns. Normalize caring about x-risk
I have a bunch of disagreements with Good Ventures and how they are allocating their funds, but also Dustin and Cari are plausibly the best people who ever lived.
I want to agree, but “best people who ever lived” is a ridiculously high bar! I’d imagine that both of them would be hesitant to claim anything quite that high.
“Plausibly best people who have ever lived” is a much lower bar than “best people who have ever lived”.
If you are like me, this comment will leave you perplexed. After a while, I realized that it should not be read as
but as
fwiw i instinctively read it as the 2nd, which i think is caleb’s intended reading
I was going for the second, adding some quotes to make it clearer.
Yeah, sorry: it was obvious to me that this was the intended meaning, after I realized it could be interpreted this way. I noted it because I found the syntactic ambiguity mildly interesting/amusing.
For example, Norman Borlaug is often called “the father of the Green Revolution”, and is credited with saving a billion people worldwide from starving to death. Stanislav Petrov and Vasily Arkhipov prevented a probable nuclear war from happening.
It’s true: how many people actually give away so much money as they make it?
The UK offers better access as a conference location for international participants compared to the US or the EU.
I’m being invited to conferences in different parts of the world as a Turkish citizen, and visa processes for the US and the EU have gotten a lot more difficult lately. I’m unable to even get a visa appointment for several European countries, and my appointment for the US visa was scheduled 16 months out. I believe the situation is similar for visa applicants from other countries. The UK currently offers the smoothest process with timelines of only a few weeks. Conference organizers that seek applications from all over the world could choose the UK over other options.
There’s been some neat work on making AI agent forecasters. Some of these seem to have pretty decent levels of accuracy, vs. certain sets of humans.
And yet, very little of this seems to be used in the wild, from what I can tell.
It’s one thing to show some promising results in a limited study. But ultimately, we want these tools to be used by real people.
I assume some obvious todos would be:
1. Websites where you can easily ask one or multiple AI forecasters questions.
2. Competing services that package “AI forecasting” tools in different ways, focusing on optimizing (positive) engagement.
3. I assume that many AI forecasters should really be racking up good scores in Metaculus/Manifold now. The limitation seems to mainly be effort—neither platform has significant incentives yet.
Optimizing AI forecasting bots, but only in experimental settings, seems akin to optimizing cameras, but only in experimental settings. I’d expect you’d wind up with things that are technically impressive but highly unusable. We might learn a lot about a few technical challenges, but little about what real use would look like or what the key bottlenecks will be.
I haven’t been following this area closely, but why aren’t they making a lot of money on polymarket?
I’m sure some people are using custom AI tools for polymarket, but I don’t expect that to be very public.
I was focusing on Metaculus/Manifold, where I don’t think there’s much AI bot engagement yet. (Metaculus does have a dedicated tournament, but that’s separate from the main part we see, I believe).
Also what are the main or best open source projects in the space? Or if someone wanted to actually use LMs for forecasting, what is better than just asking o3 to produce a forecast?
There’s some relevant discussion here:
https://forum.effectivealtruism.org/posts/TG2zCDCozMcDLgoJ5/metaculus-q4-ai-benchmarking-bots-are-closing-the-gap?commentId=TvwwuKB6rNASzMNoo
Basically, it seems like people haven’t outperformed the Metaculus template bot much, which IMO is fairly underwhelming, but it is what it is.
You can apply simple tricks though, like running it a few times and averaging the results.
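To make that concrete, here is a minimal sketch of the “run it a few times and average” idea, assuming the OpenAI Python client; the model name, prompt wording, and percentage-parsing regex are illustrative assumptions rather than a tested pipeline.

```python
# Minimal sketch: ask an LLM several times for a probability forecast and
# average the answers (a crude self-consistency ensemble).
import re
from statistics import mean

from openai import OpenAI  # assumes the `openai` v1 client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Will global mean temperature in 2025 set a new record high?"
PROMPT = (
    "You are a careful forecaster. Consider base rates and recent evidence, "
    "then answer with a single line of the form 'Probability: XX%'.\n\n"
    f"Question: {QUESTION}"
)

def one_forecast():
    """Ask the model once; return a probability in [0, 1] or None if parsing fails."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; swap in whichever model you actually use
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # sampling noise is what makes averaging useful
    )
    text = resp.choices[0].message.content or ""
    match = re.search(r"(\d{1,3}(?:\.\d+)?)\s*%", text)
    return float(match.group(1)) / 100 if match else None

samples = [p for p in (one_forecast() for _ in range(5)) if p is not None]
if samples:
    print(f"{len(samples)} usable samples, mean probability = {mean(samples):.2f}")
```

A sturdier version would also log the model’s reasoning and track calibration with a proper scoring rule over many resolved questions.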
Would anyone be up for reading and responding to this article? I find myself agreeing with a lot of it.
“Effective altruism is a movement that excludes poor people”
This is a ten year old article, but it was discussed at the time—see e.g. here.
Rubenstein says that “As the low-hanging fruit of basic health programs and cash transfers are exhausted, saving lives and alleviating suffering will require more complicated political action, such as reforming global institutions.” Unfortunately, there’s a whole lot of low-hanging fruit out there, and things have gotten even worse as of late with the USAID collapse and the UK cutting back on foreign aid.
In general, as the level of EA’s involvement and influence in a given domain increases, the more I start to be concerned about the sort of things that Rubenstein worries about here. When a particular approach is at a smaller size, it’s likely to concentrate on niches where its strengths shine and its limitations are less relevant. I would put the classic GiveWell-type interventions in that category, for instance. Compared to the scope of both the needs in global health & development and the actions of other actors, EA is still a fairly small fish.
I’m currently reviewing Wild Animal Initiative’s strategy in light of the US political situation. The rough idea is that things aren’t great here for wild animal welfare or for science, we’re at a critical time in the discipline when things could grow a lot faster relatively soon, and the UK and the EU might generally look quite a bit better for this work in light of those changes. We do already support a lot of scientists in Europe, so this wouldn’t be a huge shift in strategy. It’s more about how much weight to put toward what locations for community and science building, and also whether we need to make any operational changes (at this early stage, we’re trying to be very open-minded about options — anything from offering various kinds of support to staff to opening a UK branch).
However, in trying to get a sense of whether that rough approach is right, it’s extremely hard to get accurate takes (or, at least, to be able to tell whether someone is thinking of the relevant risks rationally). And it’s hard to tell whether “how people feel now” will have a lasting impact. For example, a lot of the reporting on scientist sentiment sounds extremely grim (example 1, 2, 3), but it’s hard to know how large the effect will be over the next few years—a reduction in scientific talent, certainly, but so much so that the UK is a better place to work given our historical reasons for existing in the US? Less clear.
It doesn’t help that I personally feel extremely angry about the political situation, so that is probably biasing my research.
Curious if any US-based EA orgs have considered leaving the US or taking some other operational/strategic step, given the political situation/staff concerns/etc? Why or why not?
Really appreciate you @mal_graham🔸 thinking out loud on this. Watching from Uganda, I totally get the frustration; the US climate feels increasingly hostile to science and progressive work like wild animal welfare. So yeah, shifting more focus to the UK/EU makes sense, especially if it helps stabilize research and morale. That said, if you’re already rethinking geography and community building, I’d gently suggest looking beyond the usual Global North pivots. Regions like East Africa are incredibly underrepresented but ecologically critical, and honestly there’s a small but growing base of people here hungry to build this field with proper support. If there’s ever a window to make this movement more global and future-proof, it might be now. Happy to chat more if useful.
Thank you for your comment! It’s actually a topic of quite a lot of discussion for us, so I would love to connect on it. I’ll send you a DM soon.
Just for context, the main reason I’ve felt a little constrained to the US/UK context is due to comparative advantage considerations, such as having staff who are primarily based in those countries/speaking English as our organizational common tongue/being most familiar with those academic communities, etc.
I definitely think the WAW community, in general, should be investing much more outside of just US/UK/EU—but am less sure whether it makes sense for WAI to do so, given our existing investments/strengths. But I could be convinced otherwise!
Even if we keep our main focus in the US/UK, I’d be very interested in hearing more about how WAI might be able to support the “people hungry to build the field” in other countries, so that could be another thing to discuss.
Bill Gates: “My new deadline: 20 years to give away virtually all my wealth”—https://www.gatesnotes.com/home/home-page-topic/reader/n20-years-to-give-away-virtually-all-my-wealth
All of the headlines are trying to run with the narrative that this is due to Trump pressure, but I can’t see a clear mechanism for this. Does anyone have a good read on why he’s changed his mind? (Recent events feel like: Buffett moving his money to his kids’ foundations & retiring from Berkshire Hathaway, divorce)
Why not what seems to be the obvious mechanism: the cuts to USAID making this more urgent and imperative. Or am I missing something?
“A few years ago, I began to rethink that approach. More recently, with the input from our board, I now believe we can achieve the foundation’s goals on a shorter timeline, especially if we double down on key investments and provide more certainty to our partners.”
It seems it was more of a question of whether they could grant larger amounts effectively, which he was considering for multiple years (I don’t know how much of that may be possible due to aid cuts).
I have only speculation, but it’s plausible to me that developments in AI could be playing a role. The original decision in 2000 was to sunset “several decades after [Bill and Melinda Gates’] deaths.” Likely the idea was that handpicked successor leadership could carry out the founders’ vision and that the world would be similar enough to the world at the time of their death or disability for that plan to make sense for several decades after the founders’ deaths. To the extent that Gates thought that the world is going to change more rapidly than he believed in 2000, this plan may look less attractive than it once did.
(just speculating, would like to have other inputs)
I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of AI risks coming from narrow AI. Here I mean AGI x-risk/s-risk vs narrow AI (+ possibly malevolent actors or coordination issues) x-risk/s-risk.
I worry about prioritising AGI when doing outreach because it may make the public dismiss the whole thing as a pipe dream. This happened to me a while ago.
My take is that there are strong arguments for why AGI x-risk is overwhelmingly more important than risk from narrow AI, and I think those arguments are the main reason why AGI risk gets more attention among EAs.
Thank you for your comment. I edited my post for clarity. I was already thinking of x-risk or s-risk (both in AGI risk and in narrow AI risk).
Ah I see what you’re saying. I can’t recall seeing much discussion on this. My guess is that it would be hard to develop a non-superintelligent AI that poses an extinction risk but I haven’t really thought about it. It does sound like something that deserves some thought.
When people raise particular concerns about powerful AI, such as risks from synthetic biology, they often talk about them as risks from general AI, but they could come from narrow AI, too. For example some people have talked about the risk that narrow AI could be used by humans to develop dangerous engineered viruses.
My uninformed guess is that an automatic system doesn’t need to be superintelligent to create trouble, it only needs some specific abilities (depending on the kind of trouble).
For example, the machine doesn’t need to be agentic if there is a human agent deciding to make bad stuff happen.
So I think it would be an important point to discuss, and maybe someone has done it already.
I’ve noticed that a lot of the research papers related to artificial intelligence that I see folks citing are not peer reviewed. They tend to be research papers posted to arXiv, papers produced by a company/organization, or otherwise papers that haven’t been reviewed and published in respected/mainstream academic journals.
Is this a concern? I know that there are plenty of problems with the system of academic publishing, but are non-peer reviewed papers fine?
Reasons my gut feeling might be wrong here:
maybe I’m overly focusing, with a sort of status quo bias, overly concerned about anything different from the standard system.
maybe experts in the area find these papers to be of acceptable quality.
maybe the handful of papers I’ve seen outside of traditional peer review aren’t representative, suggesting a sort of availability bias, and actually the vast majority of new AI-relevant papers that people care about are really published in top journals. I’m just browsing the internet, so maybe if I were a researcher in this area speaking with other researchers I would have a better sense of what is actually meaningful.
maybe artificial intelligence is an area where peer review doesn’t matter as much, as results can be easily replicated (unlike, say, a history paper, where maybe you didn’t have access to the same archive or field site as the paper’s author did).
I work in AI. Most papers, in peer-reviewed venues or not, are awful. Some, in both categories, are good. Knowing whether a work is peer reviewed is weak evidence of quality, since so many good researchers think peer review is dumb and don’t bother (especially in safety). E.g. I would generally consider “comes from a reputable industry lab” to be somewhat stronger evidence. Imo the reason “was it peer reviewed” is a useful signal in some fields is largely because the best researchers there try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That’s not the case in AI.
So, it’s an issue, but in the same way that all citations are problematic if you can’t check them yourself or trust the authors to do due diligence.
My understanding is that peer review is somewhat less common in computer science fields because research is often published in conference proceedings without extensive peer review. Of course, you could say that the conference itself is doing the vetting here, and computer science often has the advantage of easy replication by running the supplied code. This applies to some of the papers people are providing… but certainly not all of them.
Peer review is far from perfect, but if something isn’t peer reviewed I won’t fully trust it unless it’s gone through an equivalent amount of vetting by other means. I mean, I won’t fully trust a paper that has gone through external peer review, so I certainly won’t immediately trust something that has gone through nothing.
I’m working on an article about this, but I consider the lack of sufficient vetting to be one of the biggest epistemological problems in EA.
Actually, computer science conferences are peer reviewed. They play a similar role as journals in other fields. I think it’s just a historical curiosity that it’s conferences rather than journals that are the prestigious places to publish in CS!
Of course, this doesn’t change the overall picture of some AI work and much AI safety work not being peer reviewed.
Is there a world where 30% tariffs on Chinese goods going into America are net positive for the world?
Could the tariff reduce consumption and carbon emissions a little in the USA, while China puts more focus on selling goods to lower-income countries? Could this perhaps result in a tiny boost in growth in low-income countries?
Could the improved wellbeing/welfare stemming from growth in low income countries + reduced American consumption offset the harms caused by economic slowdown in America/China?
Probably not—I’m like 75% sure the answer is no, but thought the question might be worth asking...
Rutger Bregman is taking the world by storm at the moment, promoting his book and concept “Moral Ambition”. Yesterday he was on The Daily Show! It might be the biggest wave of publicity for largely EA ideas since FTX. Most of what he says is an attractive repackaging, or perhaps an evolution, of largely EA ideas. He’s striking a chord with the mainstream media in a way that I’m not sure effective altruism ever really has (but I wasn’t there in the early days). I would also hazard a guess that his approach might resonate especially well with left-leaning people.
I was wondering if there’s anything EAs could be DOING at the moment to take advantage of/leverage this unexpected wave of EA-adjacent publicity. Things like...
1. Help with funding advertising, or anything else he might need to ride the wave—these opportunities don’t come often. He may well not need money, though...
2. Using his videos and ideas as “ins” or advertising to university EA groups or other outreach. I know he’s going to talk at Harvard uni soon—what is the Harvard group’s response?
3. Incorporating some of his language and ideas into how EA presents itself. Phrases like “Moral ambition”, and the “Bermuda triangle of talent” seem like great phrases to adopt into our “lexicon” as it were.
Thoughts?
Worth noting: Peter Singer and Rutger Bregman’s School for Moral Ambition are co-hosting a Profit for Good conference in Amsterdam on 11 June—a concrete EA-adjacent collaboration that channels Bregman’s “moral ambition” into effective-charity business models. Another good touchpoint for anyone looking to ride this wave.
https://www.moralambition.org/profit-for-good-conference-live-stream
I think for most people applying to their fellowships would be the best way to collaborate with SMA to do good (as he mentions in the video)
I think the fellowships look great, but as paid internships I would have thought they’d only be the best way to collaborate for a pretty small number of people?
I think many people can/should apply, but of course I expect only few will get in.
I also don’t know if “paid internship” is a good description, I think it’s probably closer to Ambitious Impact programs than to a typical internship (the “founding to give” program is made in collaboration with AIM)
Yes it’s quite exciting! One quick thought: I don’t think SMA/Bregman would be very pleased if EAs regularly started using SMA’s videos and ideas in their outreach (because they want to keep themselves quite separate). Maybe he’s changed his view on that (they’re organising an event with Singer and GWWC shortly), but it was certainly the case a year or so ago. If anyone’s considering doing this on a significant scale, I’d suggest checking with them first.
I think there’s merit in discussing and collaborating, and even in keeping some things separate. I do think, though, that even if they do manage to gather a significant “movement” or “community” around SMA, it will end up overlapping/melding with the EA community in significant ways. The concepts are just so aligned that it would be hard to keep the communities separate. Percent overlap will be high, especially after a few years.
Perhaps in his homeland, the Netherlands, this might be possible though, as most likely there will be more uptake there.
Also, he’s praising AMF, collaborating with AIM, and doing events with Singer and GWWC, so it would be a little odd for them to use what EA has generated to big up themselves while not wanting EA to do the same the other way around. It seems unlikely they would want this, but maybe I’m missing something.
Yup, I largely agree; I’m not sure why people are disagree-voting with you.
Appreciate that, brother. Personally I don’t mind disagree votes—there’s plenty there that could reasonably be wrong/disagreed with. It’s the karma downvoting that surprises me more :D. In saying that, I’ve been downvoted for more benign statements ;).
I recently attended his book launch in London, where he was asked about EA. I was surprised by how positive his response was. His main criticism was that EA feels “nerdy,” and that these ideas deserve a much wider audience. I got the impression he sees SMA as at least somewhat aligned with EA, but aimed at a broader audience.
He mentioned Ambitious Impact twice during the talk and profiles them in a chapter in the book. He also shouted out Rob Mather (who was in attendance), and includes at least two chapters on founding and running the Against Malaria Foundation in the book.
I haven’t seen other interviews, but it already seems to me like he’s promoting certain EA areas.
He expresses similar views in his recent interview with Peter Singer:
I also think that EA feels super nerdy and these ideas deserve a broader audience.
That’s good!
Yes, he references quite a few EA case studies in the book and in his talks. From chats I’ve had with people involved, I think they’re being thoughtful about how they relate to the EA brand—trying to reach a broader audience without getting pulled into existing perceptions.
So that’s why I say if you’re thinking of using their work in EA outreach at a significant scale, I’d suggest checking in with them first.
Very fair!
I wonder what can be done to make people more comfortable praising powerful people in EA without feeling like sycophants.
A while ago I saw Dustin Moskovitz commenting on the EA Forum. I thought about expressing my positive impressions of his presence and how incredible it was that he even engaged. I didn’t do that because it felt like sycophancy. The next day he deleted his account. I don’t think my comment would have changed anything in that instance, but I still regretted not commenting.
In general, writing criticism feels more virtuous than writing praise. I used to avoid praising people who had power over me, but now that attitude seems misguided to me. While I’m glad that EA provided an environment where I could feel comfortable criticising the leadership, I’m unhappy about ending up in a situation where occupying leadership positions in EA feels like a curse to potential candidates.
Many community members agree that there is a leadership vacuum in EA. That should lead us to believe that people in leadership positions should be rewarded more than they currently are. Part of that reward could be encouragement, and I am personally committing to commenting on things I like about EA more often.
I think using an anonymous account helps a bit with that, especially when writing praise feels cringy
Beside the point, but Dustin Moskovitz deleting his account seems somewhat important. Any idea what’s going on there? Of course he is a free person and has every right to do that.
FWIW it feels the opposite to me. Writing praise feels good; writing criticism feels bad.
(I guess you could say that it’s virtuous to push through those bad feelings and write the criticism anyway? I don’t get any positive feelings or self-image from following that supposed virtue, though.)
Quickly:
1. I agree that this is tricky! I think it can be quite tough to be critical, but as you point out, it can also be quite tough to be positive.
2. One challenge with being positive to those in power is that people can have a hard time believing it. Like, you might just be wanting to be liked. Of course, I assume most people would still recommend being honest; it’s just that it can be hard for others to know how to trust it. Also, the situation obviously changes when you’re complimenting people without power (i.e. emerging/local leaders).
Hey! I’m requesting some help with “Actions for Impact”. It’s a Notion page with activities people can get involved in that take less than 30 minutes and can contribute to EA cause areas. This includes signing petitions, emailing MPs, voting for effective charities in competitions, responding to ‘calls for evidence’, or sharing something online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved
It should serve as a hub to leverage the size of the EA community when it’s needed.
I’m excited about the idea and I thought I’d have enough time to keep it updated and share it with organisations and people, but I really don’t. If the idea sounds exciting and you have an hour or two per week spare please DM me, I’d really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you have involvement in EA community building as I don’t at all).