I keep coming back to this post and feeling that, if anything, I didn’t express strongly enough just how awful and dangerous Trump is.
Denis
This is a great post!
I’ve worked in non-EA roles where I was a hiring manager and we had many high-quality applicants for a single role. For example, hiring post-doc chemists is humbling when you see 50 CVs of people who have each done incredible work and are far more qualified than I am.
At first, it seems like an abundance of choice. But what is surprising is that we almost never reject someone without a real reason. Sure, “there were better candidates” can be true, but usually I can put my finger on a few reasons why we decided this. You are probably a great person, but if you don’t get hired, it’s very likely that someone can tell you exactly why: what the other candidate had, or what they did differently.
So feedback is super useful—but only if you can get good, honest feedback—and you’ll only get this if you are very receptive, non-defensive, and totally respectful of the interviewer’s time.
For good, motivated candidates, I often offer to do a 30-minute feedback session after their last interview. I will get quite granular: “when we asked you X, you replied Y, and it wasn’t a very convincing answer; we would expect a candidate of your calibre to have given answer Z” or “we had 4 applicants who had done full post-docs in small-angle light scattering, which is the core of the role, and it was always going to be difficult for you without this experience.” And also very basic things like “If you start to feel tired, have a strong coffee. We’re judging you against other candidates who are fully focused; if you’re tired, it’s just harder.”
When I applied for my first job in a full-time EA role, a very helpful hiring manager, Michael Aird, did exactly this, and gave me so much good feedback and tangible advice that it really step-changed my approach to EA job-seeking.
I still got plenty of rejections though :) - I was even rejected as an attendee for EAG London while I was doing an incubator with AIM! So it’s also great to get used to rejection and learn from it!
Yes, in many circles the EA brand is toxic.
But sometimes we stick our heads in the sand as if that were something we couldn’t control.
Or maybe some EAs kind of like this feeling of being outsiders and being in the minority. I don’t know.
Every other group I’ve ever worked with accepts that PR is part of the world. Companies know that they will get good press and bad press, and that it won’t always reflect reality, but they hire people to make it as positive as possible. Politicians are the same. They run focus groups and figure out what words to use to share their message with the public, to maximise support.
Too often we act like we’re above all that. We’re right and that’s enough. If people can’t accept that, that’s their loss.
But it’s not their loss. It’s our loss. It’s the world’s loss.
Public perception of EAs outside the EA community is often “a bunch of ‘rationalist’ tech guys who like to argue about abstract concepts and believe that AI should have rights,” or something along those lines. This is totally at odds with the vast majority of EAs, who are among the most generous, caring people in the world, and who want to help people and animals who are suffering.
A world run by EAs, or on EA principles, would be so wonderful. This should be our vision if we’re truly sincere. But if we want to make this happen, we need to be willing to get our hands dirty, do the PR, challenge the newspaper articles that mischaracterize us, and learn to communicate in 15-second tweets as well as long-form essays, so that more people can be exposed to EA ideas rather than stereotypes.
If you ask anyone outside the EA community to name an EA, they have probably only heard of SBF. If you push them, they might wonder if Elon Musk is also an EA. It’s no wonder they don’t trust EAs. But it’s up to us to proactively change that perception.
There may have been a time when EAs should have stayed out of politics. This isn’t it.
There may have been times when we should have separated our EA discussions from political opinions—times when, even if we felt strongly about political questions, we should have kept those opinions out of our EA discussions.
Today, we do not have that luxury. We need to get our hands dirty.
Many of us care deeply about the world, yet for fear of being called “partisan” do not dare to point out the obvious FACT that there is one party which currently stands for everything that EAs oppose.
I have written this before, and I got a lot of downvotes, but I will say it again.
By far the most effective, impactful thing the EA movement could do would be to find a way to stop Donald Trump and his cronies from destroying so much.
I fully accept that EAs should include and listen to both Democrats and Republicans, liberals and conservatives. But Republicans, even more than Democrats, should be putting their necks on the line to stop Trump from destroying their party in addition to destroying the US. There is no coherent way that anyone could be both an EA and a Trump supporter.
Stopping Trump from destroying the US, destroying AI Governance, destroying global aid, destroying climate action, … is the single most important task in the world right now.
Those of us outside the US need our US colleagues, EAs and non-EAs alike, to do what you can. We need to push our own politicians not to be such pathetic walk-over appeasers too, and we’re working on that.
So yes, we need to start engaging in politics, at least until this emergency is over.
Amazing!
Congrats from pledger #9397.
Also wonderful cameos from Helene and Romain (who was the person who pushed me over the line to pledge, one of the rare good decisions I’ve made…).
As someone who considers myself both an EA and a socialist (by the normal definition), I am confused by this post :D.
Socialists believe in things like social safety nets, universal health care, equal opportunity education, respect for minorities—essentially, they believe that all humans deserve respect and the chance of a healthy, happy life, regardless of their circumstances of birth.
I think most EAs believe something similar.
Furthermore, if adopting what you describe as socialist thinking were the best (most effective) way to bring about change, EAs would support it.
But I don’t think you’ve described socialist thinking, but rather Marxist philosophy.
And so, what you have described is not an attempt to turn EAs into socialists, but rather an attempt to turn them into Marxists.
The problem with Marxism, as perfectly captured by Bertrand Russell, is that it is a very negative, hate-filled philosophy. It is a bit like MAGA—it is defined by who it hates (the bourgeoisie); it is focused on cutting them down.
EA, on the other hand, is driven by love. It is about helping people, helping animals, and helping avoid existential risks. EAs do not focus on “who are the people we want to hurt?” as Marxists do, but rather on who we can help, and how best we can help them.
In my experience, most socialists (as distinct from Marxists) have a similar philosophy.
I love this post. In the past week I have had a few eye-opening moments which strongly support this way of thinking:
1. I was speaking to a junior EU official, like under 30, less than 3 years in his role. He mentioned that people underestimate the influence people at his level can have in a big organisation. We all know that it is the politicians and very senior officials who decide the budget for important interventions, anything from Developmental Aid to AI Safety. What we often overlook is that it’s often very junior people who execute these instructions, and this often means that they get to decide (or at least suggest) how the money the politicians have approved should be spent at a granular level. People who take the time to learn about this process and then look for the appropriate roles can find themselves deciding which charity should get millions of euros, or which initiative to support. People put massive effort into debating policy (how big is the budget) and in this arena, we tend to have very little influence against all the big players. But if we were to focus on how specifically a part of the budget is spent (e.g. ensuring it supports evidence-based, effective interventions), we could have much more impact. And yet, very few of us do this.
2. I’ve been following Rutger Bregman for a while now, and one of the things he keeps emphasising is the importance of actually doing something tangible. I also read the following provocative quote from Cate Hall (Useful Fictions): “Ideas are cheap and easy to find; execution is everything. Effective altruists would be a lot more effective if they internalized this.”
When we do tangible things, we tend to need tangible, boring skill sets: the ability to parse long legal documents or study financial spreadsheets; a deep understanding of arcane areas of law and precedent—e.g. tax law, or liability law as it relates to tobacco companies, for two of Rutger’s initiatives. We all have great theories about how the world should be, and those are important visions to keep in mind, but it isn’t for want of these visions that progress is so slow.
People who work in politics already understand this. The movements which succeed don’t just have big visions, they also have thousands of volunteers who study the precise rules of vote-counting, who look at the logistics of getting their voters to the polling stations, who (if they’re in the GOP anyway) look for rules that might enable them to prevent likely opponents from voting or even from running as candidates. The boring tedious stuff.
How much of the tragedy of the past 25 years would have been avoided if some Democrat in 2000 had spent a few hours studying the legal details of hanging chads and found a way to just count those votes before the whole drama even started? If someone had done that, we would never have known their name or what they did; nobody would write poems about them. But they might have prevented multiple wars and millions of deaths.
Congratulations, Jen and Romain!
This is fantastic progress for year 1, and it augurs very well for the future. All of us starting new EGIs have a lot to learn from what you have done: what worked well and what didn’t. So I really appreciate you writing this article and sharing it. The Effective Giving community is just wonderful at sharing resources and wisdom; that alone almost makes it worth being part of.
At Effective Giving Ireland, we’re about a year behind you, but looking to deliver something good for Giving Season 2025. We’ll definitely take your experience into account, and probably pester you guys with questions …
Good luck, and continue the fantastic progress!
I love this post. I don’t necessarily agree with everything, but I love that you are willing to say something provocative, to stick your neck on the line and say what probably a lot of people are thinking.
I am in exactly this situation. I am not a vegan, and I donate to a great charity, FarmKind. I believe that my net impact on animal welfare is positive. But I also agree that this is largely a product of my privilege of being able to donate without much hardship.
This post, more than anything else I’ve read or seen on this topic, made me pause and question my own ethics. Any post that has that effect is a good post; we can all do with having our ethical assumptions challenged every now and then.
There are complex arguments about the value and necessity of being vegan, I am not expert enough to add new value to that debate.
My one observation comes from living in a world where, if you mention EA, people respond “what’s that?”—and if they’ve heard of it, it’s because of SBF. In this world, especially in Europe, veganism is sometimes seen (absolutely without justification!) as something people do to impress others, rather than as something virtuous (look at all the jokes about vegans). So I’m not sure how well the showing-virtue argument works outside areas where veganism is already popular. But it surely doesn’t hurt—more vegans will lead to even more vegans…
So thanks for a great post!
PS I really hope the people who gave it an X also replied or commented. I think when someone presents a coherent argument, and you disagree with it enough to give it an X, you should explain what exactly you disagree with.
That is awesome feedback, James. Thank you!
An interesting way to think about this is that when you donate money to a good cause, nobody can ever take that away from you. Not the tax-man, not Trump, not a recession or a stock-market collapse. You will forever have that “credit” in your account.
The idea of this post is so obviously correct that anything else just doesn’t make sense. If it’s a competition for who has earned / inherited the most money—which sadly for some people it is—then why shouldn’t money voluntarily given away be part of the total?
Right now, rich lists are a contest to find the greediest humans: people who amass, but do not share, huge fortunes. Maybe calling them the “Forbes Greed List” would help change this?
People who have devoted more time and energy to EA, and have a deeper grasp of it, should have a bigger role in defining what is or isn’t worth other people reading. It’s not just judgment (is it right or wrong); it’s also originality: is this a new opinion for EAs to think about? Is this a topic which EAs haven’t really engaged with? It’s hard for a new person to make these calls.
Karma is a reasonably good indicator of meaningful engagement with the EA forum—as good as any other that can be quickly and fairly calculated.
I would add one caveat: to use a more powerful supervote, a person should be required to add a comment. From personal experience, I am very happy to have dissenting opinions and arguments against my posts, but it’s frustrating to get downvotes without any explanation.
Great post!
As a senior professional who went through the hiring process for EA groups, but also as a senior professional who has hired people (and hires people) both for traditional (profit-driven) organisations and for impact/mission-driven organisations, my only comment would be that this is great advice for any role.
As hiring managers, we love people who are passionate and curious, and it just feels weird for someone to claim to be passionate about something but not have read up about it or followed what’s happening in their field.
In terms of the job-search within EA, the only detail I would add is that there are a huge number of really nice, friendly, supportive people who give great feedback if you ask. One of my first interviewers did a 1-hour interview, after which he (rightly) did not continue the process. He explained very clearly why and what skills I was missing. He also set up an additional call where he talked through how my skill-set might be most valuable within an impactful role, and some ideas. He gave me lots of connections to people he knew. And so on. And he offered to help if I needed help.
Within EA, this is the norm. People really respect it when someone more senior wants to help make the world a bit better, and they want to help them in turn.
Thank you for writing this John!
I’m not sure this post (from GiveWell, who are great and are doing the best they can in a bad situation) is the right place.
I also agree with other commenters that many EAs do engage with political topics and with policy makers, and that some of the most impactful examples of EA work have been where we have succeeded in changing laws (for example, about lead pollution).
I also accept that many EAs (myself included) tend to engage in politics separately from EA activities, and maybe see the two as complementary activities.
So it’s not about engaging in politics—EAs do that—but about engaging in large-scale politics, especially at critical moments like now.
But I find a massive disconnect when a group claims to be looking to do the most effective things possible, when obviously by far the most effective thing to do right now is to prevent President Trump from destroying the world, and yet any suggestion that EAs get involved in that gets shot down. I made a post on this theme that has MINUS 29 Karma. My point was just that we need to put energy into stopping President Trump from destroying the world. Nobody explained what they had against it; they just voted it down.
I think there is an important distinction here. I don’t think this is about EAs becoming associated with one political party (in the US or elsewhere). That would just put people off.
But the follow-up question would be how to get involved.
Because right now, absurdly, EA does not have a high reputation with the general public. Recently, in an article on AI Safety about the AI-2027 paper that you may have heard about, the NY Times had the following quote: “Mr. Kokotajlo and Mr. Lifland both have ties to Effective Altruism, another philosophical movement popular among tech workers that has been making dire warnings about A.I. for years.” The clear implication was that this somehow gave their opinions less credibility, as if EA were some sort of cult rather than a group of people who think clearly and rationally.
In a better society, EAs would be an important influence group, just like doctors, scientists, economists, or whatever. People would say “this action is strongly opposed by EAs” as a strong argument against something. Right now, we are not there. If the EA community were to come out officially calling President Trump a threat to democracy, this would probably be seized upon by the right-wing media as proof that he was doing a great job and annoying all the right people.
[My second most downvoted post was one where I dared to suggest that EAs should do more to stand up for ourselves when we are ridiculed in the press… Unfortunately, we live in a world where, much as we may not like it, image matters, and if we let others treat us like a small, weird minority, then when important moments like AGI or Trump come along, we don’t have as much influence as we should have with the general public.]
So, basically, I love your post, I think I fully feel how you feel—but I’m also not sure what exactly we should do. Maybe EA’s engaging as individuals to stop Trump, encouraging all their friends to do the same is the best we can hope for.
I’m curious to know if you have tangible suggestions of what the EA community can and should do.
Nice post and I fully agree.
Unfortunately it all goes back to inadequate math education and effective disinformation campaigns. Whether it was tobacco or climate change, those who opposed change and regulation have always focused on uncertainty as a reason not to act, or to delay. And they have succeeded in convincing the vast majority of the public. The mentality is: “even the scientists don’t agree on whether we’ll have a global catastrophe or total human extinction—so until we’re sure which one it is, let’s just keep using fossil fuels and pumping out carbon dioxide.”
With AI, I liken most of humanity’s mentality to that of a lazy father watching a football game who needs a soda. And there is a store just across a busy highway from his house. He could go get the soda, but he might miss an important score. So instead he sends his 7-year-old son to the store. Because, realistically, there’s a good chance that his son won’t get hit by a car, while if he goes himself, it is certain that he’ll miss a part of the game.
No parent would think like that. But when it comes to AI, that’s how we think.
And timelines are just the nth excuse to keep thinking that way. “We don’t need to act yet, it mightn’t happen for 5 years—some people say even 10 years.”
The challenge for us is to somehow wake people up before it’s too late, despite the fact that the people who are in the best position to pause are the most gung-ho of all, whether they are CEOs or the US president, because they personally have everything to gain from accelerating AI, even if it ends up screwing everyone else (and let’s be realistic, they don’t really care about anyone else).
I wrote this post one month ago, it received minus 29 votes and 6 x’s.
Do people still feel the same way? Or are you now realising that this man is trying to turn the US into his own personal Russia? That there is a model for this that he is following—look at Turkey or Poland or Hungary or Slovakia or Brazil or Argentina or Venezuela. All slightly different, but similar in the way that an apparently stable, mature democracy was hijacked by a populist movement and eventually became an authoritarian state where the constitution and the rule of law were gradually replaced by the whims of one individual.
I spent some time in Venezuela when Chavez was in power, and it is scarily similar to the US right now. At the time, it was early in Chavez’s rule, the economy was still working, the country was rich although with a lot of terrible poverty and many people, even educated people, supported Chavez’s vision of a more equal society. But now the country has been destroyed.
I read a wonderful novel, Europe Central, by William Vollmann, which describes what it was like to live under Stalin. So much parallels what’s happening in the US today, from punishing people for expressing the “wrong” opinions to, for example, the way Stalin personally decided whether Shostakovich’s latest works were acceptable—just like the way Trump is taking over the Kennedy Center.
And this is happening to the most powerful country in the world, the country that used to be the good guys in a world where Russia and China support so much that is bad.
To me this is utterly terrifying. And I’m not sure why EAs don’t see this as a problem. Is it that EAs are secretly libertarians who actually think that some of what Trump is doing is good?
Or is it that we would rather focus on narrow problems that seem more tractable, and leave the global political problems to others?
Are those of us in Europe missing something?
Could anyone enlighten me?
It would be amazing if some of the people who downvoted this and/or disagreed with it could provide some perspective on why.
Specifically: do you genuinely believe that stopping Trump’s destruction of so much that is good and altruistic and necessary in the world is not an important and worthy objective? Or do you not believe that EAs should get involved in the dirty world of politics?
[NickLaing’s comment is great, but it was based on a previous version that I had updated even before I saw his comment.]
Hi Nick,
I fully agree with you. In fact, after I re-read the post, I realised I urgently needed to edit it. I had intended the idea of actual assassination to be provocative, but instead it read as if I was actively proposing it.
What I’m hoping for is, indeed, non-violent options, protests, etc.
What I’m objecting to, though, is him feeling he can break laws and accepted conventions at will, while everyone else blindly follows them to enable him. For example, this is the moment when the EU could take a strong, moral stance. We could propose, in the short term, to literally replace the US—fund USAID, pay the workers, etc.—which could be both helpful for those who need help and a really powerful rebuke of Trump. But we could also just refuse to treat him seriously.
For example, I’m Irish. On March 17th, St Patrick’s Day, traditionally Irish leaders visit the US president and give him some shamrock. Many Irish people want us to skip the visit this year, and to instead make a very public point about wanting nothing to do with Mr. Trump—while still having massive respect for all the great things the US stands for. But it looks like it will go ahead as normal, he’ll get a nice photo-op, and everything will seem normal.
It’s not normal. We shouldn’t normalise it.
But I totally agree with you, assassination is not the literal answer. Hopefully you are one of the few people who read it before I edited it :D
Cheers
Denis
Hi Alex,
Thanks for writing this wonderful post. I’ve been following and supporting GFI for a while, and I actually looked at working in Alternative Protein (I’m a PhD chemical engineer and spent most of my career doing scale-up research) - but it is surprisingly hard to get into, and so I ended up working with a pretty amazing direct air capture start-up.
Alternative Protein has so much potential to be a win on every front for the world—climate, land use, water, nutrition, animal suffering, preventing famines—it’s a total no-brainer … except to the lobbies who want to preserve the status quo. It is shocking that we don’t spend 100x what we currently spend on bringing this technology to market.
Over the past 3 years, while I haven’t been working on alternative protein, I have been learning so much (not intentionally!) that may be relevant to the challenges you describe. I won’t try to capture it all here, but would be happy to talk to one of your scale-up team members.
Let me briefly explain what I’ve learned:
Aggressive scaling is possible, and you can get it funded. DAC is an even less attractive market than Alternative Protein in many ways, but there is a way to get investors and regulators on board. It’s not trivial, though. It requires going beyond the business-as-usual approach and focusing on scaling. Basically, one company says: “We’ll let all those other people figure out the details—we’re going to scale, fast, and we’re going to be ready to use the best technology that the others develop.” In other words, instead of waiting until you’re “ready” to scale, you scale in parallel with the technology’s growth. This can be compelling for investors, because you have a tangible timeline within which you plan to be profitable. (Yes, I know this is massively over-simplified, but I can share real examples of where this strategy has worked and how.)
There is EU funding at the right scale. One of my side-roles while I was between jobs was as an “expert” reviewer for EU Horizon projects. They have “flagship projects” which get up to ~20 million euros of funding—these are designed to get the first full-scale production plant built for technologies that struggle to scale. I reviewed proposals in a different area, but I’m sure that alternative protein projects could qualify. Writing these proposals is hard work and very tedious, but it can be the breakthrough that is needed.
Legislation is a vital part of the battle. The recent farcical ruling in the EU that products cannot be given meat names if they’re not meat is an example of what can go wrong. I live in Brussels, and while I’m not really in the policy/lobbying network, I see people who are in the climate space, and it is very powerful: less in the sense that you can influence major policy decisions, more in the sense that you can influence which initiatives a quite junior Commission officer might decide to support with the 100 million euros they have to invest in some particular objective. Do you have people on the ground, in PLux, chatting to people about how alternative protein is a great way to help the climate, to reduce animal suffering, to provide food security for the EU, …?
I think Rutger Bregman is a big supporter of Alternative Protein. Certainly one of the co-leaders of his program is. It would be interesting to see if there’s a way that he would consider Alternative Protein as a topic for the next generation of the School of Moral Ambition. This would give a big injection of resources and support in the non-technical aspects, like legislation and funding. If you have a tangible proposal of what this might look like, I know my contact in his org would get it to him.
Many scale-up projects fail at the zeroth step (a lesson from my long industrial career) because they have not clearly defined the one (or two, or three) technical obstacles that, if solved, would enable scale-up. This is also a reason that Horizon applications fail. You need an absolutely ruthless analysis of all the assumptions you’re making, and a “devil’s advocate” review, before you can say “if we could solve this, we could scale this technology.” But once you get it down to the point where you need just one or two innovations, it starts to become more interesting to investors and research funders.
I wish I had time to follow and deeply understand the technology behind alternative protein—I followed a few lectures and read some articles, but I’m not a biochemical engineer, and so I don’t pretend to have the necessary technical mastery. But there are already lots of amazing scientists and engineers working on the technical challenges. If you think it’d be useful to chat to someone from a more hard-nosed scale-up perspective, let me know.