I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped lead Georgetown Effective Altruism. Over the last few years, I’ve interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.
Blog: aaronbergman.net
New interview with Will MacAskill by @MHR🔸
Almost a year after the 2024 holiday season Twitter fundraiser, we managed to score a very exciting “Mystery EA Guest” to interview: Will MacAskill himself.
@MHR🔸 was the very talented interviewer and shrimptastic fashion icon
Thanks to @AbsurdlyMax🔹 for help behind the scenes
And of course huge thanks to Will for agreeing to do this
Summary, highlights, and transcript below the video!
Summary and Highlights
(summary AI-generated)
Effective Altruism has changed significantly since its inception. With the arrival of “mega donors” and major institutional changes, does individual effective giving still matter in 2025?
Will MacAskill—co-founder of the Centre for Effective Altruism and Giving What We Can, and currently a senior research fellow at the Forethought Institute—says the answer is a resounding yes. In fact, he argues that despite the resources currently available, individuals are “systematically not ambitious enough” relative to the scale of the problems the world faces.
In this special interview for the 2025 EA Twitter Fundraiser, Will joins host Matt to talk about the evolution of the movement’s focus. They discuss why animal welfare, specifically the fight against factory farming, has risen in prominence relative to global health, and why Will believes those working on it are “on the right side of history.”
Will also shares updates from his current work at the Forethought Institute, where he is moving beyond standard AI safety concerns to focus on “model character”—the idea that as AI agents become more autonomous, their embedded ethics and personality will determine how our economy and society function.
Matt and Will discuss:
Why “mega donors” haven’t made individual giving obsolete
The “founder effect” that initially prioritized global health over animal welfare
The funniest moment from the What We Owe the Future media tour (involving Tyler Cowen)
Why Forethought is focused on the moral status of digital beings
Will’s call for the community to embrace an “impartial, altruistic scout mindset” on social media
Highlights
On earning to give:
On his most exciting current work at Forethought:
On what he’d tell EA Twitter:
Transcript
MATT
Hello everyone and welcome. I’m Matt from the 2025 EA Twitter Fundraiser. I’m here with Will MacAskill himself, who likely needs no introduction for folks watching this. But in case anyone does, Will is a senior research fellow at the Forethought Institute and former associate professor at Oxford University. He co-founded Giving What We Can and the Centre for Effective Altruism. He is the author of numerous books and articles, including Doing Good Better, What We Owe the Future, and my personal favorite, An Introduction to Utilitarianism. Thank you for joining us, Will.
WILL
Thanks for the interview.
MATT
We’re doing this in service of promoting effective giving. To start us off, we were curious to ask: What do you think the role of effective giving should be in the EA community in 2025? And maybe in particular, should people be more ambitious with trying to earn to give?
WILL
Yes. I think of effective giving as a core part of Effective Altruism. It was the start of it all, and it continues to be important. It continues to be important even if you see “mega donors” coming on the scene, because there are often a lot of things that individuals can fund that the mega donors can’t. And even taking the mega donors into account, there are still enormous problems in the world that they are not going to be able to solve.
Then, should people be more ambitious? I genuinely think yes. I think people systematically aren’t ambitious enough, so the answer is almost always yes. Again, the ambition you have should match the scale of the problems that we’re facing—and the scale of those problems is very large indeed.
MATT
Absolutely. One of those big problems—and the one that we’re raising money for in this year’s online fundraiser—is specifically farmed animal welfare. That’s something that has been really popular with highly engaged EAs recently. If you look at, for example, the donation election on the Forum, you’ll see there’s really a lot of enthusiasm around that. However, it is something that has historically played a smaller role in public EA messaging. Do you think that’s a good balance, or is that something that you would like to see change?
WILL
I definitely think that there was never some grand plan of reducing the focus on animal welfare. I and some others really got in via global health and development, so I think that really set the stage—there is a bit of a founder effect there. And then I think it’s also just the case that maybe there is naturally wider reach because more people are concerned [about human-centric causes].
But yes, I’d be down for that to change. The problem of factory farming is just so stark and so unnecessary. The arguments for [addressing] it are so good.
I remember when I became vegetarian twenty years ago (I’m getting old now), I got a lot of pushback. It was quite a weird thing to do. People thought it was “holier than thou,” like you were morally grandstanding or judging others. That has changed. Now, when it comes to people who are vegetarian or vegan and really care about animal welfare, it’s just clear that these are the good people. They are on the right side of history.
The amount of good that you can do is just absolutely enormous; the sheer amount you can affect is huge. So, yes, I’d love to see that rise in prominence.
MATT
Very cool. When I was doing some research for this, I was looking back at old versions of the 80,000 Hours and Giving What We Can websites on the Wayback Machine. I was surprised that the first-ever archive of the 80,000 Hours page has farmed animal welfare as one of the issues on there. So, you mentioned the founder effect, but I was pleasantly surprised to see just how early this was getting into EA messaging.
WILL
We all cared. I was vegetarian. Little-known fact: I and others at 80,000 Hours helped to set up Animal Charity Evaluators.
MATT
Yeah, I remember reading that at some point.
WILL
That was because we felt it was a gap at that point—that was in 2012 or something.
MATT
Well, one of the ways that we are trying to promote the fundraiser is leaning into EA memes. That’s certainly something that’s always been fun about the online EA community. Do you have a favorite EA meme that you’ve run across?
WILL
I feel I have many favorites. One is the April Fools’ post renaming Giving What We Can to “Naming What We Can.”
I also feel it’s a shame that Dustin Moskovitz is no longer memeing quite as hard as he used to. But something he did: there was a big Reddit thread where people were complaining about Wytham Abbey, and a debate in the comments about who had bought it, who owned it. One person was saying it was Will MacAskill, and another was saying it was Owen Cotton-Barratt, both of which are incorrect. And then someone came in and said, “What if it’s neither of those people, but some mystery third person?”
That commenter, many replies deep into the thread, was Dustin Moskovitz himself, who had actually bought it. So, I salute him. Maybe not a meme per se, but part of the same culture.
MATT
Maybe if we lean in, we can get him to participate a little bit more again.
WILL
Maybe.
MATT
You did quite the media blitz for What We Owe the Future. What’s the funniest or most unexpected story from that whirlwind tour?
WILL
The thing that leaps out the most is when I was being interviewed by Tyler Cowen. Podcasts normally start with a bit of a chat, a bit of a warm-up. Tyler just goes straight in: “Okay, so what is something you like that’s ineffective?”
I just couldn’t think of an answer. I went completely blank. This got edited out of the podcast, but there were minutes of me just being like… [silence]. It kind of threw me off for the whole episode, really. It’s not that I only like maximally efficient things, to be clear!
MATT
I sometimes listen to Conversations with Tyler, and yes, he sometimes has that energy where you really feel like he should let the guest take a breather.
WILL
That was right at the start.
MATT
Well, Will, I know you’ve been doing some really exciting work at Forethought these days. Do you want to tell us a little bit about the work that’s going on there and maybe what you’re most excited about?
WILL
Sure. Forethought in general is trying to do research that is as useful as possible in helping the world as a whole navigate the transition to extremely advanced AI: superintelligence. Distinctively, we tend not to focus on the risk of loss of control to AI systems themselves. Our work often touches on or helps with that, and it’s a big risk (we’re glad that other people are working on it), but I personally don’t think it makes up the vast majority of the issues we face.
We are often looking at neglected issues other than that. That can include:
The risk of intense concentration of power among humans.
The idea that we should be taking seriously the moral status of digital beings themselves and what kind of rights they have.
A number of other issues, like how AI will interact with society’s ability to reason well and make good decisions.
Maybe the thing I’ll emphasize most at the moment, because I’m very excited about it from an impact perspective but I think people don’t appreciate it quite enough, is the idea of “model character” or AI personality.
OpenAI now has its “Model Spec,” and Claude’s “soul” was leaked. Basically, these companies are having to make decisions about how these AIs behave. That already affects hundreds of millions of people every day. And in the future, it will determine how the most active economic agents in the world act.
There’s a real spectrum you can imagine between an AI that is maximally tool-like, just an obedient servant, and an AI that actually has its own sense of ethics and its own conscience. Once you’ve got that, take every single hairy ethical issue or important circumstance the AIs will face; we really should be thinking in advance about how we want them to behave. I’d be keen for a lot more thought to go into that.
MATT
Absolutely. That seems incredibly important. All right, well to close, is there anything else you want to tell the people of EA Twitter?
WILL
The main thing is just that I have been loving what I’ve been seeing as a resurgence in EA activity online—on Substack and Twitter in particular. If you think I haven’t been following along with popcorn on the Andy Masley data center water usage story, you would be wrong.
Honestly, I just love seeing more people really coming out to bat for EA ideas, demonstrating EA ideas on all sorts of topics—whether that’s data center usage or the utter monstrosity of current shrimp factory farming. So I would just love for more people to feel empowered to be shouting about the giving they’re doing, the causes that they’re working on, and demonstrating the impartial, altruistic scout mindset that I think is EA at its best.
MATT
Awesome. Well, you heard it here first, folks: post more.
WILL
And thanks for the fundraiser; thanks to everybody who has contributed. It’s very, very cool to see this.
MATT
Yes, we’re really grateful to everyone who has donated so far and everyone who will donate going forward, because the Animal Welfare Fund is awesome and supporting effective animal interventions is a really, really fantastic cause.
All right. Well, thank you, Will, for joining us. It’s been a pleasure speaking with you.
WILL
It’s been a joy. Cool. Merry Christmas, Happy Holidays.
MATT
Merry Christmas.