In this fireside chat from EA Global 2018: San Francisco, Will MacAskill asks Holden Karnofsky of the Open Philanthropy Project about a wide range of topics. The conversation covers Open Phil’s strategy and current focuses, cause prioritization, Holden’s work habits and schedule, and early wins from Open Phil’s first couple years of grantmaking.
A transcript of their discussion is below, which we have lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.
The Talk
Will: To warm us up, give us an overview of what Open Phil has been up to in the last 12 months, and what your plans are for the following year.
Holden: Open Philanthropy is a grant maker; that’s our main activity. Right now, I would say we’re in an intermediate stage of our development. We’ve been giving away a bit over 100 million dollars a year for the last couple of years. We do want to grow that number at some point, but we believe that right now we should focus on strengthening the organization, our intellectual frameworks, our approach, and our operations. We want to get used to operating at this scale—which is big, for a grant maker—before going bigger.
So, it’s a several year transition period. Last year, in addition to grant making, we did a lot of work to strengthen our operations, and to get more clarity on cause prioritization. How much money is eventually going to go into each of the different causes that we work in? How much should go into criminal justice reform; how much should go to support GiveWell’s top charities, which we try to weigh against the other things; how much should go to AI risk, biosecurity, etc.?
That’s been the focus of the last year or so. Going forward, it’s kind of the same. We’re very focused on hiring right now. We just had a director of operations start, and we’ve been hiring pretty fast on the operations side. We’re trying to build a robust organization that is ready to make a ton of grants, do it efficiently and do it with a good experience for the grantees.
And then, the other hiring we’re doing is a push for research analysts. These are going to be the people who help us choose between causes, help us answer esoteric questions that most foundations don’t analyze. Like how much money should go to each cause, and what causes we should work on. We expect our research analysts to eventually become core contributors to the organization. So, a major endeavor this year has been gearing up to make our best guess at who we should be hiring for that. It’s really a capacity-building, hiring time and also a time when we’re really intense about figuring out the question of how much money should go to each cause.
Will: Fantastic. One thing you mentioned is that something you’ve been working on is how to divide the money across these very different cause areas. What progress do you feel like you’ve made on that over the last year?
Holden: First, I want to give a little background on the question, because it’s kind of a weird one and it’s one that often doesn’t come up in a philanthropic setting. We work in a bunch of very different causes. So, like I said, we work on criminal justice reform, farm animal welfare, and on global health, to name a few. We have to decide how much money goes into each cause. So one way that you might try to decide this is you might say well, what are we trying to do and how much of it are we doing for every dollar that we spend?
So you might say that we’re trying to prevent premature deaths, and ask: “How many premature deaths are we preventing for every dollar we spend?” Or you might try to come up with a more inclusive, universal metric of good accomplished. There are different ways to do that. One way is to value different things according to one scale. And so you could use a framework similar to the QALY framework, where you say that averting a case of blindness is half as good as saving a life, for example.
And so, you could put every intervention on one scale and then say, how many units of good, so to speak, are we accomplishing for each dollar we spend? And then you would just divide up the money so that you get the maximum overall. This might look like putting money into your best cause, and at a certain point, when it’s no longer your best cause because you’re reaching diminishing returns, you put money into another cause, and so on.
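A minimal sketch of that allocation logic, using invented numbers rather than Open Phil’s actual estimates: each cause has diminishing marginal returns, and each marginal dollar goes to whichever cause currently buys the most units of good.

```python
# Toy "divide up the money for maximum overall good" allocation.
# All figures are illustrative placeholders, not Open Phil estimates.

def marginal_good(base, saturation, spent):
    """Units of good per extra dollar, shrinking as a cause absorbs more money."""
    return base / (1 + spent / saturation)

# cause -> (units of good per first dollar, spending level at which returns roughly halve)
causes = {
    "cause_a": (10.0, 50e6),
    "cause_b": (6.0, 200e6),
    "cause_c": (3.0, 500e6),
}

budget, step = 300e6, 1e6
spent = {name: 0.0 for name in causes}

for _ in range(int(budget / step)):
    # Give the next million to whichever cause currently buys the most good at the margin.
    best = max(causes, key=lambda name: marginal_good(*causes[name], spent[name]))
    spent[best] += step

for name, amount in spent.items():
    print(f"{name}: ${amount / 1e6:.0f}M")
```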
There’s what I consider a problem with this approach, although not everyone would consider it a problem. What I consider a problem is that we’ve run into two mind-bending fundamental questions that seem very hard to get away from. These questions are very hard to answer, and they have a really huge impact on how we give. One of them is: How do you value animals versus humans? For example, how do you value chickens versus humans?
On one hand, to simplify, say we’re deciding between GiveWell top charities, which try to help humans with global health interventions (bed nets, cash transfers, things like that), and animal advocacy groups that try to push for better treatment of animals on factory farms. Without going too much into what the numbers look like, if you decide that you value a chicken’s experience 1% as much as a human’s, you get the result that you should only work on chickens. All the money should go into farm animal welfare.
On the other hand, let’s say that you go from 1% to 0.001%, or 0%. In that case, you should just put all the money toward helping humans. There’s one extremely important parameter, and we don’t know what its value should be. When you move to one number, it says you should put all the money over here. And when you move to another number, it says you should put all the money over there. The even trickier version of this question is when we talk about preventing global catastrophic risks or existential risks.
When we talk about our work on things like AI risk, biosecurity, or climate change, where the goal is not to help some specific set of persons or chickens, but rather to hopefully do something that will be positive for all future generations, the question is: How many future generations are there? If you prevent some kind of existential risk, did you just do the equivalent of preventing 7 billion premature deaths—which is about the population of the world—or did you just do the equivalent of preventing a trillion, trillion, trillion, trillion, trillion untimely deaths? It depends how many people there are in the future. It’s very hard to pick the right number for that. And whichever number you do pick, it ends up taking over the whole model.
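To make the sensitivity concrete, here is a toy illustration of how each of these single parameters can dominate the comparison; every per-dollar figure below is an invented placeholder, not an Open Phil estimate.

```python
# 1) The chicken moral-weight parameter flips where the marginal dollar goes.
human_good_per_dollar = 1.0          # human-equivalent benefit per dollar on global health (placeholder)
chickens_helped_per_dollar = 200.0   # chickens helped per dollar on corporate campaigns (placeholder)

for weight in [0.01, 0.001, 0.00001]:   # value of a chicken's experience relative to a human's
    animal_good = chickens_helped_per_dollar * weight
    winner = "farm animal welfare" if animal_good > human_good_per_dollar else "global health"
    print(f"chicken moral weight {weight}: marginal dollars go to {winner}")

# 2) The same structure applies to how many future lives are at stake in an extinction.
dollars_per_point_of_risk_reduced = 1e13   # placeholder: $ per 1% absolute reduction in extinction risk
for lives_at_stake in [7e9, 1e36]:         # present population vs. "a trillion, trillion, trillion"
    expected_lives_per_dollar = 0.01 * lives_at_stake / dollars_per_point_of_risk_reduced
    print(f"{lives_at_stake:.0e} lives at stake: {expected_lives_per_dollar:.1e} expected lives saved per dollar")
```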
Based on the uncertainty around these two questions, we have determined that there are a bunch of reasons we don’t want to go all in on one cause. That’s something we’ve written up on our blog. Among other problems, if you go all in on one cause, you get all the idiosyncrasies of that cause. If you change your mind later, you might miss a lot of opportunities to do good. It could also become very hard to be broadly appealing to donors, if your whole work is premised on one weird assumption—or one weird number—that could have been something different.
We don’t want to be all in on one cause. So at the end of the last year, we determined that we want to have various buckets of capital, each of which is based on different assumptions. And so we might have one bucket of capital that says, preventing an existential risk is worth a trillion, trillion, trillion premature deaths averted; we value every grant by how it affects the long run trajectory of the world. Another bucket might say, we’re just going to look for things that affect people alive today, or that have impact we can see in our lifetimes. And then you have another bucket that takes chickens very seriously and another one that doesn’t. Then you have to determine the relative sizes of each bucket.
So, we have a multiple stage process where we first say there’s x dollars, how much is going to be in what we call the animal-inclusive bucket versus the human-centric bucket? How much is going to be in the long-termist bucket versus the near-termist bucket? And then within each individual bucket, we use our target metrics more normally and decide how much to give to, for example, AI risk, biosecurity, and climate change.
A benefit of this approach is that we’re attacking the problems at two parallel levels. One of them is the abstract: we have x dollars, so how much do we want to put in the animal-inclusive bucket versus the human-centric bucket? We could start with 50-50 as a prior and then say, well, we actually take this one bucket more seriously, so we’ll put more in there. We could make other adjustments, too.
Apart from the abstract layer, we’ve also started moving toward addressing cause prioritization tangibly, where we create a table that says for each set of dollars we can spend, we’ll get this many chickens helped and this many humans helped and this many points of reduction in existential risk. And so, under different assumptions, a given intervention might look excellent by one metric, okay by another, and really bad by a third. So just by looking at the table, we can understand the trade-offs we’re making.
And so a lot of the work we’re doing now, and a lot of the work that I think new hires will do, is filling out that table. A lot of it is really guesswork, but if we can understand roughly what we’re buying with different approaches, then hopefully we can make a decision from a better standpoint of reflective equilibrium.
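A sketch of the kind of table being described, with entirely made-up numbers, just to show the structure: each funding option gets scored on every metric, so the trade-offs across buckets are visible side by side.

```python
# Hypothetical trade-off table: what an extra $10M might buy under each metric.
# Every number here is invented for illustration.
options = {
    "global health (GiveWell-style)": {"human lives saved": 2500, "chicken-years improved": 0,   "x-risk reduced (basis points)": 0.0},
    "corporate animal campaigns":     {"human lives saved": 0,    "chicken-years improved": 3e7, "x-risk reduced (basis points)": 0.0},
    "biosecurity platform tech":      {"human lives saved": 50,   "chicken-years improved": 0,   "x-risk reduced (basis points)": 0.05},
}

metrics = list(next(iter(options.values())))
print(f"{'option':<32}" + "".join(f"{m:>30}" for m in metrics))
for name, row in options.items():
    print(f"{name:<32}" + "".join(f"{row[m]:>30,.2f}" for m in metrics))
```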
Will: As part of that, you’ve got to have this answer to the question of your last dollar of funding. How good is that last dollar? And how do you think about that, given this framework?
Holden: The last dollar question is very central to our work, and closely related to our dilemma as a grantmaker. When someone sends in a grant proposal, we have to decide whether to make it or not. In some sense, that question reduces to the question of whether it’s better to make the grant or save the money. And in turn, that question reduces to whether the grant will be higher impact than our last dollar spent.
For a long time, we had this last dollar concept that was based on GiveDirectly. GiveDirectly is a charity that gives out cash. For every $100 you give, they try to get $90 to someone in a globally low income household. If a grant’s impact is lower than giving to GiveDirectly, then we probably shouldn’t make it, since GiveDirectly has essentially infinite demand for money, and we could just give the money to them instead. If, however, the grant looks higher-impact than GiveDirectly, then maybe we should consider funding it.
Recently, we’ve refined our thinking, and refined the question of our last dollar. GiveDirectly is a near-termist, human-centric kind of charity, so it’s a good point of comparison for that bucket. For other buckets, though, our last dollar could look very different. If instead of counting people we’re helping, we’re counting points of reduction in global catastrophic risks, then what does our last dollar look like? It probably doesn’t look like GiveDirectly, because there are probably better ways to accomplish that long-termist goal.
We spent a bunch of time over the last couple of months trying to answer that question. What is the GiveDirectly of long-termism? What is the thing we can just spend unlimited money on that does as well as we can for increasing the odds of a bright future for the world? We tried a whole bunch of different things; we looked into different possibilities. Right now, we’ve landed on this idea of platform technology for the rapid development of medical countermeasures.
The idea is you would invest heavily in research and development and you would hope that what you get out from a massive investment, is the ability to more quickly develop a vaccine or treatment in case there’s ever a disastrous pandemic. Then you can estimate the risk of a pandemic over different timeframes, how much this would help and how much you’re going to speed it up. And our estimate ended up being that we could probably spend over 10 billion dollars, in present value terms, on this kind of work.
We also estimated—this is really wild and it’s just a guess, since we’re trying to start with broad contours of things and then get more refined—something like the low, low price of 245 trillion dollars per extinction prevented. 245 trillion dollars per extinction event prevented actually comes out to a pretty good deal.
Will: That’s like the total wealth of the world, so…
Holden: Yeah, exactly. But there’s all this future wealth too, so it’s actually a good deal. And the funny thing is that if you just count people who are alive today and you look at the cost per death averted probabilistically, in expected terms, it’s actually pretty good. It’s kind of in the same ballpark as GiveWell’s top charities. Now, this is a very questionable analysis, because GiveWell’s cost effectiveness estimates are quite rigorous and well-researched, and this one emphatically is not.
So, you don’t want to go too far with that or say these numbers are the same, and our preliminary take is that we can probably do better than this with most of our spending. But it is interesting to see that that last dollar is not a bad deal, and that we can compare it against any other grants we’re making that have the long-term future of humanity as their target. We can say, are they better or worse than this medical countermeasure platform tech? Because we can spend as much money as we want on that.
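For a rough sense of the arithmetic being gestured at: dividing the estimated cost per prevented extinction by the present world population gives the implied cost per expected death averted, counting only people alive today.

```python
# Back-of-envelope check using the two rough figures quoted above.
cost_per_extinction_prevented = 245e12   # dollars (Holden's rough guess)
people_alive_today = 7e9                 # approximate world population

cost_per_expected_death_averted = cost_per_extinction_prevented / people_alive_today
print(f"${cost_per_expected_death_averted:,.0f} per expected death averted")  # roughly $35,000
```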
We don’t have a last dollar estimate yet for animal welfare. We do have a view that the current work is extremely cost effective; it’s under a dollar per animal’s life significantly improved in a way that’s similar to a death averted. But it’s more of a reducing suffering thing a lot of the time. And we do think we could expand the budget a lot before we saw diminishing returns there, so that’s a number we’re working with until we get a better last dollar.
Will: Are there some cause areas that you’re more personally excited about than others?
Holden: I get differently excited about different causes on different days and I’m super excited about everything we’re doing because it was all picked very carefully to be our best bet for doing the most good. As soon as a given course of action looks bad, we tend to close it down.
I generally am excited about everything. There’s different pros and cons to the different work. The farm animal work is really exciting because we’re seeing tangible wins and I think the criminal justice work also looks that way. It’s just really great to be starting to see that we made a grant, something happened and it led to someone having a better life. Often, that does feel like the most exciting.
On the other hand, over the last couple of years, I’ve gotten more and more excited about the long-termist work for a different reason, which is that I’ve started to really believe we could be living at a uniquely high leverage moment in history. To set the stage, I think people tend to walk around thinking well, the world economy grows at around 2% a year. In real terms, maybe 1 to 3%, and that’s how things are. That’s how things have been for a long time, and how they will be for a long time.
But really, that’s a weird way of thinking about things, because that rate of growth is, historically, highly anomalous. It’s been only 200 or 300 years that we’ve had that level of growth, which is only about 10 generations. It’s a tiny, tiny fraction of human history. Before that we had much slower growth. And then when we look to the future, for many, many reasons that I’m not going to get into now, we believe there are advanced technologies, such as highly capable AI, where you could either see that growth rate skyrocket and then maybe flatten out as we get radically better at doing science and developing new technologies, or you could see a global catastrophe.
A less happy way that our time is high leverage is that it could be possible, in the near future, to wipe ourselves out completely with nuclear weapons. There may be other ways of doing it too, like climate change and new kinds of pandemics with synthetic biology. 100 years ago, there was basically almost no reasonably likely way for humanity to go extinct. Now there are a few ways it could happen, which we want to make less likely. So there’s a lot of things, both good and bad, that look special about this time that we live in.
It’s kind of the highest upside, the highest downside, maybe that’s ever been seen before. One way of thinking about it is that humanity has been around for hundreds of thousands or maybe millions of years, depending how you count it. And humanity could still be around for billions of years more, but we might be in the middle of the most important hundred years.
And when I think about it that way, then I think boy, someone should really be keeping their eye on this. To a large degree, however, people aren’t. Even with climate change, which is better known than a lot of other risks, sufficient action is not being taken. Governments are not making it a priority to the extent that I think they reasonably should. So as people who have the freedom to spend money how we want and the ability to think about these things and act on them without having to worry about a profit motive or accountability to near-term outcomes, we’re in a really special position to do something and I think that it’s exciting. It’s also scary.
Will: Over the last year, what are some particular grants that you think people in the audience might not know as much about, that you are particularly excited about, that you think are going to be particularly good or important?
Holden: There are many grants I’m excited about. I’m guessing people know about things like our OpenAI grant, and our many grants to support animal welfare or corporate campaigning in the U.S., and abroad in India, Europe, Latin America, etc. I’ll skip over those, and name some others people might not have heard of.
I’m very excited about the AI Fellows Program. We recently announced a set of early career AI scientists who we are giving fellowships to. It was a very competitive process. These scientists are really, really exceptional. They’re some of the best AI researchers, flat out, for their career stage. They’re also interested in AI safety, not just in making AI more powerful but in making it something that’s going to have better outcomes, behave better, have fewer bugs, etc. They are interested in working on the alignment problem, and things like that.
We found a great combination of really excellent technical abilities, seriousness and an open-mindedness to some of the less mainstream parts of AI that we think are the most important, like the alignment problem. Our goal here is to help these fellows learn from the AI safety community and get up to speed, and become some of the best AI safety researchers out there. We also hope to make it common knowledge in AI that working on AI safety is a good career move. It’s exciting, as a foundation, to be engaging in field building and making work on AI safety a good career move.
Another exciting development is that our science team is working on a universal antiviral drug. We believe that a lot of viruses—maybe all of them—rely on particular proteins in your body to replicate themselves and to have their effects. So, if you can inhibit those proteins (and we already have some drugs that do inhibit them, which we use for cancer treatment), you might have a drug that, while you wouldn’t want to take it every time you got a virus, might work on every virus. That could make it a really excellent thing to have if some unexpectedly terrible pandemic comes, and an excellent thing to stockpile. So, that’s super cool.
Also, I’m often excited about grants we make that are all about speed. So there are a couple of science grants where there’s a technology that looks great, everyone’s excited about it and there’s not much to argue about. Gene drives to potentially eradicate malaria, for example. An experimental treatment for sepsis that could save a lot of lives is another. For these good, exciting things, we speed their development up.
I feel proud of us as an organization, and of our operations team, that we’re sometimes able to make a grant where we’re like, yeah, everyone knows this is good, but we’re the fastest. We can get the money out right away, and we can make these good things happen sooner, and happening sooner saves enough lives that it’s a good deal.
So, those are some of the things I’m particularly excited about.
Will: What do you wish was happening in the EA community that currently isn’t? That might be projects, organizations, career paths, lines of intellectual inquiry and so on.
Holden: I think the EA community is an exciting thing and a great source of interesting ideas. Members of the EA community have affected our thinking, and our cause prioritization, a lot. But I think right now, it’s a community that is very heavy on people who are philosopher types, economist types and computer programmer types, all of which somewhat describe me.
We have a lot of those types of people, and I think they do a lot of good. But I would like to see a broader variety of different types of people in the community. Because I think there are people who are more intuitive thinkers who wouldn’t want to sit around at a party debating whether the parameter for the moral weight of chickens is 1% or 0.01%, and what anthropics might have to do with that.
They might not be interested in that, but they still have serious potential as effective altruists because they’re able to say, “I would like to help people and I would like to do it as well as I can.” And so they might be able to say things like, “Boy there’s an issue in the world. Maybe it’s animal welfare, maybe it’s AI, or maybe it’s something else. It’s so important and no one’s working on it. I think there’s something I can do about it, so I want to work on that.”
You don’t need to engage in hours of philosophy to get there. And I think a lot of people who are more intuitive thinkers, and who may be less interested in philosophical underpinnings, have a lot to offer us as a community, and can accomplish a lot of things that the philosopher/programmer/economist type is not always as strong in. More diversity is something I would love to see. I would love it if the effective altruism community could find a way to attract a broader variety of people.
Will: You’re managing a lot of money, and there’s a lot at stake. How do you keep yourself sane? Do you overwork? How do you balance working with time off?
Holden: I think we’re working on a lot of exciting stuff, and I certainly know people who, when they’re working on exciting stuff, work all the time and tend to burn out. I’ve been pretty intense about not doing that. I co-founded GiveWell 11 years ago and have been working continuously since then, on GiveWell and now Open Philanthropy. Maybe at the beginning it felt like I was sprinting, but right now it really, really feels like a marathon. I try to treat it that way.
I’m very attentive to whether I’m starting to feel a lack of motivation. When that happens, I just take a break. I do a lot of really stupid things with my time that put me in a better mood. I don’t feel bad about it and-
Will: Name some stupid things, go on.
Holden: Like going to weird conferences where I don’t know anyone or have anything to contribute and talking there. Playing video games. My wife and I like to have stuffed animals with very bizarre personalities and act them out. So now you know some things.
And I don’t feel bad about it. Something that I’ve done, actually for a long time now, I think starting two years into GiveWell, is that I’ve tracked my focused hours, my meaningful hours. And at a certain point, I just took an average over the last six months and I was like, that’s my target, that’s the average. When I hit the average, I’m done for the week, unless it’s a special week and I really need to work more that week.
This approach is sustainable, and I think putting in more than the average wouldn’t be. Also, it puts me in a mindset where I know how many hours I have, and I have to make the most of them. I think there are people who say, “If I can just work hard enough, I’ll get everything done.” But I don’t think that works for me, and I think it is often a bad idea to think that way in general. One way that I think about it is that the most you can increase your output from working harder is often around 25%.
If you want to increase your output by 5x, or by 10x, you can’t just work harder. You need to get better at skipping things, deciding what not to do, deciding what shortcuts to take. And also you need to get good at hiring, managing, deciding who should be doing what, deciding it shouldn’t be you doing a lot of things. I feel like my productivity has gone up by a very large amount and there is a lot of variance. Like when I make bad decisions, I might only get a tenth as much done in a month. But working more hours doesn’t fix that. So even though I think we’re doing a lot of exciting stuff, I do take it easy in some sense.
Will: Over the last decade, what do you think are some of the biggest mistakes you’ve made, or things you wish you’d done differently?
Holden: I’ve got lots of fun mistakes. But you said last decade, which rules out a lot of the fun stuff.
I will say a couple of things. One thing is that I think early on, and still sometimes, we’ve communicated in a careless way. Especially early on, our view was that more attention was better, that we really needed to get people paying attention to us. And the problem is that a lot of the things we said, they’ll never go away: the internet never forgets.
And with people who may have been turned off by our early communications, we’ll never get a second chance to make that first impression. When I look back at it, I think it really wasn’t that important to get so much attention. I think over the long run, if we had just been quiet and said something when we really had something to say, and said it carefully, maybe things would have gone better. To the extent that we’ve succeeded, it’s been by having research that we can explain and that resonates with people. I don’t know that we had to seek attention much beyond that.
Another mistake that I look back on, I think I was too slow to get excited about the Effective Altruism Community. When we were starting off, I knew that we were working on something that most people didn’t seem to care about. I knew that we were asking the question, how do we do the most good that we can with a certain amount of resources? And I knew that there were other people asking a similar question, but we were speaking very different languages from each other. And so it was hard for me to really see that those people were asking the same question that I was.
I think we were a bit dismissive. I wouldn’t say totally dismissive, since we talked to the proto-effective-altruists before it was called effective altruism. But it didn’t occur to me that if I was missing important insights about my work, the most likely way to find them was to find people who had the same goal as me, but different perspectives.
And the fact that the Effective Altruism Community speaks a different language from me and a lot of their stuff sounds loopy, I mean, that’s just good. That means that there’s going to be at least some degree of different perspective between us. I think we’ve profited a lot from engaging with the EA community. We’ve learned a lot and we could have done it earlier, so I think waiting was a mistake.
And then the final thing, here’s a class of mistakes that I won’t go into detail on. In general, I feel like the decisions these days that I’m most nervous about are hiring and recruiting, because most of the things we do at Open Phil, we’ve figured out how to do them in an incremental way. We do something, we see how it goes, and no single step is ever that disastrous or that epic.
But when you’re recruiting, you just have someone asking you, “Should I leave my job or not?” And you have to say yes or no. It’s such an incredibly high leverage decision that it’s become clear to me that when we do it wrong, it’s a huge problem and a huge cost to us. When we do it right, however, it improves everything we do. Basically, everything we’ve been able to do is because of the people that we have.
Hiring decisions really are the make or break decisions, and we often have to make them in a week and with limited information. Sometimes we get them right and sometimes we get them wrong, and I think there are a lot of ones we’ve gotten wrong that I don’t know about. And so a lot of the biggest mistakes have to be in that category.
Will: So then what career advice would you give the EAs in the audience who are currently figuring out what they ought to do? How do you think about that question in general?
Holden: In some sense, I have only one career to look at, although since we interact with a lot of grantees, we also do notice who’s having a big impact according to us and what their trajectory has been. And whenever I talk to people about this topic, I get the sense that effective altruists, especially early in their career, are often too impatient for impact in the short term.
A lot of the people I know who seem to be in the best position to do something big, they did something for five years, 10 years, 20 years. Sometimes the thing was random, but they picked up skills, they picked up connections, they picked up expertise. A lot of the big wins we’ve seen, both stuff we’ve funded and stuff we haven’t, it looks less like someone came out of college and had impact every year, and then it added up to something good. It looks more like someone might have just been working on themselves and their career for 20 years, and then they had one really good year and that’s everything. That one good year can make your career in terms of your impact.

I do think a lot of early career effective altruists might do better if they made the opposite mistake, which I also think would be a mistake, and just forgot about impact. If they just said, “What can I be good at? How can I grow? How can I become the person I want to be?” I think that probably wouldn’t be worse than the current status quo, and it might be better. I think the ideal is some kind of balance. I don’t know if I’m right or not, but it’s definitely advice that I give a lot.
Will: A couple of people are interested in the small question of just what your average number of hours per week is, then? I think you can say no comment if you don’t…
Holden: For focused hours, for the time I was doing it, it was like 40. Hours on the clock would be more than that. And then recently I’ve actually stopped counting them up because now I’m in meetings all the time. And one of the things that I’ve found is that my hours are way higher when I have a ton of meetings. If I’m sitting there trying to write a blog post, they are way lower. My hours worked don’t seem as worth tracking as they used to.
Will: Yeah. I mean, different sorts of hours can be a hundred times more valuable. Like, Frank Ramsey was one of the most important thinkers of the early 20th century, but died at 26, which is why no one knows about him. He just worked four hours a day. He made amazing breakthroughs in philosophy, decision theory, economics, and maths.
Holden: Yeah, that’s incredible.
Will: Another thing a couple of people were interested in was Open Phil’s attitudes toward political funding. Firstly, just whether you have a policy with respect to funding organizations that do political lobbying. And then secondly, in particular, if you’d fund particular candidates more than other candidates who may increase or decrease existential risks.
Holden: There’s no real reason in principle that Open Phil can’t make political donations. We treat political giving, or policy-oriented giving, the same as anything else. Which is we say hey, if we work on this, what are the odds that our funding contributes to something good happening in the world and what is the value of that good? And if you multiply out the probabilities, how good does that make the work look? How does it compare to our last dollar, and how does it compare to our other work?
If it looks good enough and there aren’t other concerns, we’ll do it. For most of our grants, we’re not actually calculating these figures explicitly, but we’re trying to do something that approximates them. For example, we work on causes that are important, neglected and tractable. And we tend to rate things on importance, neglectedness and tractability, because we think those things are predictive of and correlated with the total good accomplished per dollar.
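One common way to make that heuristic quantitative is the multiplicative importance-tractability-neglectedness decomposition used elsewhere in the EA community; the sketch below follows that convention, and is not necessarily the exact calculation Open Phil runs internally.

```python
def good_per_marginal_dollar(importance, tractability, neglectedness):
    """
    importance:    total good done if the entire problem were solved (in whatever units you value)
    tractability:  fraction of the problem solved by doubling the resources going into it
    neglectedness: fraction of a doubling that one extra dollar buys (roughly 1 / current funding)
    The product approximates the good accomplished by a marginal dollar.
    """
    return importance * tractability * neglectedness

# Comparing two hypothetical causes with made-up inputs:
print(good_per_marginal_dollar(importance=1e9, tractability=0.3, neglectedness=1e-9))  # large but crowded
print(good_per_marginal_dollar(importance=1e7, tractability=0.5, neglectedness=1e-6))  # smaller but neglected
```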
A lot of times you can’t really get a good estimate of how much good you’re doing per dollar, but you can use that idea to guide yourself and to motivate yourself. But I do think that in some ways, in politics, there’s an elevated risk of inaccurate estimation. You should assume an elevated risk that you’re just wrong. If it looks like giving to bed nets to prevent malaria helps people a certain amount, and giving to some very controversial issue that you’re sure you’re right about helps people the same amount, probably the bed nets are better, because in politics you’ve probably got some bias towards what you want to believe.
That said, I don’t think things are necessarily balanced. On political issues, people have a lot of reasons for holding the political views they do, other than what’s best for the world as a whole. And so when our goal is what’s best for the world as a whole, it’s not always that complicated to figure out which side is the side to go in on. It’s not always impossible to see. We can and do fund things that are aimed at changing policy. And in some cases, we’ve recommended contributions that are trying to change political outcomes.
Will: What are your plans for trying to influence other major foundations and philanthropists?
Holden: One of the cool things about Open Phil is that we are trying to do our work in a way where we find out how to help the most people, or do the most good, per dollar. Then we recommend our findings to our main funders, who are Cari and Dustin. But there’s no reason that recommendation would be different for another person.
A lot of times, if there were a Will MacAskill foundation, maybe it would be about what Will wants and its recommendations would not be interesting to other people. Actually, I know your foundation would not have that issue, but many people’s would. We want the lessons we learn and research we accomplish to be applicable for other philanthropists, rather than only our particular funders.
Down the line, I would love it to be the case that we see way more good things to do with money than Cari and Dustin have money. In that case, we’d go out and pitch to other people on it and try to raise far more than Cari and Dustin could give. That’s definitely where we’re trying to go. But it’s not what we’re focused on right now, because we’re still below the giving level we would need just to accomplish the giving goals of Cari and Dustin. We’re focused on meeting those goals for now, and we’re focused on improving our organization.
We want more of a track record, a stronger intellectual framework, and just to be better in general. We’d like something more solid to point to and say, here’s our reason that this is a good way to do philanthropy. We make early moves now: we talk to people who are philanthropists, or who will be philanthropists, but it’s not our big focus now. I think in a few years, it could be.
Will: You’ve emphasized criminal justice reform, existential risks with bio and AI, animal welfare, and global health. Are there other causes that you think you’ll branch out into over the next couple of years? And if so, what might those be? What are some potential candidates?
Holden: I mean, over the next few years, we’re trying to stay focused when it comes to our grant making. But as we’re doing that, we also want to find more clarity around, again, this question of long-termism versus near-termism and how much money is going to each. I think that will affect how we look for new cause areas in the future.
So I think we will look for new causes in the future, but it’s not our biggest focus in the immediate term. One thing we do currently, however, is a lot of the time we will pick a cause based on, partly, who we can hire for it. We’re very big believers in a people-centric approach to giving. We believe that a lot of times if you support someone who’s really good, it makes an enormous difference. Even if maybe the cause is 10% worse, it can be worth it if the person is way more promising.
We originally had a bunch of policy causes we were interested in hiring for. Criminal justice became a major cause for us, and the other ones didn’t, because we found someone who we were excited to lead our criminal justice reform work—that’s Chloe Cockburn—and we didn’t for some of the other ones. But I would say, one cause we may get more involved in in the future, and I hope we do, is macroeconomic stabilization policy. It’s not the world’s best known cause. The idea is that some of the most important decisions in the world are made at the Federal Reserve, which is trying to balance the risks of inflation and unemployment.
We’ve come to the view that there’s an institutional bias in a particular direction. We believe that there is more inflation aversion than is consistent with a “most good for everyone” attitude. We think some of that bias reflects the politics and pressures around the Federal Reserve. We’ve been interested in macroeconomic stabilization for a while. There’s this not very well-known institution, which is not very well understood and makes esoteric decisions. It’s not a big political issue, but it may have a bigger impact on the world economy and on the working class than basically anything else the government is doing. Maybe even bigger than anything else that anyone is doing.
So I would love to get more involved in that cause but I think to do it really well, we would need someone who’s all the way in. I mean, we would need someone who’s just obsessed with that issue, because it’s very complex.
Will: Do you feel like that’s justified on the near-termist human-centric view, or do you think it’s got potentially very long-run impacts too?
Holden: I think it’s kind of a twofer. We haven’t tried to do the calculations on both axes, but certainly, it seems like it could provide broad-based growth and lower unemployment. There are a lot of reasons to think those might lead to better societal outcomes. Outcomes such as better broad-based values, which are then reflected in the kinds of policies we enact and the kinds of people we elect.
I also think that if the economy is growing, and especially if that growth is benefiting everyone across the economy: if labor markets are tighter, and if workers have better bargaining power, better lives, better prospects in the future, then global catastrophic risk might decrease in some way. I haven’t totally worked out how the magnitude of that compares to everything else. But I think if we had the opportunity to go bigger on that cause, we would be thinking harder about it.
Will: Could something happen or could you learn something such that you’d say, okay, actually we’re just going to go all in on one cause? Like is that conceivable, or are you going to stay diversified no matter what?
Holden: I think it’s conceivable. We could go all in, but not soon. I think as long as we think of ourselves as an early stage organization, one of the big reasons to not go all in is option value. We get to learn about a lot of different kinds of giving, and we’re building capacity for a lot of things, so I want to have the option to change our minds later.
There’s a bunch of advantages to being spread out across different causes. Maybe in 30 years we’ll just be like, “Well, we’ve been at this forever and we’re not going to change our minds, so now we’re going all in.” And that’s something I can imagine, but I can’t really imagine it happening soon.
Will: What attitudes should a small individual donor have? A couple of thoughts, one is like, well, can they donate to Open Phil or Good Ventures? Another is, what’s the point in me donating, if there’s this huge foundation that I think is very good?
Holden: Individual donors cannot donate to Open Phil at the moment. We just haven’t set that process up. One possible alternative is the Effective Altruism Funds at CEA. Some of our staff manage donor-advised funds that are not Open Phil, that are outside Open Phil. For example, you can give to an animal welfare fund that is run by Lewis, who’s our farm animal welfare program officer, and he will look at what he wasn’t able to fund with Open Phil money that he wished he could have, and he’ll use your funds on it.
More broadly, I think donating can definitely do a lot of good, despite Open Phil’s existence. The capital we’re working with is a lot compared to any individual, but it’s not a lot compared to the size of the need in the world, and the amount of good that we can accomplish.
Certainly, given our priorities and the weight that we’re putting on long-termism versus near-termism, we are now pretty confident that we just do not have enough available to fund the GiveWell top charities to their capacity.
So, you can donate toward bed nets to prevent malaria, seasonal chemoprevention treatment also to prevent malaria, deworming, and other cool programs. You can donate there and you will get an amazing deal in terms of helping people for a little money. I know some people don’t feel satisfied with that; they think they can do better with long-termism or with an animal-centric view. I think if you’re animal-centric, we also currently have some limits on the budget for animal welfare and you can give to that Effective Altruism Fund or you can look at Animal Charity Evaluators and their recommendations, or just give to farm animal groups that you believe in.
Long-termism right now is the one where it’s the least obvious what someone should do. But I think there are still things to do, to reduce global catastrophic risks. Among the things going on, we’re currently very hesitant to be more than 50% of any organization’s budget, unless we feel incredibly well positioned to be their only point of accountability.
There are organizations where we say we understand exactly what’s going on, and we’re fine to be the only decision-maker on how the org is doing. But for other orgs, we just don’t want them to be that dependent on us. So there are orgs working on existential risk reduction, global catastrophic risk reduction and effective altruist community building. Most of them we just won’t go over 50%. And so they need, in some sense, to match our money with money from others.
I would say that generally, no matter what you’re into, there’s definitely something good you can do with your money. I still think donating is good.
Will: Are you interested in funding AI strategy, and have you funded it in the past?
Holden: AI strategy is a huge interest of ours. One way of putting it is when we think about potential risk from very advanced AI, we think of two problems that interact with each other and make each other potentially worse. It’s very hard to see the future, but these are things that are worth thinking about because of how big a deal they would be.
One of them is the value alignment problem, and that’s the idea that it may be very hard to build something that’s much smarter than anyone who built it, much better at thinking in every way and much better at optimizing, and still have that thing behave itself, so to speak. It may be very hard to get it to do what its creator intended and do it more intelligently than they would, but not too differently. That’s the value alignment problem. It’s mostly considered a technical problem.
A lot of the research that’s going on is on how we can build AI systems that work with vague goals. Humans are likely to give vague goals to AI. So how can they work with vague, poorly defined goals, and still figure them out in a way that’s in line with the original human intentions?
Also, how can we build AI systems that are robust? We want AI systems that, if they’re trained in one environment and then the environment changes, don’t totally break. They should realize that they’re dealing with something new, so that they don’t do totally crazy things. These are the kinds of ideas that comprise the alignment problem, which can mostly be addressed through technical research.
There’s also this other side we think about, the deployment problem, which is what if you have an extremely powerful general purpose technology that could lead to a really large concentration of power? Then we ask: who wields that power? I mean, is it the government, is it a company, is it a person? Who has the moral right to launch such a thing and to say hey, I have this very powerful thing and I’m going to use it to change the world in some way?
Who has the right to do that, and what kind of outcomes can we expect based on if different groups do it? One of the things that we’re worried about is that you might have a world where, let’s say, two different countries are both trying to be the first to develop a very powerful AI. Because if they can, they believe it’ll help their power, their standing in the world. And because they’re in this race and because they’re competitive with each other, they’re cutting corners and they’re launching as fast as they can. Under this sort of pressure, they might not be very careful about solving the alignment problem. And so they could release a carelessly designed AI, which ends up unaligned. Meaning, the AI could end up behaving in crazy ways and having bugs. And something that’s very intelligent and very powerful and has bugs could be very, very bad.
So AI strategy, in my view, is working on the deployment problem: reducing the odds that there’s going to be this arms race or whatever, and increasing the odds that there’s instead a deliberate, wise decision about what powerful AI is going to be used for and who’s going to make that decision. There are a lot of really interesting questions there. We’re super interested in it.
We’ve definitely made grants in the AI strategy area: we’ve supported The Future of Humanity Institute to do work on it. We’re currently investigating a couple of other grants there, too. We’ve put out a job posting for someone who wants to think about AI strategy all day, and doesn’t have another place to go. If we find such a person who’s good enough at it, we could fund them ourselves.
And then also, we support OpenAI to help both with the control problem and the deployment problem. We’re trying to encourage OpenAI as an organization to do a lot of technical safety research, but also to be thinking hey, we’re a company that might lead AI. What are we going to do if we’re there? Who are we going to loop in? We’re going to be in conversations about how this thing should be used. What’s our position going to be on who should use it?
We’ve really encouraged them to build out a team of people who can be thinking about that question all the time, and they are working on that. And so, this is a major interest of ours.
Will: You mentioned that one of the key ways in which it could be a worry is if there’s an arms race. Perhaps explicitly, if it’s wartime or something. Would you be interested in, and how do you think about, trying to make grants to reduce the chance of some sort of great power war?
Holden: I think it would be really good to have lower odds of a great power war, just flat out. Maybe the possibility of advanced technologies makes it even more valuable to reduce the odds of a great power war, and that’s an area that we have not spent much time on. I know there is a reasonable amount of foundation interest in promoting peace. And so it’s not immediately neglected at first glance.
That said, a lot of things that look like they’re getting attention at first glance, you refine your model a little bit, you decide what the most promising angle on it is, and you might find that there’s something neglected. So preventing great power war might be an example of a really awesome cause that we haven’t had the capacity to look into. And if someone else saw something great to do, they should definitely not wait for us.
Will: Another cause that some people were asking about was suffering of animals in the wild, whether you might be interested in making grants to improve that issue.
Holden: Sure. Off the bat, you have to ask if there’s anything you could do to improve wild animal welfare that wouldn’t cause a lot of other problems. We could potentially cause a lot of problems; there might be a kind of hubris in intervening in very complex ecosystems that we didn’t create and that we don’t properly understand.
Another problem with working on wild animal welfare is we haven’t seen a lot of shovel-ready actions to take. Something I will say is there’s probably a lot going on where human activity or some other factor is causing wild animals to suffer. There are probably animals in the wild, a really large number of them, that could be a lot better off than they are. Maybe they could just have better lives if not for certain things about the ecosystem they’re in that humans may or may not have caused.
And so I see potential there. But we’ve got the issue I mentioned, where what we can do about it isn’t obvious. It’s not obvious how to write grants to improve the welfare of animals in the wild, without having a bunch of problems come along with them.
It hasn’t been that clear to us, but we are continuing to talk about it. Looking into it in the background is something we may do in the future.
Will: In the 2017 year review, you said that you’ve actually already been able to see some successes in criminal justice reform and animal welfare. So I’d be interested if you’d just talk a bit more about that, and then: what lessons do you feel you’ve learned, and do they transfer to the areas where success is harder to measure?
Holden: So, we’re early. We’ve only been doing large scale grantmaking for a couple of years. A lot of our grantmaking is on these long timeframes, so it’s a little early to be asking, do we see impact? But I would say we’re seeing early hints of impact. The clearest case is farm animal welfare. We came in when there were a couple of big victories that had been achieved, like a McDonald’s pledge that all their eggs will be cage-free. So there was definitely already momentum.
So we came in, and we poured gasoline on it. We went to all the groups that had been getting these wins and we went to some of the groups that hadn’t and we said, “Would you like to do more? Would you like to grow your staff? Would you like to go after these groups?” And within a year, basically every major grocer and every major fast food company in America had made a cage-free pledge, approximately.
And so hopefully, if those pledges are adhered to, which is a question and something we work on, 10 years from now you won’t even be able to get eggs from caged chickens in the U.S.; it’ll be very impractical to do so. That’d be nice. I mean, I’m not happy with how the cage-free chickens are treated either, but it’s a lot better. It’s a big step up, and I think it’s also a big morale win for the movement and creates some momentum, because from there, what we’ve been doing now is starting to build the international corporate campaigns. Some of those already existed and some of them didn’t, but we have been funding work all over the world. Next time, we would love to be part of those early wins that got the ball rolling, rather than coming in late and trying to make things go faster.

We’ve seen wins on broiler chickens, which is the next step in the U.S. after layer hens. We’ve seen wins overseas, so that’s been exciting. And these corporate campaigns have been one of the quicker things we’ve funded. Because I think a lot of times with corporations, it wouldn’t actually cost them that much to treat their chickens better. Somebody just has to complain loudly about it and then it happens, seems to be how it goes.
In criminal justice reform, we picked that cause partly because we saw the opportunity to potentially make a difference and get some wins on a lot of other policy causes. One of the early things we noticed was a couple of bipartisan bills in Illinois that we think are going to have quite a large impact on incarceration there and that we believe that our grantees, with our marginal funding, were crucial for.
We’ve also seen the beginning of a mini-wave of prosecutors getting elected—head prosecutors—who have different values from the normal head prosecutors. So instead of being tough on crime, they frame things in different terms. For example, Larry Krasner in Philadelphia has put out a memo to his whole office that says, “When you propose someone’s sentence, you are going to have to estimate the cost of that to the state. It’s like $45,000 a year to put someone in prison, and you’re going to have to explain why it is worth that money to us to put this person in prison. And if you want to start a plea bargain and you want to start it lower than the minimum sentence, you’re going to have to get my permission. I’m the head prosecutor.”
It’s a different attitude. These prosecutors are saying, “My goal is not to lock up as many people as possible, my goal is to balance costs and benefits and do right by my community.” And I think there have been a bunch of orgs and a bunch of funders involved in that. I don’t think any of these are things I would call Open Philanthropy productions. They’re things that we think we helped with, we sped along, we got some share of the work being done.
We’re excited about that. Those are two of the causes that have nearer-term ambitions. I can’t say I’ve seen wins on biosecurity yet, I mean other than the fact that there’s been no pandemic that killed everyone. But I can’t give us any credit for that.
I also think about our small wins now as lessons for the future. I think we’ve seen what’s been working and what’s not in those causes where there’s more action and more things happening. One of the things that we are seeing is that our basic setup of giving program officers high autonomy has been working pretty well. A lot of these grants are not the ones I would have come up with. And in some cases, they weren’t even ones I was very excited about beforehand.

We have systems for trying to give our program officers the ability to sometimes make grants that we don’t fully agree with, and try to reduce veto points. We try to reduce the need for total consensus, and have people at the organization try and make bets that may not be universally agreed to, or even agreed to by me, or Cari and Dustin. And looking at some of the grants that have been effective, I think that’s been a good move. So I will continue to do things that don’t seem right to me, and I’m very excited about that.
Holden Karnofsky: Fireside chat (2018)
Link post
In this fireside chat from EA Global 2018: San Francisco, Will MacAskill asks Holden Karnofsky of the Open Philanthropy Project about a wide range of topics. The conversation covers Open Phil’s strategy and current focuses, cause prioritization, Holden’s work habits and schedule, and early wins from Open Phil’s first couple years of grantmaking.
A transcript of their discussion is below, which we have lightly edited for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.
The Talk
Will: To warm us up, give us an overview of what Open Phil has been up to in the last 12 months, and what your plans are for the following year.
Holden: Open Philanthropy is a grant maker; that’s our main activity. Right now, I would say we’re in an intermediate stage of our development. We’ve been giving away a bit over 100 million dollars a year for the last couple of years. We do want to grow that number at some point, but we believe that right now we should focus on strengthening the organization, our intellectual frameworks, our approach, and our operations. We want to get used to operating at this scale—which is big, for a grant maker—before going bigger.
So, it’s a several year transition period. Last year, in addition to grant making, we did a lot of work to strengthen our operations, and to get more clarity on cause prioritization. How much money is eventually going to go into each of the different causes that we work in? How much should go into criminal justice reform; how much should go to support GiveWell’s top charities, which we try to weigh against the other things; how much should to AI risk, biosecurity, etc.?
That’s been the focus of the last year or so. Going forward, it’s kind of the same. We’re very focused on hiring right now. We just had a director of operations start, and we’ve been hiring pretty fast on the operations side. We’re trying to build a robust organization that is ready to make a ton of grants, do it efficiently and do it with a good experience for the grantees.
And then, the other hiring we’re doing is a push for research analysts. These are going to be the people who help us choose between causes, help us answer esoteric questions that most foundations don’t analyze. Like how much money should go to each cause, and what causes we should work on. We expect our research analysts to eventually become core contributors to the organization. So, a major endeavor this year has been gearing up to make our best guess at who we should be hiring for that. It’s really a capacity-building, hiring time and also a time when we’re really intense about figuring out the question of how much money should go to each cause.
Will: Fantastic. One thing you mentioned was then, something you’ve been working on is how do you divide the money across these very different cause areas. What progress do you feel like you’ve made on that over the last year?
Holden: First, I want to give a little background on the question, because it’s kind of a weird one and it’s one that often doesn’t come up in a philanthropic setting. We work in a bunch of very different causes. So, like I said, we work on criminal justice reform, farm animal welfare, and on global health, to name a few. We have to decide how much money goes into each cause. So one way that you might try to decide this is you might say well, what are we trying to do and how much of it are we doing for every dollar that we spend?
So you might say that we’re trying to prevent premature deaths, and ask: “How many premature deaths are we preventing for every dollar we spend?” Or you might try to come up with a more inclusive, universal metric of good accomplished. There are different ways to do that. One way is to value different things according to one scale. And so you could use a framework similar to the quality framework where you say, if you avert a blindness, that’s half as good as saving a life, for example.
And so, you could put every intervention on one scale and then say, how many units of good, so to speak, are we accomplishing for each dollar we spend? And then you would just divide up the money so that you get the maximum overall. This might look like putting money into your best cause, and at a certain point, when it’s no longer your best cause because you’re reaching diminishing returns, you put money into another cause, and so on.
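To make that single-scale approach concrete, here is a minimal sketch of that kind of allocation. The causes, cost-effectiveness numbers, and diminishing-returns curves below are all invented for illustration; none of them are Open Phil figures.

```python
import math

# Hypothetical causes: initial "units of good" per $1M spent, and the scale
# (in $M) over which returns diminish. All numbers are invented.
CAUSES = {
    "cause_a": {"initial_good": 100.0, "scale_millions": 50.0},
    "cause_b": {"initial_good": 60.0,  "scale_millions": 200.0},
    "cause_c": {"initial_good": 30.0,  "scale_millions": 500.0},
}

def marginal_good(cause: dict, spent_millions: float) -> float:
    """Units of good bought by the next $1M, with exponentially diminishing returns."""
    return cause["initial_good"] * math.exp(-spent_millions / cause["scale_millions"])

def allocate(budget_millions: int) -> dict:
    """Give each successive $1M to whichever cause currently has the best marginal return."""
    spent = {name: 0.0 for name in CAUSES}
    for _ in range(budget_millions):
        best = max(CAUSES, key=lambda name: marginal_good(CAUSES[name], spent[name]))
        spent[best] += 1.0
    return spent

print(allocate(budget_millions=300))
```

The problem Holden turns to next is that the output of something like this is extremely sensitive to a couple of uncertain inputs.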
There’s what I consider a problem with this approach, although not everyone would consider it a problem. What I consider a problem is that we’ve run into two mind-bending fundamental questions that seem very hard to get away from. These questions are very hard to answer, and they have a really huge impact on how we give. One of them is: How do you value animals versus humans? For example, how do you value chickens versus humans?
On one hand, let’s say, to simplify, we’re deciding between GiveWell top charities, which try to help humans with global health interventions (bed nets, cash transfers, things like that), versus animal advocacy groups that try to push for better treatment of animals on factory farms. Without going too much into what the numbers look like, if you decide that you value a chicken’s experience 1% as much as a human’s, you get the result that you should only work on chickens. All the money should go into farm animal welfare.
On the other hand, let’s say that you go from 1% to 0.001%, or 0%. In that case, you should just put all the money toward helping humans. There’s one extremely important parameter, and we don’t know what its value should be. When you move to one number, it says you should put all the money over here. And when you move to another number, it says you should put all the money over there. The even trickier version of this question is when we talk about preventing global catastrophic risks or existential risks.
When we talk about our work on things like AI risk, biosecurity, or climate change, where the goal is not to help some specific set of persons or chickens, but rather to hopefully do something that will be positive for all future generations, the question is: How many future generations are there? If you prevent some kind of existential risk, did you just do the equivalent of preventing 7 billion premature deaths—which is about the population of the world—or did you just do the equivalent of preventing a trillion, trillion, trillion, trillion, trillion untimely deaths? It depends how many people there are in the future. It’s very hard to pick the right number for that. And whichever number you do pick, it ends up taking over the whole model.
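Here is a tiny illustration of that knife-edge behavior, using the moral-weight values mentioned above and placeholder cost figures (they are not Open Phil or GiveWell estimates).

```python
# Illustrative only: one uncertain parameter flips the whole allocation.
# Cost figures are placeholders, not Open Phil or GiveWell estimates.
COST_PER_HUMAN_LIFE_EQUIVALENT = 4_000  # $ via a global health intervention (assumed)
COST_PER_CHICKEN_HELPED = 1             # $ per chicken's life substantially improved (assumed)

for chicken_weight in (0.01, 0.00001, 0.0):  # value of a chicken relative to a human
    human_good_per_dollar = 1 / COST_PER_HUMAN_LIFE_EQUIVALENT
    chicken_good_per_dollar = chicken_weight / COST_PER_CHICKEN_HELPED
    winner = "farm animal welfare" if chicken_good_per_dollar > human_good_per_dollar else "global health"
    print(f"chicken weight {chicken_weight}: all marginal money goes to {winner}")
```

The same structure applies to the future-generations parameter: credit 7 billion lives versus a trillion trillion lives per existential risk averted, and the long-termist work goes from negligible to dominating the whole model.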
Based on the uncertainty around these two questions, we have determined that there are a bunch of reasons we don’t want to go all in on one cause. That’s something we’ve written up on our blog. Among other problems, if you go all in on one cause, you get all the idiosyncrasies of that cause. If you change your mind later, you might miss a lot of opportunities to do good. It could also become very hard to be broadly appealing to donors, if your whole work is premised on one weird assumption—or one weird number—that could have been something different.
We don’t want to be all in on one cause. So at the end of the last year, we determined that we want to have various buckets of capital, each of which is based on different assumptions. And so we might have one bucket of capital that says, preventing an existential risk is worth a trillion, trillion, trillion premature deaths averted; we value every grant by how it affects the long-run trajectory of the world. Another bucket might say, we’re just going to look for things that affect people alive today, or that have impact we can see in our lifetimes. And then you have another bucket that takes chickens very seriously and another one that doesn’t. Then you have to determine the relative sizes of each bucket.
So, we have a multiple-stage process where we first say there are x dollars: how much is going to be in what we call the animal-inclusive bucket versus the human-centric bucket? How much is going to be in the long-termist bucket versus the near-termist bucket? And then within each individual bucket, we use our target metrics more normally and decide how much to give to, for example, AI risk, biosecurity, and climate change.
A benefit of this approach is that we’re attacking the problems at two parallel levels. One of them is the abstract: we have x dollars, so how much do we want to put in the animal-inclusive bucket versus the human-centric bucket? We could start with 50-50 as a prior and then say, well, we actually take this one bucket more seriously, so we’ll put more in there. We could make other adjustments, too.
Apart from the abstract layer, we’ve also started moving toward addressing cause prioritization tangibly, where we create a table that says, for each set of dollars we can spend, we’ll get this many chickens helped, this many humans helped, and this many points of reduction in existential risk. And so, under different assumptions, a given intervention might look excellent by one metric, okay by another, and really bad by a third. So just by looking at the table, we can understand the trade-offs we’re making.
And so a lot of the work we’re doing now, and a lot of the work that I think new hires will do, is filling out that table. A lot of it is really guesswork, but if we can understand roughly what we’re buying with different approaches, then hopefully we can make a decision from a better standpoint of reflective equilibrium.
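Here is a toy sketch of the kind of table Holden describes. The grants, outcome numbers, and worldview parameters are all hypothetical placeholders; the point is the structure of the comparison, not the values.

```python
# A toy version of the "what are we buying per dollar" table described above.
# Grants and outcome numbers are hypothetical placeholders.

GRANTS = [
    # (name, cost in $M, humans helped, chickens helped, x-risk reduced in basis points)
    ("global health package",      100, 30_000,          0, 0.0),
    ("corporate animal campaigns",  10,      0, 50_000_000, 0.0),
    ("biosecurity platform tech",  200,      0,          0, 0.5),
]

WORLDVIEWS = {
    # name: (chicken moral weight, human-life-equivalents credited per basis point of x-risk)
    "near-termist, human-centric":    (0.0,  0.0),
    "near-termist, animal-inclusive": (0.01, 0.0),
    "long-termist":                   (0.0,  1e9),
}

def score(humans, chickens, xrisk_bp, chicken_weight, lives_per_bp):
    """Human-life-equivalents credited under one set of worldview assumptions."""
    return humans + chicken_weight * chickens + lives_per_bp * xrisk_bp

for name, cost_millions, humans, chickens, xrisk_bp in GRANTS:
    cells = []
    for view, (weight, lives_per_bp) in WORLDVIEWS.items():
        per_dollar = score(humans, chickens, xrisk_bp, weight, lives_per_bp) / (cost_millions * 1e6)
        cells.append(f"{view}: {per_dollar:.2e}")
    print(f"{name:28s} | " + " | ".join(cells))
```

Reading across a row shows the trade-off Holden mentions: the same grant can look excellent by one metric, okay by another, and bad by a third.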
Will: As part of that, you’ve got to have this answer to the question of your last dollar of funding. How good is that last dollar? And how do you think about that, given this framework?
Holden: The last dollar question is very central to our work, and closely related to our dilemma as a grantmaker. When someone sends in a grant proposal, we have to decide whether to make it or not. In some sense, that question reduces to the question of whether it’s better to make the grant or save the money. And in turn, that question reduces to whether the grant will be higher impact than our last dollar spent.
For a long time, we had this last dollar concept that was based on GiveDirectly. GiveDirectly is a charity that gives out cash. For every $100 you give, they try to get $90 to someone in a globally low income household. If a grant’s impact is lower than giving to GiveDirectly, then we probably shouldn’t make it, since GiveDirectly has essentially infinite demand for money, and we could just give the money to them instead. If, however, the grant looks higher-impact than GiveDirectly, then maybe we should consider funding it.
Recently, we’ve refined our thinking, and refined the question of our last dollar. GiveDirectly is a near-termist, human-centric kind of charity, so it’s a good point of comparison for that bucket. For other buckets, though, our last dollar could look very different. If instead of counting people we’re helping, we’re counting points of reduction in global catastrophic risks, then what does our last dollar look like? It probably doesn’t look like GiveDirectly, because there are probably better ways to accomplish that long-termist goal.
We spent a bunch of time over the last couple of months trying to answer that question. What is the GiveDirectly of long-termism? What is the thing we can just spend unlimited money on that does as well as we can for increasing the odds of a bright future for the world? We tried a whole bunch of different things; we looked into different possibilities. Right now, we’ve landed on this idea of platform technology for the rapid development of medical countermeasures.
The idea is you would invest heavily in research and development, and you would hope that what you get out of a massive investment is the ability to more quickly develop a vaccine or treatment in case there’s ever a disastrous pandemic. Then you can estimate the risk of a pandemic over different timeframes, how much this would help, and how much you’re going to speed it up. And our estimate ended up being that we could probably spend over 10 billion dollars, in present value terms, on this kind of work.
We also estimated—this is really wild and it’s just a guess, since we’re trying to start with broad contours of things and then get more refined—that it comes out to something like the low, low price of 245 trillion dollars per extinction prevented. And 245 trillion dollars per extinction event prevented actually comes out to a pretty good deal.
Will: That’s like total wealth of the world so-
Holden: Yeah, exactly. But there’s all this future wealth too, so it’s actually a good deal. And the funny thing is that if you just count people who are alive today and you look at the cost per death averted, probabilistically and in expected terms, it’s actually pretty good. It’s kind of in the same ballpark as GiveWell’s top charities. Now, this is a very questionable analysis, because GiveWell’s cost-effectiveness estimates are quite rigorous and well-researched, and this is emphatically not.
So, you don’t want to go too far with that, and you don’t want to say these numbers are the same. The preliminary look is that we think we can probably do better than this with most of our spending. But it’s interesting to see that that last dollar looks like not a bad deal, and that we can compare it to any other grants we’re making that have the long-term future of humanity as their target. We can say, are they better or worse than this medical countermeasure platform tech? Because we can spend as much money as we want on that.
We don’t have a last dollar estimate yet for animal welfare. We do have a view that the current work is extremely cost effective; it’s under a dollar per animal’s life significantly improved in a way that’s similar to a death averted. But it’s more of a reducing suffering thing a lot of the time. And we do think we could expand the budget a lot before we saw diminishing returns there, so that’s a number we’re working with until we get a better last dollar.
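As a back-of-the-envelope restatement of the last-dollar arithmetic above: the $245 trillion figure comes from the talk, the present world population is an added assumption, and the future-lives values are just the contested parameter swept over a range.

```python
# Rough restatement of the "last dollar" arithmetic above. The $245T figure is
# the one quoted in the talk; the present world population is an assumption
# used to translate it into a near-termist cost per expected death averted.
COST_PER_EXTINCTION_PREVENTED = 245e12  # dollars (very rough, from the talk)
PRESENT_WORLD_POPULATION = 7.6e9        # assumption, circa 2018

per_present_death = COST_PER_EXTINCTION_PREVENTED / PRESENT_WORLD_POPULATION
print(f"~${per_present_death:,.0f} per present-day premature death averted, in expectation")

# A long-termist bucket also credits future lives; how many is the contested parameter.
for future_lives in (1e11, 1e15, 1e30):
    per_life = COST_PER_EXTINCTION_PREVENTED / future_lives
    print(f"assuming {future_lives:.0e} future lives: ~${per_life:.2e} per life-equivalent")
```

On these assumed inputs the near-termist number comes out around $32,000 per expected present-day death averted; as the transcript notes, any comparison to GiveWell’s much more rigorous estimates should be taken loosely.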
Will: Are there some cause areas that you’re more personally excited about than others?
Holden: I get differently excited about different causes on different days and I’m super excited about everything we’re doing because it was all picked very carefully to be our best bet for doing the most good. As soon as a given course of action looks bad, we tend to close it down.
I generally am excited about everything. There are different pros and cons to the different work. The farm animal work is really exciting because we’re seeing tangible wins, and I think the criminal justice work also looks that way. It’s just really great to be starting to see that we made a grant, something happened, and it led to someone having a better life. Often, that does feel like the most exciting part.
On the other hand, over the last couple of years, I’ve gotten more and more excited about the long-termist work for a different reason, which is that I’ve started to really believe we could be living at a uniquely high leverage moment in history. To set the stage, I think people tend to walk around thinking well, the world economy grows at around 2% a year. In real terms, maybe 1 to 3%, and that’s how things are. That’s how things have been for a long time, and how they will be for a long time.
But really, that’s a weird way of thinking about things, because that rate of growth is, historically, highly anomalous. It’s been only 200 or 300 years that we’ve had that level of growth, which is only about 10 generations. It’s a tiny, tiny fraction of human history. Before that we had much slower growth. And then when we look to the future, for many, many reasons that I’m not going to get into now, we believe there are advanced technologies, such as highly capable AI, where you could either see that growth rate skyrocket and then maybe flatten out as we get radically better at doing science and developing new technologies, or you could see a global catastrophe.
A less happy way that our time is high leverage is that it could be possible, in the near future, to wipe ourselves out completely with nuclear weapons. There may be other ways of doing it too, like climate change and new kinds of pandemics with synthetic biology. 100 years ago, there was basically almost no reasonably likely way for humanity to go extinct. Now there are a few ways it could happen, which we want to make less likely. So there’s a lot of things, both good and bad, that look special about this time that we live in.
It’s kind of the highest upside, the highest downside, maybe that’s ever been seen before. One way of thinking about it is that humanity has been around for hundreds of thousands or maybe millions of years, depending how you count it. And humanity could still be around for billions of years more, but we might be in the middle of the most important hundred years.
And when I think about it that way, then I think boy, someone should really be keeping their eye on this. To a large degree, however, people aren’t. Even with climate change, which is better known than a lot of other risks, sufficient action is not being taken. Governments are not making it a priority to the extent that I think they reasonably should. So as people who have the freedom to spend money how we want and the ability to think about these things and act on them without having to worry about a profit motive or accountability to near-term outcomes, we’re in a really special position to do something and I think that it’s exciting. It’s also scary.
Will: Over the last year, what are some particular grants that you think people in the audience might not know as much about, that you are particularly excited about, that you think are going to be particularly good or important?
Holden: There are many grants I’m excited about. I’m guessing people know about things like our OpenAI grant, and our many grants to support animal welfare and corporate campaigning in the U.S. and abroad in India, Europe, Latin America, etc. I’ll skip over those, and name some others people might not have heard of.
I’m very excited about the AI Fellows Program. We recently announced a set of early career AI scientists who we are giving fellowships to. It was a very competitive process. These scientists are really, really exceptional. They’re some of the best AI researchers, flat out, for their career stage. They’re also interested in AI safety, not just in making AI more powerful but in making it something that’s going to have better outcomes, behave better, have fewer bugs, etc. They are interested in working on the alignment problem, and things like that.
We found a great combination of really excellent technical abilities, seriousness, and an open-mindedness to some of the less mainstream parts of AI that we think are the most important, like the alignment problem. Our goal here is to help these fellows learn from the AI safety community, get up to speed, and become some of the best AI safety researchers out there. We also hope to make it common knowledge in AI that working on AI safety is a good career move. It’s exciting, as a foundation, to be engaging in field building and making work on AI safety a good career move.
Another exciting development is that our science team is working on a universal antiviral drug. We believe that a lot of viruses—maybe all of them—rely on particular proteins in your body to replicate themselves and to have their effects. So, if you can inhibit those proteins (and we already have some drugs that do inhibit them, which we use for cancer treatment), you might have a drug that, while you wouldn’t want to take it every time you got a virus, might work on every virus. That could make it a really excellent thing to have if some unexpectedly terrible pandemic comes, and an excellent thing to stockpile. So, that’s super cool.
Also, I’m often excited about grants we make that are all about speed. There are a couple of science grants where there’s a technology that looks great, everyone’s excited about it, and there’s not much to argue about. Gene drives to potentially eradicate malaria are one example; an experimental treatment for sepsis that could save a lot of lives is another. For these good, exciting things, we speed their development up.
I feel proud as an organization and of our operations team, that we’re sometimes able to make a grant where we’re like, yeah, everyone knows this is good, but we’re the fastest. We can get the money out right away, and we can make these good things happen sooner, and happening sooner saves enough lives that it’s a good deal.
So, those are some of the things I’m particularly excited about.
Will: What do you wish was happening in the EA community that currently isn’t? That might be projects, organizations, career paths, lines of intellectual inquiry and so on.
Holden: I think the EA community is an exciting thing and a great source of interesting ideas. Members of the EA community have affected our thinking, and our cause prioritization, a lot. But I think right now, it’s a community that is very heavy on people who are philosopher types, economist types and computer programmer types, all of which somewhat describe me.
We have a lot of those types of people, and I think they do a lot of good. But I would like to see a broader variety of different types of people in the community. Because I think there are people who are more intuitive thinkers who wouldn’t want to sit around at a party debating whether the parameter for the moral weight of chickens is 1% or 0.01%, and what anthropics might have to do with that.
They might not be interested in that, but they still have serious potential as effective altruists because they’re able to say, “I would like to help people and I would like to do it as well as I can.” And so they might be able to say things like, “Boy there’s an issue in the world. Maybe it’s animal welfare, maybe it’s AI, or maybe it’s something else. It’s so important and no one’s working on it. I think there’s something I can do about it, so I want to work on that.”
You don’t need to engage in hours of philosophy to get there. And I think a lot of people who are more intuitive thinkers, who may be less interested in philosophical underpinnings, have a lot to offer us as a community, and can accomplish a lot of things that the philosopher/programmer/economist type is not always as strong in. More diversity is something I would love to see. I would like to see the Effective Altruism Community find a way to bring in a broader variety of people.
Will: You’re managing a lot of money, and there’s a lot at stake. How do you keep yourself sane? Do you overwork? How do you balance working with time off?
Holden: I think we’re working on a lot of exciting stuff, and I certainly know people who, when they’re working on exciting stuff, work all the time and tend to burn out. I’ve been pretty intense about not doing that. I co-founded GiveWell 11 years ago, and I’ve been working continuously since then, first on GiveWell and now on Open Philanthropy. Maybe at the beginning it felt like I was sprinting, but right now it really, really feels like a marathon. I try to treat it that way.
I’m very attentive to if I start feeling a lack of motivation. When that happens, I just take a break. I do a lot of really stupid things with my time that put me in a better mood. I don’t feel bad about it and-
Will: Name some stupid things, go on.
Holden: Like going to weird conferences where I don’t know anyone or have anything to contribute and talking there. Playing video games. My wife and I like to have stuffed animals with very bizarre personalities and act them out. So now you know some things.
And I don’t feel bad about it. Something that I’ve done, actually for a long time now, I think starting two years into GiveWell, is that I tracked my focused hours, my meaningful hours. And at a certain point, I just took an average over like the last six months and I was like, that’s my target, that’s the average. When I hit the average, I’m done for the week, unless it’s a special week, and I really need to work more that week.
This approach is sustainable, and I think consistently putting in more than the average wouldn’t be. Also, it puts me in a mindset where I know how many hours I have, and I have to make the most of them. I think there are people who say, “If I can just work hard enough, I’ll get everything done.” But I don’t think that works for me, and I think it is often a bad idea to think that way in general. One way that I think about it is that the most you can increase your output from working harder is often around 25%.
If you want to increase your output by 5x, or by 10x, you can’t just work harder. You need to get better at skipping things, deciding what not to do, deciding what shortcuts to take. And also you need to get good at hiring, managing, deciding who should be doing what, deciding it shouldn’t be you doing a lot of things. I feel like my productivity has gone up by a very large amount and there is a lot of variance. Like when I make bad decisions, I might only get a tenth as much done in a month. But working more hours doesn’t fix that. So even though I think we’re doing a lot of exciting stuff, I do take it easy in some sense.
Will: Over the last decade, what do you think are some of the biggest mistakes you’ve made, or things you wish you’d done differently?
Holden: I’ve got lots of fun mistakes. But you said last decade, which rules out a lot of the fun stuff.
I will say a couple of things. One thing is that I think early on, and still sometimes, we’ve communicated in a careless way. Especially early on, our view was that more attention was better, that we really needed to get people paying attention to us. And the problem is that a lot of the things we said, they’ll never go away: the internet never forgets.
And for people who may have been turned off by our early communications, we’ll never get a second chance to make that first impression. When I look back at it, I think it really wasn’t that important to get so much attention. Over the long run, if we had just been quiet, said something when we really had something to say, and said it carefully, maybe things would have gone better. To the extent that we’ve succeeded, it’s really been by having research that we can explain and that people resonate with. I don’t know that we needed to seek attention much beyond that.
Another mistake that I look back on, I think I was too slow to get excited about the Effective Altruism Community. When we were starting off, I knew that we were working on something that most people didn’t seem to care about. I knew that we were asking the question, how do we do the most good that we can with a certain amount of resources? And I knew that there were other people asking a similar question, but we were speaking very different languages from each other. And so it was hard for me to really see that those people were asking the same question that I was.
I think we were a bit dismissive. I wouldn’t say totally dismissive, since we talked to the proto-effective-altruists before it was called effective altruism. But it didn’t occur to me that if I was missing important insights about my work, the most likely way to find those insights was to find people who have the same goal as me, but had different perspectives.
And the fact that the Effective Altruism Community speaks a different language from me and a lot of their stuff sounds loopy, I mean, that’s just good. That means that there’s going to be at least some degree of different perspective between us. I think we’ve profited a lot from engaging with the EA community. We’ve learned a lot and we could have done it earlier, so I think waiting was a mistake.
And then the final thing, here’s a class of mistakes that I won’t go into detail on. In general, I feel like the decisions these days that I’m most nervous about are hiring and recruiting, because most of the things we do at Open Phil, we’ve figured out how to do them in an incremental way. We do something, we see how it goes, and no single step is ever that disastrous or that epic.
But when you’re recruiting, you just have someone asking you, “Should I leave my job or not?” And you have to say yes or no. It’s such an incredibly high leverage decision that it’s become clear to me that when we do it wrong, it’s a huge problem and a huge cost to us. When we do it right, however, it improves everything we do. Basically, everything we’ve been able to do is because of the people that we have.
Hiring decisions really are the make or break decisions, and we often have to make them in a week and with limited information. Sometimes we get them right, and sometimes we get them wrong; I think there are a lot we’ve gotten wrong that I don’t know about. And so a lot of the biggest mistakes have to be in that category.
Will: So then what career advice would you give the EAs in the audience who are currently figuring out what they ought to do? How do you think about that question in general?
Holden: In some sense, I have only one career to look at, although since we interact with a lot of grantees, we also do notice who’s having a big impact according to us and what their trajectory has been. And whenever I talk to people about this topic, I get the sense that effective altruists, especially early in their careers, are often too impatient for impact in the short term.
A lot of the people I know who seem to be in the best position to do something big, they did something for five years, 10 years, 20 years. Sometimes the thing was random, but they picked up skills, they picked up connections, they picked up expertise. A lot of the big wins we’ve seen, both stuff we’ve funded and stuff we haven’t, it looks less like someone came out of college and had impact every year, and then it added up to something good. It looks more like someone might have just been working on themselves and their career for 20 years, and then they had one really good year and that’s everything. That one good year can make your career in terms of your impact.

I do think a lot of early career effective altruists might do better if they made the opposite mistake, which I think would also be a mistake, and just forgot about impact. If they just said, “What can I be good at? How can I grow? How can I become the person I want to be?” I think that probably wouldn’t be worse than the current status quo, and it might be better. I think the ideal is some kind of balance. I don’t know if I’m right or not, but it’s definitely advice that I give a lot.
Will: A couple of people are interested in the small question of just what your average number of hours per week is, then. I think you can say no comment if you don’t…
Holden: For focused hours, for the time I was doing it, it was like 40. Hours on the clock would be more than that. And then recently I’ve actually stopped counting them up because now I’m in meetings all the time. And one of the things that I’ve found is that my hours are way higher when I have a ton of meetings. If I’m sitting there trying to write a blog post, they are way lower. My hours worked don’t seem as worth tracking as they used to.
Will: Yeah. I mean, different sorts of hours can be a hundred times more valuable. Like, Frank Ramsey was one of the most important thinkers of the early 20th century, but died at 26, which is why no one knows about him. He just worked four hours a day. He made amazing breakthroughs in philosophy, decision theory, economics, and maths.
Holden: Yeah, that’s incredible.
Will: Another thing a couple of people were interested in was Open Phil’s attitudes toward political funding. Firstly, just whether you have a policy with respect to funding organizations that do political lobbying. And then secondly, in particular, if you’d fund particular candidates more than other candidates who may increase or decrease existential risks.
Holden: There’s no real reason in principle that Open Phil can’t make political donations. We treat political giving, or policy-oriented giving, the same as anything else. Which is, we say, hey, if we work on this, what are the odds that our funding contributes to something good happening in the world, and what is the value of that good? And if you multiply out the probabilities, how good does that make the work look? How does it compare to our last dollar, and how does it compare to our other work?
If it looks good enough and there aren’t other concerns, we’ll do it. Most of our grants, we’re not actually calculating these figures explicitly, but we’re trying to do something that approximates them. For example, we work on causes that are important, neglected and tractable. And we tend to rate things on importance, neglectedness and tractability, because we think those things are predictive of, and correlated with, the total good accomplished per dollar.
A lot of times you can’t really get a good estimate of how much good you’re doing per dollar, but you can use that idea to guide yourself and to motivate yourself. But I do think that in some ways, in politics, there’s an elevated risk of inaccurate estimation. You should assume an elevated risk that you’re just wrong. If it looks like giving to bed nets to prevent malaria helps people a certain amount, and giving to some very controversial issue that you’re sure you’re right about helps people a certain amount, and they’re the same, probably the bed nets are better, because you’ve probably got some bias towards what you want to believe in politics.
That said, I don’t think things are necessarily balanced. On political issues, people have a lot of reasons for holding the political views they do, other than what’s best for the world as a whole. And so when our goal is what’s best for the world as a whole, it’s not always that complicated to figure out which side is the side to go in on. It’s not always impossible to see. We can and do fund things that are aimed at changing policy. And in some cases, we’ve recommended contributions that are trying to change political outcomes.
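A sketch of the comparison described above: multiply out the probabilities, compare to the last dollar, and apply an extra haircut for the elevated risk of being wrong on contested political issues. Every number here is invented for illustration.

```python
# Sketch of the comparison described above: expected value of a policy grant
# versus a "last dollar" benchmark, with an extra haircut because estimates
# on contested political issues are more likely to be wrong in our favor.
# All numbers are invented for illustration.

def expected_value_per_dollar(p_success: float, value_if_success: float, cost: float) -> float:
    """Expected units of good per dollar for a grant."""
    return p_success * value_if_success / cost

LAST_DOLLAR_VALUE_PER_DOLLAR = 1 / 4_000  # placeholder benchmark (a GiveDirectly-like option)
BIAS_DISCOUNT_FOR_POLITICS = 0.5          # arbitrary haircut for motivated-reasoning risk

policy_grant_ev = expected_value_per_dollar(
    p_success=0.05,              # chance our funding contributes to the policy change
    value_if_success=2_000_000,  # units of good (e.g. life-equivalents) if it happens
    cost=10_000_000,             # grant size in dollars
)
adjusted_ev = policy_grant_ev * BIAS_DISCOUNT_FOR_POLITICS

print("fund it" if adjusted_ev > LAST_DOLLAR_VALUE_PER_DOLLAR else "save the money")
```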
Will: What are your plans for trying to influence other major foundations and philanthropists?
Holden: One of the cool things about Open Phil is that we are trying to do our work in a way where we find out how to help the most people, or do the most good, per dollar. Then we recommend our findings to our main funders, who are Cari and Dustin. But there’s no reason that recommendation would be different for another person.
A lot of times, if there were a Will MacAskill foundation, maybe it would be about what Will wants and its recommendations would not be interesting to other people. Actually, I know your foundation would not have that issue, but many people’s would. We want the lessons we learn and research we accomplish to be applicable for other philanthropists, rather than only our particular funders.
Down the line, I would love it to be the case that we see way more good things to do with money than Cari and Dustin have money. In that case, we’d go out and pitch to other people on it and try to raise far more than Cari and Dustin could give. That’s definitely where we’re trying to go. But it’s not what we’re focused on right now, because we’re still below the giving level we would need just to accomplish the giving goals of Cari and Dustin. We’re focused on meeting those goals for now, and we’re focused on improving our organization.
We want more of a track record, a stronger intellectual framework, and just to be better in general. We’d like something more solid to point to and say, here’s our reason that this is a good way to do philanthropy. We make early moves now: we talk to people who are philanthropists, or who will be philanthropists, but it’s not our big focus right now. I think in a few years, it could be.
Will: You’ve emphasized criminal justice reform, existential risks with bio and AI, animal welfare, and global health. Are there other causes that you think you’ll branch out into over the next couple of years? And if so, what might those be? What are some potential candidates?
Holden: I mean, over the next few years, we’re trying to stay focused when it comes to our grant making. But as we’re doing that, we also want to find more clarity around, again, this question of long-termism versus near-termism and how much money is going to each. I think that will affect how we look for new cause areas in the future.
So I think we will look for new causes in the future, but it’s not our biggest focus in the immediate term. One thing we do currently, however, is a lot of the time we will pick a cause based on, partly, who we can hire for it. We’re very big believers in a people-centric approach to giving. We believe that a lot of times if you support someone who’s really good, it makes an enormous difference. Even if maybe the cause is 10% worse, it can be worth it if the person is way more promising.
We originally had a bunch of policy causes we were interested in hiring for. Criminal justice became a major cause for us, and the other ones didn’t, because we found someone who we were excited to lead our criminal justice reform work—that’s Chloe Cockburn—and we didn’t for some of the other ones. But I would say, one cause we may get more involved in in the future, and I hope we do, is macroeconomic stabilization policy. It’s not the world’s best known cause. The idea is that some of the most important decisions in the world are made at the Federal Reserve, which is trying to balance the risks of inflation and unemployment.
We’ve come to the view that there’s an institutional bias in a particular direction. We believe that there is more inflation aversion than is consistent with a “most good for everyone” attitude. We think some of that bias reflects the politics and pressures around the Federal Reserve. We’ve been interested in macroeconomic stabilization for a while. There’s this not very well-known institution, which is not very well understood and makes esoteric decisions. It’s not a big political issue, but it may have a bigger impact on the world economy and on the working class than basically anything else the government is doing. Maybe even bigger than anything else that anyone is doing.
So I would love to get more involved in that cause but I think to do it really well, we would need someone who’s all the way in. I mean, we would need someone who’s just obsessed with that issue, because it’s very complex.
Will: Do you feel like that’s justified on the near-termist human-centric view, or do you think it’s got potentially very long-run impacts too?
Holden: I think it’s kind of a twofer. We haven’t tried to do the calculations on both axes, but certainly, it seems like it could provide broad-based growth and lower unemployment. There are a lot of reasons to think those might lead to better societal outcomes, such as better broad-based values, which are then reflected in the kinds of policies we enact and the kinds of people we elect.
I also think that if the economy is growing, and especially if that growth is benefiting everyone across the economy (if labor markets are tighter, and if workers have better bargaining power, better lives, better prospects in the future), then global catastrophic risk might decrease in some way. I haven’t totally worked out how the magnitude of that compares to everything else. But I think if we had the opportunity to go bigger on that cause, we would be thinking harder about it.
Will: Could something happen or could you learn something such that you’d say, okay, actually we’re just going to go all in on one cause? Like is that conceivable, or are you going to stay diversified no matter what?
Holden: I think it’s conceivable. We could go all in, but not soon. I think as long as we think of ourselves as an early stage organization, one of the big reasons to not go all in is option value. We get to learn about a lot of different kinds of giving, and we’re building capacity for a lot of things, so I want to have the option to change our minds later.
There’s a bunch of advantages to being spread out across different causes. Maybe in 30 years we’ll just be like, “Well, we’ve been at this forever and we’re not going to change our minds, so now we’re going all in.” And that’s something I can imagine, but I can’t really imagine it happening soon.
Will: What attitudes should a small individual donor have? A couple of thoughts, one is like, well, can they donate to Open Phil or Good Ventures? Another is, what’s the point in me donating, if there’s this huge foundation that I think is very good?
Holden: Individual donors cannot donate to Open Phil at the moment. We just haven’t set that process up. One possible alternative is the Effective Altruism Funds at CEA. Some of our staff manage donor-advised funds that are outside Open Phil. For example, you can give to an animal welfare fund that is run by Lewis, our farm animal welfare program officer, and he will look at what he wasn’t able to fund with Open Phil money but wished he could have, and he’ll use your funds on it.
More broadly, I think donating can definitely do a lot of good, despite Open Phil’s existence. The capital we’re working with is a lot compared to any individual, but it’s not a lot compared to the size of the need in the world, and the amount of good that we can accomplish.
Certainly, given our priorities and the weight that we’re putting on long-termism versus near-termism, we are now pretty confident that we just do not have enough money available to fund the GiveWell top charities to their full capacity.
So, you can donate toward bed nets to prevent malaria, seasonal chemoprevention treatment also to prevent malaria, deworming, and other cool programs. You can donate there and you will get an amazing deal in terms of helping people for a little money. I know some people don’t feel satisfied with that; they think they can do better with long-termism or with an animal-centric view. I think if you’re animal-centric, we also currently have some limits on the budget for animal welfare and you can give to that Effective Altruism Fund or you can look at Animal Charity Evaluators and their recommendations, or just give to farm animal groups that you believe in.
Long-termism right now is the one where it’s the least obvious what someone should do. But I think there are still things to do to reduce global catastrophic risks. Among other things, we’re currently very hesitant to provide more than 50% of any organization’s budget, unless we feel incredibly well positioned to be their only point of accountability.
There are organizations where we say we understand exactly what’s going on, and we’re fine to be the only decision-maker on how the org is doing. But for other orgs, we just don’t want them to be that dependent on us. So there are orgs working on existential risk reduction, global catastrophic risk reduction and effective altruist community building. Most of them we just won’t go over 50%. And so they need, in some sense, to match our money with money from others.
I would say that generally, no matter what you’re into, there’s definitely something good you can do with your money. I still think donating is good.
Will: Are you interested in funding AI strategy, and have you funded it in the past?
Holden: AI strategy is a huge interest of ours. One way of putting it is when we think about potential risk from very advanced AI, we think of two problems that interact with each other and make each other potentially worse. It’s very hard to see the future, but these are things that are worth thinking about because of how big a deal they would be.
One of them is the value alignment problem, and that’s the idea that it may be very hard to build something that’s much smarter than anyone who built it, much better at thinking in every way and much better at optimizing, and still have that thing behave itself, so to speak. It may be very hard to get it to do what its creator intended and do it more intelligently than they would, but not too differently. That’s the value alignment problem. It’s mostly considered a technical problem.
A lot of the research that’s going on is on how we can build AI systems that work with vague goals. Humans are likely to give vague goals to AI. So how can they work with vague, poorly defined goals, and still figure them out in a way that’s in line with the original human intentions?
Also, how can we build AI systems that are robust? We want AI systems that, if they’re trained in one environment and then the environment changes, don’t totally break. They should realize that they’re dealing with something new, so that they don’t do totally crazy things. These are the kinds of ideas that comprise the alignment problem, which can mostly be addressed through technical research.
There’s also this other side we think about, the deployment problem, which is what if you have an extremely powerful general purpose technology that could lead to a really large concentration of power? Then we ask: who wields that power? I mean, is it the government, is it a company, is it a person? Who has the moral right to launch such a thing and to say hey, I have this very powerful thing and I’m going to use it to change the world in some way?
Who has the right to do that, and what kind of outcomes can we expect based on if different groups do it? One of the things that we’re worried about is that you might have a world where, let’s say, two different countries are both trying to be the first to develop a very powerful AI. Because if they can, they believe it’ll help their power, their standing in the world. And because they’re in this race and because they’re competitive with each other, they’re cutting corners and they’re launching as fast as they can. Under this sort of pressure, they might not be very careful about solving the alignment problem. And so they could release a carelessly designed AI, which ends up unaligned. Meaning, the AI could end up behaving in crazy ways and having bugs. And something that’s very intelligent and very powerful and has bugs could be very, very bad.
So AI strategy, in my take, is working on the deployment problem. Reducing the odds that there’s going to be this arms race or whatever, increasing the odds that there’s instead a deliberate wise decision about what powerful AI is going to be used for and who’s going to make that decision. There are a lot of really interesting questions there. We’re super interested in it.
We’ve definitely made grants in the AI strategy area: we’ve supported The Future of Humanity Institute to do work on it. We’re currently investigating a couple of other grants there, too. We’ve put out a job posting for someone who wants to think about AI strategy all day, and doesn’t have another place to go. If we find such a person who’s good enough at it, we could fund them ourselves.
And then also, we support OpenAI to help both with the control problem and the deployment problem. We’re trying to encourage OpenAI as an organization to do a lot of technical safety research, but also to be thinking hey, we’re a company that might lead AI. What are we going to do if we’re there? Who are we going to loop in? We’re going to be in conversations about how this thing should be used. What’s our position going to be on who should use it?
We’ve really encouraged them to build out a team of people who can be thinking about that question all the time, and they are working on that. And so, this is a major interest of ours.
Will: You mentioned one of the key ways in which it could be a worry is if there’s an arms race. Perhaps licitly, if it’s wartime or something. Would you be interested, and how do you think about trying to make grants to reduce the chance of some sort of great power war?
Holden: I think it would be really good to have lower odds of a great power war, just flat out. Maybe the possibility of advanced technologies makes it even more valuable to reduce the odds of a great power war, and that’s an area that we have not spent much time on. I know there is a reasonable amount of foundation interest in promoting peace. And so it’s not immediately neglected at first glance.
That said, a lot of things that look like they’re getting attention at first glance, you refine your model a little bit, you decide what the most promising angle on it is, and you might find that there’s something neglected. So preventing great power war might be an example of a really awesome cause that we haven’t had the capacity to look into. And if someone else saw something great to do, they should definitely not wait for us.
Will: Another cause that some people were asking about was suffering of animals in the wild, whether you might be interested in making grants to improve that issue.
Holden: Sure. Off the bat, you have to ask if there’s anything you could do to improve wild animal welfare that wouldn’t cause a lot of other problems. We could potentially cause a lot of problems; there might be a kind of hubris in intervening in very complex ecosystems that we didn’t create and that we don’t properly understand.
Another problem with working on wild animal welfare is we haven’t seen a lot of shovel-ready actions to take. Something I will say is there’s probably a lot going on where human activity or some other factor is causing wild animals to suffer. There are probably animals in the wild, a really large number of them, that could be a lot better off than they are. Maybe they could just have better lives if not for certain things about the ecosystem they’re in that humans may or may not have caused.
And so I see potential there. But we’ve got the issue I mentioned, where what we can do about it isn’t obvious. It’s not obvious how to write grants to improve the welfare of animals in the wild, without having a bunch of problems come along with them.
It hasn’t been that clear to us, but we are continuing to talk about it. Looking into it in the background is something we may do in the future.
Will: In the 2017 year in review, you said that you’ve actually already been able to see some successes in criminal justice reform and animal welfare. So I’d be interested if you’d talk a bit more about that: what lessons do you feel you’ve learned, and do they transfer to the things where it’s harder to measure success?
Holden: So, we’re early. We’ve only been doing large scale grantmaking for a couple of years. A lot of our grantmaking is on these long timeframes, so it’s a little early to be asking, do we see impact? But I would say we’re seeing early hints of impact. The clearest case is farm animal welfare. We came in when there were a couple of big victories that had been achieved, like a McDonald’s pledge that all their eggs will be cage-free. So there was definitely already momentum.
So we came in, and we poured gasoline on it. We went to all the groups that had been getting these wins and we went to some of the groups that hadn’t and we said, “Would you like to do more? Would you like to grow your staff? Would you like to go after these groups?” And within a year, basically every major grocer and every major fast food company in America had made a cage-free pledge, approximately.
And so hopefully, if those pledges are adhered to (which is a question, and something we work on), 10 years from now you won’t even be able to get eggs from caged chickens in the U.S.; it’ll be very impractical to do so. That’d be nice. I mean, I’m not happy with how the cage-free chickens are treated either, but it’s a lot better. It’s a big step up, and I think it’s also a big morale win for the movement and creates some momentum, because from there, what we’ve been doing now is starting to build the international corporate campaigns. Some of those already existed and some of them didn’t, but we have been funding work all over the world. Next time, we would love to be part of those early wins that got the ball rolling, rather than coming in late and trying to make things go faster.

We’ve seen wins on broiler chickens, which is the next step in the U.S. after layer hens. We’ve seen wins overseas, so that’s been exciting. And these corporate campaigns have been one of the quicker things we funded, because I think a lot of times with corporations, it wouldn’t actually cost them that much to treat their chickens better. Somebody just has to complain loudly about it and then it happens; that seems to be how it goes.
In criminal justice reform, we picked that cause partly because we saw the opportunity to potentially make a difference and get some wins on a lot of other policy causes. One of the early things we noticed was a couple of bipartisan bills in Illinois that we think are going to have quite a large impact on incarceration there and that we believe that our grantees, with our marginal funding, were crucial for.
We’ve also seen the beginning of a mini-wave of prosecutors getting elected—head prosecutors—who have different values from the normal head prosecutors. So instead of being tough on crime, they frame things in different terms. For example, Larry Krasner in Philadelphia has put out a memo to his whole office that says, “When you propose someone’s sentence, you are going to have to estimate the cost of that to the state. It’s like $45,000 a year to put someone in prison, and you’re going to have to explain why it is worth that money to us to put this person in prison. And if you want to start a plea bargain and you want to start it lower than the minimum sentence, you’re going to have to get my permission. I’m the head prosecutor.”
It’s a different attitude. These prosecutors are saying, “My goal is not to lock up as many people as possible, my goal is to balance costs and benefits and do right by my community.” And I think there have been a bunch of orgs and a bunch of funders involved in that. I don’t think any of these are things I would call Open Philanthropy productions. They’re things that we think we helped with, we sped along, we got some share of the work being done.
We’re excited about that. Those are two of the causes that have nearer-term ambitions. I can’t say I’ve seen wins on biosecurity yet, I mean other than the fact that there’s been no pandemic that killed everyone. But I can’t give us any credit for that.
I also think about our small wins now as lessons for the future. I think we’ve seen what’s been working and what’s not in those causes where there’s more action and more things happening. One of the things we are seeing is that our basic setup of giving program officers high autonomy has been working pretty well. A lot of these grants are not the ones I would have come up with. And in some cases, they weren’t even ones I was very excited about beforehand. We have systems for trying to give our program officers the ability to sometimes make grants that we don’t fully agree with, and we try to reduce veto points. We try to reduce the need for total consensus, and have people at the organization make bets that may not be universally agreed to, or even agreed to by me, or Cari and Dustin. And looking at some of the grants that have been effective, I think that’s been a good move. So I will continue to do things that don’t seem right to me, and I’m very excited about that.