Quick thank you for checking in on old predictions. I really appreciate when people do that kind of thing
ClaireZabel
After a few years of missing the mark, this year I’ve exceeded my goal of giving 10% of my income away by a substantial margin (I never took the Giving What We Can pledge, but I still aspire at this point in my career to exceed the 10% bar).
It’s bittersweet, because I think the reason I succeeded is that there seem to be more funding gaps than there were a few years ago. Insofar as that’s about there being more good giving opportunities (which I think it partly is), that’s exciting; but I also think it’s partly due to there being more promising opportunities that aren’t getting funded by large funders or other small funders, for a variety of reasons. And that’s unfortunate.
I got into EA more than ten years ago when the focus was more squarely on giving. I think that the relative switch over to careers has been a great thing and to this day I think the overwhelming majority of my impact will come through my choices about my career rather than my giving. But I’d encourage “old school” EAs like me who haven’t been as focused on personal giving to re-engage with the question of:
Whether they’re giving away an amount that they would endorse, or are falling short
Whether they’re giving to the most impactful causes they can find, or may have gotten into the habit of giving somewhere that isn’t as strong as some of the newer opportunities
Right, but this requires believing the future will be better if humans survive. I take OP’s point to be that she doesn’t agree, or is at least skeptical.
I think the post isn’t clear between the stances “it would make the far future better to end factory farming now” and “the only path by which the far future is net positive requires ending factory farming”, or, more generally, about how much of the claim that we should try to end factory farming now is motivated by “if we can’t do that, we shouldn’t attempt longtermist interventions because they will probably fail” vs. “if we can’t do that, we shouldn’t attempt longtermist interventions because they are less valuable, because the EV of the future is worse”.
Anyway, working to cause humans to survive requires (or at least, is probably motivated by) thinking the future will be better that way. Not all longtermism is about that (see e.g. s-risk mitigation), and those parts are also relevant to the hinge of history question.
I think, again, the point OP is trying to make is that we have very little proof of concept of getting people to go against their best interests. So if doing what’s right isn’t in the AI companies’ best interest, OP wouldn’t believe we can get them to do what we think they should.
I am saying that aligning AI is in the best interests of AI companies, unlike the situation with ending factory farming and animal ag companies, which is a relevant difference. Any AI company that could align their AIs once and for all for $10M would do it in a heartbeat. I don’t think they will do nearly enough to align their AIs given the stakes (so in that sense, their incentives are not humanity’s incentives), but they do want to at least a little.
I think this argument is pretty wrong for a few reasons:
It generalizes way too far… for example, you could say “Before trying to shape the far future, why don’t we solve [insert other big problem]? Isn’t the fact that we haven’t solved [other big problem] bad news about our ability to shape the far future positively?” Of course, our prospects would look more impressive if we had solved many other big problems. But I think it’s an unfair and unhelpful test to pick a specific big problem, notice that we haven’t solved it, and infer that we need to solve it first.
Many, if not most, longtermists believe we’re living near a hinge of history and might have very little time remaining to try to influence it. Waiting until we first ended factory farming would inherently forgo a huge fraction of the time remaining on those views to make a difference.
You say “It is a stirring vision, but it rests on a fragile assumption: that humanity is capable of aligning on a mission, coordinating across cultures and centuries, and acting with compassion at scale.” But that’s not exactly true; I don’t think longtermism rests on the assumption that the best thing to do is try to directly cause that right now (see the hinge of history link above). For example, I’m not sure how we would end factory farming, but it might require, as you allude to, massive global coordination. In contrast, creating techniques to align AIs might require only a relatively small group of researchers, and a small group of AI companies adopting research that is in their best interests to use. To be clear, there are longtermist-relevant interventions that might also require global and widespread coordination, but they don’t all require it (and the ones I’m most optimistic about don’t require it, because global coordination is very difficult).
Related to the above, the problems are just different, and require different skills and resources (and shaping the far future isn’t necessarily harder than ending factory farming; for example, I wouldn’t be surprised if cutting bio x-risk in half ends up being much easier than ending factory farming). Succeeding at one is unlikely to be the best preparation for succeeding at the other.
(I think factory farming is a moral abomination of gigantic proportions, I feel deep gratitude for people who are trying to end it, and dearly hope they succeed.)
I think there are examples supporting many different approaches, and it depends immensely on what you’re trying to do, the levers available to you, and the surrounding context. E.g., in the bolder, more audacious, less cooperative direction, Chiune Sugihara or Oskar Schindler come to mind. Petrov doesn’t seem like a clear example in the “non-reckless” direction, and I’d put Arkhipov in a similar boat (they both acted rapidly under uncertainty in a way the people around them disagreed with, and took responsibility for a whole big situation when it probably would have been very easy to tell themselves that it wasn’t their job to do anything other than obey orders and go with the group).
Thanks so much, Will! (Speaking just for myself) I really liked and agree with much of your post, and am glad you wrote it!
I agree with the core argument that there’s a huge and very important role for EA-style thinking on questions related to making the post-AGI transition go well. I hope EA thought and values play a huge role in research on these questions, both because I think EAs are among the people most likely to address them rigorously (and they are hugely neglected) and because I think EA-ish values are likely to lead to particularly compassionate and open-minded proposals for action on these questions.
Specifically, you cite my post
“EA and Longtermism: not a crux for saving the world”, and my quote
I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.
And say
This may have been a good recommendation at the time; but in the last three years the pendulum has heavily swung the other way, sped along by the one-two punch of the FTX collapse and the explosion of interest and progress in AI, and in my view has swung too far.
I agree with you that in the intervening time, the pendulum has swung too far in the other direction, and am glad to see your pushback.
One thing I want to clarify (that I expect you to agree with):
There’s little in the way of public EA debate; the sense one gets is that most of the intellectual core have “abandoned” EA
I think it’s true that much of the intellectual core has stopped focusing on EA as the path to achieving EA goals. But I think most of the intellectual core continues to hold EA values and pursue their goals for EA reasons (trying to make the world better as effectively as possible, e.g. by trying to reduce AI risk); they’ve just updated against that path involving a lot of focus on EA itself. This makes me feel a lot better about both that core and EA than if much of the old core had decided to leave their EA values and goals behind. I wanted to share this because I don’t think it’s always very externally transparent how many people who have been quieter in EA spaces lately are still working hard, and with dedication, towards making the world better, as they did in the past.
In addition to what Michael said, there are a number of other barriers:
Compared to many global health interventions, AI is a more rapidly-changing field, and many believe we have less time to have an impact, leading to a lot more updates-per-time about cost effectiveness and making each estimate less useful. E.g., interventions like research on mechanistic interpretability can come into and out of fashion within a small number of years. Organizations focused on working with one political party might drop vastly in expected effectiveness after an election, etc. In contrast, GiveWell relies on studies that took longer to conduct than most of the AI safety field has existed (e.g. my understanding is Cisse et al. 2016 took 8 years from start to publication, about 2.5x longer than ChatGPT has existed in any form).
There is probably a much smaller base of small-to-mid-sized donors responsive to these estimates, making them less valuable
There are a large number of quite serious philosophical and empirical complexities associated with comparing GiveWell and longtermist-relevant charities, like your views about population ethics, total utilitarianism vs preference utilitarianism (vs others), the expected number of moral patients in the far future, acausal trade, etc.
[I work at Open Phil on AI safety and used to work at GiveWell, but my views are my own]
Drift isn’t the issue I was pointing at in my comment.
I really appreciate this post! I have a few spots of disagreement, but many more of agreement, and appreciate the huge amount of effort that went into summarizing a very complicated situation with lots of stakeholders over an extended period of time in a way that feels sincere and has many points of resonance with my own experience.
Seconding Ben, I did a similar exercise and got similarly mixed (with stark examples in both directions) results (including in some instances you allude to in the post)
Thanks for sharing this, Tom! I think this is an important topic, and I agree with some of the downsides you mention, and think they’re worth weighing highly; many of them are the kinds of things I was thinking of in this post of mine when I listed these anti-claims:
Anti-claims
(I.e. claims I am not trying to make and actively disagree with)
No one should be doing EA-qua-EA talent pipeline work
I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of great people. However, my guess is that a medium-sized reallocation away from it would be good to try for a few years.
The terms EA and longtermism aren’t useful and we should stop using them
I think they are useful for the specific things they refer to, and we should keep using them in situations where they are relevant and ~ the best terms to use (many such situations exist). I just think we are over-extending them to a moderate degree.
It’s implausible that existential risk reduction will come apart from EA/LT goals
E.g. it might come to seem (I don’t know if it will, but it at least is imaginable) that attending to the wellbeing of digital minds is more important from an EA perspective than reducing misalignment risk, and that those things are indeed in tension with one another.
This seems like a reason people who aren’t EA and just prioritize existential risk reduction are less helpful from an EA perspective than if they also shared EA values all else equal, and like something to watch out for, but I don’t think it outweighs the arguments in favor of more existential risk-centric outreach work.
This isn’t mostly a PR thing for me. Like I mentioned in the post, I actually drafted and shared an earlier version of that post in summer 2022 (though I didn’t decide to publish it for quite a while), which I think is evidence against it being mostly a PR thing. I think the post pretty accurately captures my reasoning at the time, that I think often people doing this outreach work on the ground were actually focused on GCRs or AI risk and trying to get others to engage on that and it felt like they were ending up using terms that pointed less well at what they were interested in for path-dependent reasons. Further updates towards shorter AI timelines moved me substantially in terms of the amount I favor the term “GCR” over “longtermism”, since I think it increases the degree to which a lot of people mostly want to engage people about GCRs or AI risk in particular.
Seriously. Someone should make a movie!
Very strongly agree, based on watching the career trajectory of lots of EAs over the past 10 years. I think focusing on what broad kinds of activities you are good at and enjoy, and what skills you have or are well-positioned to obtain (within limits: e.g. “being a really clear and fast writer” is probably helpful in most cause areas, “being a great salsa dancer” maybe less so), then thinking about how to apply them in the cause area you think is most important, is generally much more productive than trying to entangle that exploration with personal cause prio exercises.
Our impression when we started to explore different options was that one can’t place a trustee on a leave of absence; it would conflict with their duties and responsibilities to the org, and so wasn’t a viable route.
Chiming in from the EV UK side of things: First, +1 to Nicole’s thanks :)
As you and Nicole noted, Nick and Will have been recused from all FTX-related decision-making. And, Nicole mentioned the independent investigation we commissioned into that.
Like the EV US board, the EV UK board is also looking into adding more board members (though I think we are slightly behind the US board), and plans to do so soon. The board has been somewhat underwater with all the things happening (speaking for myself, it’s particularly difficult because a lot of these things affect my main job at Open Phil too, so there’s more urgent action needed on multiple fronts simultaneously).
(The board was actually planning and hoping to add additional board members even before the fall of FTX, but unfortunately those initial plans had to be somewhat delayed while we’ve been trying to address the most time-sensitive and important issues, even though having more board capacity would indeed help in responding to issues that crop up; it’s a bit of a chicken-and-egg dynamic we need to push through.)
Hope this is helpful!
My favorite is probably the movie Colossus: The Forbin Project. For this, I’d also weakly recommend the first section of Life 3.0.
Hey Jack, this comment might help answer your question.
That’s correct. It’s common for large funders of organizations to serve on the boards of organizations they support, and I joined the EVF board partly because we foresaw synergies between the roles (including for me acting as grant investigator on EVF grants). Leadership at both organizations are aware I am in both roles.
Also, though you didn’t ask: I don’t receive any compensation for my work as an EVF board member.
Hey, I wanted to clarify that Open Phil gave most of the funding for the purchase of Wytham Abbey (a small part of the costs were also committed by Owen and his wife, as a signal of “skin in the game”). I run the Longtermist EA Community Growth program at Open Phil (we recently launched a parallel program for EA community growth for global health and wellbeing, which I don’t run) and I was the grant investigator for this grant, so I probably have the most context on it from the side of the donor. I’m also on the board of the Effective Ventures Foundation (EVF).
Why did we make the grant? There are two things I’d like to discuss about this, the process we used/context we were in, and our take on the case for the grant. I’ll start with the former.
Process and context: At the time we committed the funding (November 2021, though the purchase wasn’t completed until April 2022), there was a lot more apparent funding available than there is today, both from Open Phil and from the Future Fund. Existential risk reduction and related efforts seemed to us to have a funding overhang, and we were actively looking for more ways to spend money to support more good work, especially by encouraging more people to dedicate their careers to addressing the associated risks. Also, given the large amounts of funding available at the time, and relatively lower number of grantmakers, we wanted to experiment with decentralizing some influence over funding to other people with experience in the space who understood our long-run priorities but had visions for how to use the funding that were somewhat different from our grantmakers’.
So, we were experimenting with more defaulting to saying “yes” to shovel-ready grant requests that seemed aimed at our core priorities, especially when they didn’t require enormous amounts of funding. (As others have noted and as I’ll explain below, we modeled the amount of funding at stake here as being in the low millions of dollars, rather than the higher sticker price, since it was to purchase a durable asset that can be resold.)
What did we think about this grant? In the abstract, we thought the idea of buying real estate to use for events was a reasonable one.
When evaluating a grant, my team tends to focus on the question “to what extent (per dollar) will this grant result in more promising people focusing their careers on doing as much good as possible in our longtermist focus areas?” (Other Open Philanthropy programs use different criteria.) I and other people on my team have invested a fair amount in collecting data and developing metrics for evaluating the value-for-money of grants aimed at this goal, though also it’s still a work in progress in many ways.
When we surveyed ~200 people involved or interested in longtermist work, we found that many had been strongly influenced by in-person events they’d attended, particularly ones where people relatively new to a topic came into contact with relatively experienced people working full-time in that area. (Some other data and our general experiences in the space are largely supportive of this too). Overall, we feel fairly confident in-person events like workshops, learning retreats, and conferences can be very impactful for people considering big career changes, and have historically often been good value-for-money via helping people interested in our longtermist focus areas make professional connections and deepen their understanding about these topics.
We support projects that cumulatively run dozens of events per year (disproportionately in the Bay Area and UK), and we saw several reasons a venue purchase could be valuable for both increasing the number and quality of impactful events, and potentially saving some money:
The number of events was growing fast.
Suitable venues are often of limited availability and book up far in advance, making more spontaneous events challenging.
Rented venues sometimes have issues that aren’t discovered until an event has begun (and we’ve seen events go substantially worse, or have to execute difficult mid-event moves, because of issues arising with the venues after the event was already underway).
Ops capacity to organize before and after events is a resource we value highly. Organizing and preparing for events takes more time, and may go worse on average, when you need to orient to the layout of different venues, move all the equipment in and out, learn about new areas (and travel to/from them) and the vendors available there, etc.
Renting venues varies a lot in price, but venues for events near top EA hubs can be very expensive. Often, the cost of the venues was in the $30-100k range (though there were also smaller, shorter events aimed at students with venue costs in the $10k-ish range, and large conferences can be much more expensive).
Other locations are often cheaper, but have a more difficult time attracting busy professionals, relative to ones that take place near where they live, and demand more time from ops staff, senior staff, and (sometimes) instructors helping with events (and, we generally highly value all those folks’ time, even if they are willing to travel).
The fact that the funding was going to be used to purchase a durable asset means that for the purposes of our cost-effectiveness analysis, we modeled the financial stakes as being somewhere in the low millions (rather than the meaningfully higher sticker price of the property). Our grantees also stood to potentially save money through the investment, since they wouldn’t end up paying rental costs to run their events. We still wouldn’t have been surprised if the investment turned out net negative from a financial perspective, but losing most of the investment seemed unlikely. Given this model of the costs, we thought it was consistent with our overall experiment in lowering barriers to longtermist funding to “default to yes” for a shovel-ready grant.
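To make the "durable asset" framing concrete, here is a toy expected-value sketch of how an effective cost can come out in the low millions despite a much higher sticker price. All numbers below are hypothetical placeholders for illustration, not the actual figures we used:

```python
# Toy sketch: the effective financial stake in a durable asset is the
# purchase price minus the expected resale value, not the sticker price.
# All figures (in $M) are hypothetical placeholders.

def effective_cost(purchase_price, resale_scenarios):
    """Expected effective cost, given (probability, resale_value) scenarios."""
    total_prob = sum(p for p, _ in resale_scenarios)
    assert abs(total_prob - 1.0) < 1e-9, "scenario probabilities must sum to 1"
    expected_resale = sum(p * v for p, v in resale_scenarios)
    return purchase_price - expected_resale

# Hypothetical: property bought for 15, with three resale scenarios
scenarios = [
    (0.6, 14.0),  # resold near the purchase price
    (0.3, 12.0),  # modest loss on resale
    (0.1, 8.0),   # larger loss, e.g. a market downturn
]
print(effective_cost(15.0, scenarios))  # low millions, not the sticker price
```

Under these made-up assumptions the expected loss is a couple of million dollars, which is the sense in which "losing most of the investment seemed unlikely" even if the investment turned out net negative.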
We discussed Owen’s reasoning behind selecting this kind of venue, and we thought it broadly made sense. The main goal was to have a venue relatively near Oxford with sufficient capacity for the types of events we thought could be highly impactful.
Despite the points above, we were relatively uncertain about this specific opportunity, and there was some internal debate over the amount and structure of our funding; we also expressed our uncertainty to Owen.
We were mainly uncertain about:
Owen being the person driving this forward, despite him not wanting to be the main point person actually helping oversee the property and run relevant events over the next few years
Whether refurbishing and maintaining the property would end up being much more time-consuming and expensive than Owen forecasted
Whether the process was moving too quickly and whether more should be done to investigate the state and expected value of the property, and whether a better opportunity might arise in the coming years
Whether the number and kinds of events (and the value of those events) had been sufficiently well-scoped, and whether this venue would prove a good fit for a sufficient fraction of them.
Internally at OP (including with our communications team), we discussed how this would reflect on us somewhat, but mostly from the perspective of us as a funder, rather than the wider ecosystem as a whole. I think that it was a mistake on my part that the funding conversations didn’t focus more on that broader question, and I regret it, though I think we should have a high bar to rejecting otherwise solid-seeming grants on optics grounds because that kind of approach can be hard to limit and then can end up being surprisingly costly to impact. (My recollection is that we thought of it mainly as a longtermist/existential risk reduction field resource and didn’t think enough about how what was then called CEA providing fiscal sponsorship might make it sound more like an EA project.)
We also told Owen that we would want to check in about venue usage and think about the value it was generating in terms of the community growth metrics we generally use, and discuss selling the property if it didn’t seem like things were going well. (Proceeds from a sale would be used as general funding within EVF, and that funding would replace some of our and other funders’ future grants to EVF.)
Where do I stand on this grant now? With the huge decline in available funds since November 2021, I don’t know whether we’d make this grant again today. I still think it could turn out to be importantly effort-saving and event-increasing-and-improving relative to regularly renting one-off event venues, and it could also lead to a higher number of impactful events being run. But it’s currently too soon to say whether the usage will justify the investment. If we were considering a similar grant now, we’d want to get more into the details of modeling the effective financial cost (incorporating the resale possibility), hammering out plans and predictions for future operations and usage, etc. but I think we might consider it to be good enough value-for-money that we’d want to go forward with some possible projects in this vein.
Why isn’t there a published grant page right now? (This isn’t my domain but) we typically aim to publish grants within three months of when we make our initial payment, but we’re currently working through a backlog of older grants. Wytham is one of many grants in that category.
In my (one) experience with kidney stones, opiates and tamsulosin did nothing, but intravenous ketorolac stopped it right in its tracks (this is also discussed on Reddit). Why shouldn’t kidney stone sufferers just carry ketorolac pills on them and take them if they start feeling kidney stone pain? That’s what I’ve done ever since my kidney stone. That seems a lot more targeted than taking a relatively unknown medication on an ongoing basis (though I wouldn’t be surprised if the ketorolac pills are less effective than the intravenous form and won’t fully avert the pain).
I’m also pretty surprised by the assertion that most kidney stone instances result in tens to hundreds of hours of pain. I would have guessed it was more like a single-digit number of hours to low tens… it seems like a lot of people aren’t being properly medicated to reduce their pain.