Ten months ago I met Australia’s Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don’t want to do it / fund it because EAs are drawn to spreadsheets and google docs (it isn’t their comparative advantage). Hammers like nails etc.
I also think many EAs are still allergic to direct political advocacy, and that this tendency is stronger in more rationalist-ish cause areas such as AI. We shouldn’t forget Yudkowsky’s “politics is the mind-killer”!
I had considered calling the third wave of EA “EA in a World Where People Actually Listen to Us”.
Leopold’s situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self-important, because obviously defense leaders aren’t going to be listening to some random internet charity nerds and changing policy as a result.
Hi Ben! You might be interested to know I literally had a meeting with the Assistant Defence Minister in Australia about 10 months ago off the back of one email. I wrote about it here. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don’t want to do it because EAs are drawn to spreadsheets etc (it isn’t their comparative advantage).
I don’t want to claim all EAs believe the same things, but if the congressional commission had listened to what you might call the “central” EA position, it would not be recommending an arms race because it would be much more concerned about misalignment risk. The overwhelming majority of EAs involved in AI safety seem to agree that arms races are bad and misalignment risk is the biggest concern (within AI safety). So if anything this is a problem of the commission not listening to EAs, or at least selectively listening to only the parts they want to hear.
Maybe instead of “where people actually listen to us” it’s more like “EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn’t exist.”
In most cases this is a rumors-based thing, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least what I have heard is that a bunch of key leaders “basically agreed with the China part of situational awareness”.
Again, people should really take this with a double-dose of salt, I am personally at like 50⁄50 of this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn’t seem crazy (but also various things could have been lost in a game of telephone, and being very concerned about China doesn’t result in endorsing a “Manhattan project to AGI”, though the rumors that I have heard did sound like they would endorse that).
Less rumor-based, I also know that Dario has historically been very hawkish, and “needing to beat China” was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so feel more comfortable saying it with fewer disclaimers, but am still only like 80% on it being true.
Overall, my current guess is that indeed, a large-ish fraction of the EA policy people would have pushed for things like this, and at least didn’t seem like they would push back on it that much. My guess is “we” are at least somewhat responsible for this, and there is much less of a consensus against a U.S.-China arms race among EAs in US governance than one might think, and so the above is not much evidence that there was no listening, or only very selective listening, to EAs.
I looked thru the congressional commission report’s list of testimonies for plausibly EA-adjacent people. The only EA-adjacent org I saw was CSET, which had two testimonies (1, 2). From a brief skim, neither one looked clearly pro- or anti-arms race. They seemed vaguely pro-arms race on vibes but I didn’t see any claims that look like they were clearly encouraging an arms race—but like I said, I only briefly skimmed them, so I could have missed a lot.
This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC (and that EAs were involved in this, because they thought it was true and important, because they thought current false fears in the greater natsec community were enhancing arms race risks) (and this was when Jason was leading CSET, and OP was supporting its founding). Some of the same people were also supportive of export controls, which are more ambiguous-sign here.
The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in everything I have seen written about them was beating China in AI capabilities development.
Of course no one likes a symmetric arms race, but the question is did people favor the “quickly establish overwhelming dominance towards China by investing heavily in AI” or the “try to negotiate with China and not set an example of racing towards AGI” strategy. My sense is many people favored the former (though definitely not all, and I am not saying that there is anything like consensus, my sense is it’s a quite divisive topic).
To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent “AI Security Forum” in Vegas, many x-risk concerned people expressed very hawkish opinions.
Yeah re the export controls, I was trying to say “I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so” (though I used the word “ambiguous” because my impression was that some relevant people saw a pro of that work that it also mostly didn’t directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is some of those people were “trying to establish overwhelming dominance over China” but not by “investing heavily in AI”.
It looks to me like the online EA community, and the EAs I know IRL, have a fairly strong consensus that arms races are bad. Perhaps there’s a divide in opinions with most self-identified EAs on one side, and policy people / company leaders on the other side—which in my view is unfortunate since the people holding the most power are also the most wrong.
(Is there some systematic reason why this would be true? At least one part of it makes sense: people who start AGI companies must believe that building AGI is the right move. It could also be that power corrupts, or something.)
So maybe I should say the congressional commission should’ve spent less time listening to EA policy people and more time reading the EA Forum. Which obviously was never going to happen but it would’ve been nice.
Slightly independent to the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on ‘we need to beat China’ arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an ‘overwhelming majority of EAs involved in AI safety’ disagree with it even now.
So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...
This is the most common question I get on AI safety posts: why isn’t the rationalist / EA / AI safety movement doing this more? It’s a great question, and it’s one that the movement asks itself a lot...
Still, most people aren’t doing this. Why not?
Later, talking about why attempting a regulatory approach to avoiding a race is futile:
The biggest problem is China. US regulations don’t affect China. China says that AI leadership is a cornerstone of their national security—both as a massive boon to their surveillance state, and because it would boost their national pride if they could beat America in something so cutting-edge.
So the real question is: which would we prefer? OpenAI gets superintelligence in 2040? Or Facebook gets superintelligence in 2044? Or China gets superintelligence in 2048?
Might we be able to strike an agreement with China on AI, much as countries have previously made arms control or climate change agreements? This is . . . not technically prevented by the laws of physics, but it sounds really hard. When I bring this challenge up with AI policy people, they ask “Harder than the technical AI alignment problem?” Okay, fine, you win this one.
I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It’s right there in the section that most explicitly talks about policy.
Scott’s last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow from that that we shouldn’t race). But I can see how a politician reading this article wouldn’t see that implication.
Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.
My best explanation is that when working in governance, being pro-China is just very costly, and in particular the combination of believing that AI will be very powerful while also believing there is no urgency to beat China to it seems very anti-memetic in DC, and so people working in the space started adopting those stances.
But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).
(though they are mostly premised on alignment being relatively easy, which seems very wrong to me)
Not just alignment being easy, but alignment being easy with overwhelmingly high probability. It seems to me that pushing for an arms race is bad even if there’s only a 5% chance that alignment is hard.
I think most of those people believe that “having an AI aligned to ‘China’s values’” would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient, if you think there is a greater than 5% chance of China ending up with “aligned AI” instead.
I think that’s not a reasonable position to hold but I don’t know how to constructively argue against it in a short comment so I’ll just register my disagreement.
Like, presumably China’s values include humans existing and having mostly good experiences.
I’m not sure to what extent the Situational Awareness Memo or Leopold himself are representative of ‘EA’.
On the pro-side:
Leopold thinks AGI is coming soon, will be a big deal, and that solving the alignment problem is one of the world’s most important priorities
He used to work at GPI & FTX, and formerly identified with EA
He (probably almost certainly) personally knows lots of EA people in the Bay
On the con-side:
EA isn’t just AI Safety (yet), so having short timelines/high importance on AI shouldn’t be sufficient to make someone an EA?[1]
EA shouldn’t also just refer to a specific subset of the Bay Culture (please), or at least we need some more labels to distinguish different parts of it in that case
Many EAs have disagreed with various parts of the memo, e.g. Gideon’s well received post here
Since his time at EA institutions, he has moved to OpenAI (mixed)[2] and now runs an AGI investment firm.
By self-identification, I’m not sure I’ve seen Leopold identify as an EA at all recently.
This again comes down to the nebulousness of what ‘being an EA’ means.[3] I have no doubts at all that, given what Leopold thinks is the way to have the most impact, he’ll be very effective at achieving that.
Further, on your point, I think there’s a reason to suspect that something like situational awareness went viral in a way that, say, Rethink Priorities’ Moral Weights project didn’t: the promise many people see in powerful AI is power itself, and that’s always going to be interesting for people to follow. So I’m not sure that situational awareness becoming influential makes it more likely that other ‘EA’ ideas will.
I view OpenAI as tending implicitly/explicitly anti-EA. I don’t think there was an explicit ‘purge’; rather, the culture/vision of the company changed such that card-carrying EAs didn’t want to work there any more.
I think he is pretty clearly an EA given he used to help run the Future Fund, or at most an only very recently ex-EA. Having said that, it’s not clear to me this means that “EAs” are at fault for everything he does.
Yeah again I just think this depends on one’s definition of EA, which is the point I was trying to make above.
Many people have turned away from EA (the beliefs, institutions, and community) in the aftermath of the FTX collapse. Even Ben Todd seems to not be an EA by some definitions any more, be that via association or identification. Who is to say Leopold is any different, or has not gone further? What then is the use of calling him EA, or using his views to represent the ‘Third Wave’ of EA?
I guess from my PoV what I’m saying is that I’m not sure there’s much ‘connective tissue’ between Leopold and myself, so when people use phrases like “listen to us” or “How could we have done” I end up thinking “who the heck is we/us?”
How do you know Leopold or anyone else actually influenced the commission’s report? Not that that seems particularly unlikely to me, but is there any hard evidence? EDIT: I text-searched the report and he is not mentioned by name, although obviously that doesn’t prove much on its own.
Seems plausible the impact of that single individual act is so negative that the aggregate impact of EA is negative.
I think people should reflect seriously upon this possibility and not fall prey to wishful thinking (let’s hope speeding up the AI race and making it superpower powered is the best intervention! it’s better if everyone warning about this was wrong and Leopold is right!).
The broader story here is that EA prioritization methodology is really good for finding highly leveraged spots in the world, but there isn’t a good methodology for figuring out what to do in such places, and there also isn’t a robust pipeline for promoting virtues and virtuous actors to such places.
Call me a hater, and believe me, I am, but maybe someone who went to university at 16 and clearly spent most of their time immersed in books is not the most socially developed.
Maybe after they are implicated in a huge scandal that destroyed our movement’s reputation we should gently nudge them to not go on popular podcasts and talk fantastically and almost giddily about how world war 3 is just around the corner. Especially when they are working in a financial capacity in which they would benefit from said war.
Many of the people we have let be in charge of our movement and speak on behalf of it don’t know the first thing about optics or leadership or politics. I don’t think Eliezer Yudkowsky could win a middle school class president race with a million dollars.
I know your point was specifically tailored toward optics and thinking carefully about what we say when we have a large platform, but I think looking back and forward, bad optics and a lack of realpolitik messaging are pretty obvious failure modes of a movement filled with chronically online young males who worship intelligence and research output above all else. I’m not trying to sh*t on Leopold and I don’t claim I was out here beating a drum about the risks of these specific papers, but yeah, I do think this is one symptom of a larger problem. I can barely think of anyone high up (publicly) in this movement who has risen via organizing.
The thing about Yudkowsky is that, yes, on the one hand, every time I read him, I think he surely must be coming across as super-weird and dodgy to “normal” people. But on the other hand, actually, it seems like he HAS done really well in getting people to take his ideas seriously? Sam Altman was trolling Yudkowsky on twitter a while back about how many of the people running/founding AGI labs had been inspired to do so by his work. He got invited to write on AI governance for TIME despite having no formal qualifications or significant scientific achievements whatsoever. I think if we actually look at his track record, he has done pretty well at convincing influential people to adopt what were once extremely fringe views, whilst also succeeding in being seen by the wider world as one of the most important proponents of those views, despite an almost complete lack of mainstream, legible credentials.
Hmm, I hear what you are saying but that could easily be attributed to some mix of
(1) he has really good/convincing ideas
(2) he seems to be a public representative for the EA/LW community to a journalist on the outside.
And I’m responding to someone saying that we are in “phase 3”—that is to say people in the public are listening to us—so I guess I’m not extremely concerned about him not being able to draw attention or convince people. I’m more just generally worried that people like him are not who we should be promoting to positions of power, even if those are de jure positions.
Yeah, I’m not a Yudkowsky fan. But I think the fact that he mostly hasn’t been a PR disaster is striking, surprising and not much remarked upon, including by people who are big fans.
I guess in thinking about this I realize it’s so hard to even know if someone is a “PR disaster” that I probably have just been confirming my biases. What makes you say that he hasn’t been?
Just the stuff I already said about the success he seems to have had. It is also true that many people hate him and think he’s ridiculous, but I think that makes him polarizing rather than disastrous. I suppose you could phrase it as “he was a disaster in some ways but a success in others” if you want to.
(I think the issue with Leopold is somewhat precisely that he seems to be quite politically savvy in a way that seems likely to make him a deca-multi-millionaire and politically influential, possibly at the cost of all of humanity. I agree Eliezer is not the best presenter, but his error modes are clearly enormously different)
I don’t think I was claiming they have the exact same failure modes—do you want to point out where I did that? Rather they both have failure modes that I would expect to happen as a result of selecting them to be talking heads on the basis of wits and research output. Also I feel like you are implying Leopold is evil or something like that, and I don’t agree, but maybe I’m misinterpreting.
He seems like a smooth operator in some ways and certainly is quite different than Eliezer. That being said, I showed my dad (who has become an oddly good litmus test for a lot of this stuff for me, as someone who is somewhat sympathetic to our movement but also a pretty normal 60 year old man in a completely different headspace) the Dwarkesh episode and he thought Leopold was very, very, very weird (and not because of his ideas). He kind of reminds me of Peter Thiel. I’ll completely admit I wasn’t especially clear in my points and that mostly reflects my own lack of clarity on the exact point I was trying to get across.
I think I take back like 20% of what I said (basically to the extent I was making a very direct stab at what exactly that failure mode is) but mostly still stand by the original comment, which again I see as being approximately ~ “Selecting people to be the public figureheads of our movement on the basis of wits and research output is likely to be bad for us”.
I’d love to see an ‘Animal Welfare vs. AI Safety/Governance Debate Week’ happening on the Forum. The risks-from-AI cause has grown massively in importance in recent years, and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or mitigating existential risks related to AI. It would help to have rich discussions comparing both causes’ current priorities and bottlenecks, and a debate week would hopefully expose some useful crucial considerations.
How tractable is improving (moral) philosophy education in high schools?
tldr: Do high schools still neglect ethics / moral philosophy in their curriculums? Mine did (year 2012). Are there tractable ways to improve the situation, through national/state education policy or reaching out to schools and teachers? Has this been researched / tried before?
The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: Moral philosophy. 90min/week “Christian Religion” was the default for everyone, in which we spent most of the time interpreting stories from the bible, most of which to me felt pretty irrelevant to the present. This was in 2012 in Germany, a country with more atheists than Christians as of 2023, and even in 2012 my best guess is that <20% of my classmates were practicing a religion.
Only in grade 10 did we get the option to switch to secular Ethics classes instead, which only <10% of the students did (Religion was considered less work).
Ethics class quickly became one of my favorite classes. For the first time in my life I had a regular group of people equally interested in discussing Vegetarianism and other such questions (almost everyone in my school ate meat, and vegetarians were sometimes made fun of). Still, the curriculum wasn’t great, we spent too much time with ancient Greek philosophers and very little time discussing moral philosophy topics relevant to the present.
How have your experiences been in high school? I’m especially curious about more recent experiences.
Are there tractable ways to improve the situation? Has anyone researched this?
1) Could we get ethics classes in the mandatory/default curriculum in more schools? Which countries or states seem best for that? In Germany, education is state-regulated—which German state might be most open to this? Hamburg? Berlin?
2) Is there a shortage in ethics teachers (compared to religion teachers)? Can we get teachers more interested in teaching ethics?
3) Are any teachers here teaching ethics? Would you like to connect more with other (EA/ethics) teachers? We could open a WhatsApp group, if there’s not already one.
In England, secular ethics isn’t really taught until Year 9 (age 13-14) or Year 10, as part of Religious Studies classes. Even then, it might be dependent on the local council, the type of school or even the exam boards/modules that are selected by the school. And by Year 10, students in some schools can opt out of taking religious studies for their GCSEs.
Anecdotally, I got into EA (at least earlier than I would have) because my high school religious studies teacher (c. 2014) could see that I had utilitarian intuitions (e.g. in discussions about animal experimentation and assisted dying) and gave me a copy of Practical Ethics to read. I then read The Life You Can Save.
I went to high school in the USA, in the 2000s, so it has been roughly twenty years. I attended a public high school that wasn’t particularly well-funded nor impoverished. There were no ethics or philosophy courses offered. There was no education on moral philosophy, aside from what is gained through literature in an English class (such as reading Lord of the Flies or Fahrenheit 451 or To Kill a Mockingbird).
There is a Facebook group for EA Education, but my impression is that it isn’t very active.
My (uninformed, naïve) guess is that this isn’t very tractable, because education tends to be controlled by the government and there are a lot of vested interests. The argument would basically be “why should we teach these kids about being a good person when we could instead use that time to teach them computer programming/math/engineering/language/civics?” It is a crowded space with a lot of competing interests already.
Charter schools are a real option in many places. In Chicago, if you have money and wherewithal, you can open a charter school and basically teach whatever you want. The downside here is you will not be able to get the top students in the city to go to your school, because there are already a select few incredible public and private schools.
(Haven’t thought about this really, might be very wrong, but have this thought and seems good to put out there.) I feel like putting 🔸 at the end of social media names might be bad. I’m curious what the strategy was.
The willingness to do this might be anti-correlated with status. It might be a less important part of identity of more important people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)
I’d guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge, too).
Anti-correlation with status might mean that people will identify the pledge with average though altruistic Twitter users, not with cool people they want to be more like.
You won’t see a lot of e/accs putting the 🔸 in their names. There might be downside effects of perception of a group of people as clearly outlined and having this as an almost political identity; it seems bad to have directionally-political properties that might do mind-killing things both to people with 🔸 and to people who might argue with them.
Recently, I’ve encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy’s Global Catastrophic Risks (GCR) team does or doesn’t fund and why, especially re: our AI-related grantmaking. So, I’d like to briefly clarify a few things:
Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can’t be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don’t play to our comparative advantages.
Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I’ve seen or heard about why we didn’t fund something are wrong. (Similarly, us choosing to fund someone doesn’t mean we endorse everything about them or their work/plans.)
Very often, when we decline to do or fund something, it’s not because we don’t think it’s good or important, but because we aren’t the right team or organization to do or fund it, or we’re prioritizing other things that quarter.
As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages — whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space.
While Good Ventures is Open Philanthropy’s largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested to hear about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropists.) On the GCR side, we have helped move tens of millions of non-GV money to GCR-related organizations in just the past year, including some organizations that GV recently exited. GV and each of those other funders have their own preferences and restrictions we have to work around when recommending funding opportunities.
Among the AI funders we advise, Good Ventures is among the most open and flexible funders.
We’re happy to see funders enter the space even if they don’t share our priorities or work with us. When more funding is available, and funders pursue a broader mix of strategies, we think this leads to a healthier and more resilient field overall.
Many funding opportunities are a better fit for non-GV funders, e.g. due to funder preferences, restrictions, scale, or speed. We’ve also seen some cases where an organization can have more impact if they’re funded primarily or entirely by non-GV sources. For example, it’s more appropriate for some types of policy organizations outside the U.S. to be supported by local funders, and other organizations may prefer support from funders without GV/OP’s past or present connections to particular grantees, AI companies, etc. Many of the funders we advise are actively excited to make use of their comparative advantages relative to GV, and regularly do so.
We are excited for individuals and organizations that aren’t a fit for GV funding to apply to some of OP’s GCR-related RFPs (e.g. here, for AI governance). If we think the opportunity is strong but a better fit for another funder, we’ll recommend it to other funders.
To be clear, these other funders remain independent of OP and decline most of our recommendations, but in aggregate our recommendations often lead to target grantees being funded.
We believe reducing AI GCRs via public policy is not an inherently liberal or conservative goal. Almost all the work we fund in the U.S. is nonpartisan or bipartisan and engages with policymakers on both sides of the aisle. However, at present, it remains the case that most of the individuals in the current field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
OP’s AI teams spend almost no time directly advocating for specific policy ideas. Instead, we focus on funding a large ecosystem of individuals and organizations to develop policy ideas, debate them, iterate them, advocate for them, etc. These grantees disagree with each other very often (a few examples here), and often advocate for different (and sometimes ~opposite) policies.
We think it’s fine and normal for grantees to disagree with us, even in substantial ways. We’ve funded hundreds of people who disagree with us in a major way about fundamental premises of our GCRs work, including about whether AI poses GCR-scale risks at all (example).
I think frontier AI companies are creating enormous risks to humanity, I think their safety and security precautions are inadequate, and I think specific reckless behaviors should be criticized. AI company whistleblowers should be celebrated and protected. Several of our grantees regularly criticize leading AI companies in their official communications, as do many senior employees at our grantees, and I think this happens too infrequently.
Relatedly, I think substantial regulatory guardrails on frontier AI companies are needed, and organizations we’ve directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose (alongside some policies they tend to support).
I’ll also take a moment to address a few misconceptions that are somewhat less common in EA or rationalist spaces, but seem to be common elsewhere:
Discussion of OP online and in policy media tends to focus on our AI grantmaking, but AI represents a minority of our work. OP has many focus areas besides AI, and has given far more to global health and development work than to AI work.
We are generally big fans of technological progress. See e.g. my post about the enormous positive impacts from the industrial revolution, or OP’s funding programs for scientific research, global health R&D, innovation policy, and related issues like immigration policy. Most technological progress seems to have been beneficial, sometimes hugely so, even though there are some costs and harms along the way. But some technologies (e.g. nuclear weapons, synthetic pathogens, and superhuman AI) are extremely dangerous and warrant extensive safety and security measures rather than a “move fast and break [the world, in this case]” approach.
We have a lot of uncertainty about how large AI risk is, exactly which risks are most worrying (e.g. loss of control vs. concentration of power), on what timelines the worst-case risks might materialize, and what can be done to mitigate them. As such, most of our funding in the space has been focused on (a) talent development, and (b) basic knowledge production (e.g. Epoch AI) and scientific investigation (example), rather than work that advocates for specific interventions.
I hope these clarifications are helpful, and lead to fruitful discussion, though I don’t expect to have much time to engage with comments here.
[1] Several of our grantees regularly criticize leading AI companies in their official communications
[2] organizations we’ve directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose
I think it might be a good idea to taboo the phrase “OP is funding X” (at least when talking about present day Open Phil).
Historically, OP would have used the phrase “OP is funding X” to mean “referred a grant to X to GV” (which was approximately never rejected). One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV (and as such, the word people used to describe OP not referring a grant to GV was “rejecting X” or “defunding X”).
Of course, now that the relationship between OP and GV has substantially changed, and the trust has broken down somewhat, the term “OP is funding X” is confusing (including IMO in your comment, where in your last few bullet points you talk about “OP has given far more to global health than AI” when I think to not confuse people here, it would be good to say “OP has recommended far more grants to global health”, since OP itself has not actually given away any money directly, and in the rest of your comment you use “recommend”).
I think the key thing for people to understand is why it no longer makes sense to talk about “OP funding X”, and where it makes sense to model OP grant-referrals to GV as still closely matching OPs internal cost-effectiveness estimates.[1]
For organizations and funders trying to orient towards the funding ecosystem, the most important thing is understanding what GV is likely to fund on behalf of an OP recommendation. So when people talk about “OP funding X” or “OP not funding X” that is what they usually refer to (and that is also again how OP has historically used those words, and how you have used those words in your comment). I expect this usage to change over time, but it will take a while (and would ask for you to be gracious and charitable when trying to understand what people mean when they conflate OP and GV in discussions).[2]
Now having gotten that clarification out of the way, my guess is most of the critiques that you have seen about OP funding are basically accurate when seen through this lens (though I don’t know what critiques you are referring to, since you aren’t being specific). As an example, as Jason says in another comment, it does look like GV has a very limited appetite for grants to right-of-center organizations, and since (as you say yourself) the external funders reject the majority of grants you refer to them, this de-facto leads to a large reduction of funding, and a large negative incentive for founders and organizations who are considering working more with the political right.
I think your comment is useful, and helps people understand some of how OP is trying to counteract the ways GV’s withdrawal from many crucial funding areas has affected things, which I am glad about. I do also think your comment has far too much of the vibe of “nothing has changed in the last year” and “you shouldn’t worry too much about which areas GV wants or want to not fund”. De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing, and the dynamics between OP and non-GV funders are drastically different than the dynamics historically between OP and GV.
I think a better intuition pump for people trying to understand the funding ecosystem would be a comment that is scope-sensitive in the relevant ways. I think it would start with saying:
Yes, over the last 1-2 years our relationship to GV has changed, and I think it no longer really makes sense to think about OP ‘funding X’. These days, especially in the catastrophic risk space, it makes more sense to think of OP as a middleman between grantees and other foundations and large donors. This is a large shift, and I think understanding how that shift has changed funding allocation is of crucial importance when trying to predict which projects in this space are underfunded, and what new projects might be able to get funding.
95%+ of recommendations we make are to GV. When GV does not want to fund something, it is up to a relatively loose set of external funders we have weaker relationships with to make the grant, and will hinge on whether those external funders have appetite for that kind of grant, which depends heavily on their more idiosyncratic interests and preferences. Most grants that we do not refer to GV, but would like to see funded, do not ultimately get funded by other funders.[3]
[Add the rest of your comment, ideally explaining how GV might differ from OP here[4]]
And another dimension to track is “where OPs cost-effectiveness estimates are likely to be wrong”. I think due to the tricky nature of the OP/GV relationship, I expect OP to systematically be worse at making accurate cost-effectiveness estimates where GV has strong reputation-adjacent opinions, because of course it is of crucial importance for OP to stay “in-sync” with GV, and repeated prolonged disagreements are the kind of thing that tend to cause people and organizations to get out of sync.
Of course, people might also care about the opinions of OP staff, as people who have been thinking about grantmaking for a long time, but my sense is that in as much as those opinions do not translate into funding, that is of lesser importance when trying to identify neglected niches and funding approaches (but still important).
I don’t know how true this is and of course you should write what seems true to you here. I currently think this is true, but also “60% of grants referred get made” would not be that surprising. And also of course this is a two-sided game where OP will take into account whether there are any funders even before deciding whether to evaluate a grant at all, and so the ground truth here is kind of tricky to establish.
For example, you say that OP is happy to work with people who are highly critical of OP. That does seem true! However, my honest best guess is that it’s much less true of GV, and being publicly critical of GV and Dustin is the kind of thing that could very much influence whether OP ends up successfully referring a grant to GV, and to some degree being critical of OP also makes receiving funding from GV less likely, though much less so. That is of crucial importance to know for people when trying to decide how open and transparent to be about their opinions.
I agree about tabooing “OP is funding…”; my team is undergoing that transition now, leading to some inconsistencies in our own usage, let alone that of others.
Re: “large negative incentive for founders and organizations who are considering working more with the political right.” I’ll note that we’ve consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding. Plus, GV can and does directly fund lots of work that “engages with the right” (your phrasing), e.g. Horizon fellows and many other GV grantees regularly engage with Republicans, and seem likely to do even more of that on the margin given the incoming GOP trifecta.
Re: “nothing has changed in the last year.” No, a lot has changed, but my quick-take post wasn’t about “what has changed,” it was about “correcting some misconceptions I’m encountering.”
Re: “De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing.” This isn’t true, including specifically for my team (“AI governance and policy”).
I also don’t think this was ever true: “One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV.” There’s plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.
Re: “nothing has changed in the last year.” No, a lot has changed, but my quick-take post wasn’t about “what has changed,” it was about “correcting some misconceptions I’m encountering.”
Makes sense. I think it’s easy to point out ways things are off, but in this case, IMO the most important thing that needs to happen in the funding ecosystem is people grappling with the huge changes that have occurred, and I think a lot of OP communication has been actively pushing back on that (not necessarily intentionally, I just think it’s a tempting and recurring error mode for established institutions to react to people freaking out with a “calm down” attitude, even when that’s inappropriate, cf. CDC and pandemics and many past instances of similar dynamics)
In particular, I am confident the majority of readers of your original comment interpreted what you said as meaning that GV has no substantial dispreference for right-of-center grants, which I think was substantially harmful to the epistemic landscape (though I am glad that further prodding by me and Jason cleared that up).
I’ll note that we’ve consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding.
I don’t currently believe this, and think you are mostly not exposed to most people who could be doing good work in the space (which is downstream of a bunch of other choices OP and GV made), and also overestimate the degree to which OP is helpful in getting the relevant projects funding (I know of 1-2 projects in this space which did ultimately get funding, where OP was a bit involved, but my sense is was overall slightly anti-helpful).
If you know people who could do good work in the space, please point them to our RFP! As for being anti-helpful in some cases, I’m guessing that was cases where we thought the opportunity wasn’t a great opportunity despite it being right-of-center (which is a point in favor, in my opinion), but I’m not sure.
Re: “De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing.” This isn’t true, including specifically for my team (“AI governance and policy”).
I would take bets on this! It is of course important to assess counterfactualness of recommendations from OP. If you recommend a grant a funder would have made anyways, it doesn’t make any sense to count that as something OP “influenced”.
With that adjustment, I would take bets that more than 90% of influence-adjusted grants from OP in 2024 will have been made by GV (I don’t think it’s true in “AI governance and policy” where I can imagine it being substantially lower, I have much less visibility into that domain. My median for all of OP is 95%, but that doesn’t imply my betting odds, since I want at least a bit of profit margin).
Happy to refer to some trusted third-party arbiter for adjudicating.
Sure, my guess is OP gets around 50%[1] of the credit for that and GV is about 20% of the funding in the pool, making the remaining portion a ~$10M/yr grant ($20M/yr for 4 years of non-GV funding[2]). GV gives out ~$600M[3] grants per year recommended by OP, so to get to >5% you would need the equivalent of 3 projects of this size per year, which I haven’t seen (and don’t currently think exist).
Even at 100% credit, which seems like a big stretch, my guess is you don’t get over 5%.
To substantially change the implications of my sentence I think you need to get closer to 10%, which I think seems implausible from my viewpoint. It seems pretty clear the right number is around 95% (and IMO it’s bad form given that to just respond with a “this was never true” when it’s clearly and obviously been true in some past years, and it’s at the very least very close to true this year).
Mostly chosen for Schelling-ness. I can imagine it being higher or lower. It seems like lots of other people outside of OP have been involved, and the choice of area seems heavily determined by what OP could get buy-in for from other funders, seeming somewhat more constrained than other grants, so I think a lower number seems more reasonable.
I have also learned to really not count your chickens before they are hatched with projects like this, so I think one should discount this funding by an expected 20-30% for a 4-year project like this, since funders frequently drop out and leadership changes, but we can ignore that for now
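To spell out the arithmetic above explicitly (this is just my reading of the rough figures already stated in the comment, not exact numbers):

\[
0.5 \times \$20\text{M/yr} \approx \$10\text{M/yr of influence-adjusted non-GV funding per project of this size}
\]
\[
\frac{3 \times \$10\text{M/yr}}{\$600\text{M/yr} + 3 \times \$10\text{M/yr}} \approx 4.8\%
\]

So it would take roughly three such projects per year before the non-GV share of OP-influenced giving approaches ~5%.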
I also don’t think this was ever true: “One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV.” There’s plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.
I used the double negative here very intentionally. Funding recommendations don’t get made by majority vote, and there isn’t such a thing as “the Open Phil view” on a grant, but up until 2023 I had long and intense conversations with staff at OP who said that it would be very weird and extraordinary if OP rejected a grant that most of its staff considered substantially more cost-effective than your average grant.
That of course stopped being true recently (and I also think past OP staff overstated a bit the degree to which it was true previously, but it sure was something that OP staff actively reached out to me about and claimed was true when I disputed it). You saying “this was never true” is in direct contradiction to statements made by OP staff to me up until late 2023 (bar what people claimed were very rare exceptions).
Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US “right-of-center”[1] policy work to GV, I would be somewhat surprised that this well-written post didn’t say that.
Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It’s generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and it seems that would be the case. However, where the funder is as critical to an ecosystem as GV is here, I think fairly high transparency about the unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand if the 800-pound gorilla funder will be closed to them.
Good Ventures did indicate to us some time ago that they don’t think they’re the right funder for some kinds of right-of-center AI policy advocacy, though (a) the boundaries are somewhat fuzzy and pretty far from the linked comment’s claim about an aversion to opportunities that are “even slightly right of center in any policy work,” (b) I think the boundaries might shift in the future, and (c) as I said above, OP regularly recommends right-of-center policy opportunities to other funders.
Also, I don’t actually think this should affect people’s actions much because: my team has been looking for right-of-center policy opportunities for years (and is continuing to do so), and the bottleneck is “available opportunities that look high-impact from an AI GCR perspective,” not “available funding.” If you want to start or expand a right-of-center policy group aimed at AI GCR mitigation, you should do it and apply here! I can’t guarantee we’ll think it’s promising enough to recommend to the funders we advise, but there are millions (maybe tens of millions) available for this kind of work; we’ve simply found only a few opportunities that seem above-our-bar for expected impact on AI GCR, despite years of searching.
What’s a realistic, positive vision of the future worth fighting for? I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don’t have a grand vision of the future I am trying to contribute to. There are so many different problems and uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump’s victory, the war in Ukraine, increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don’t know what’s on the other side. There are so many things that I don’t want to happen that I’ve lost sight of what I do want to happen. I don’t want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
One possible way of thinking about this, which might tie your work in smaller battles into a ‘big picture’, is if you believe that your work on the smaller battles is indirectly helping the wider project. e.g. by working to solve one altruistic cause you are sparing other altruistic individuals and altruistic resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially by increasing altruistic resources available in the future.[1]
Note that I’m only saying this is a possible way of thinking about this, not necessarily that you should think this (for one thing, the extent to which this is true probably varies across areas, depending on the inter-connectedness of different cause areas in different ways and their varying flowthrough effects).
As in this passage from one of Yudkowsky’s short stories:
“But time passed,” the Confessor said, “time moved forward, and things changed.” The eyes were no longer focused on Akon, looking now at something far away. “There was an old saying, to the effect that while someone with a single bee sting will pay much for a remedy, to someone with five bee stings, removing just one sting seems less attractive. That was humanity in the ancient days. There was so much wrong with the world that the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere. And yet… and yet...”
“There was a threshold crossed somewhere,” said the Confessor, “without a single apocalypse to mark it. Fewer wars. Less starvation. Better technology. The economy kept growing. People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from. They came even to me, in my time, and rescued me. Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it. Humanity finally got its act together.”
I find videos about space colonization pretty inspiring. Of course, space colonization would ideally be paired with some level of suffering abolition, so we aren’t spreading needless suffering to other planets. Space colonization could help with political discord, since people with different ideas of a “good society” can band together and peacefully disperse through the solar system. If you think traveling the world to experience different cultures is fun, I expect visiting other planets to experience different cultures will be even better. On the AI front, rumor has it that scaling is slowing down… that could grant more time for alignment work, and increase the probability that an incredible future will come to pass.
On some level I think the answer is always the same, regardless of the headwinds or tailwinds: you do what you can with your limited resources to improve the world as much as you can. In some sense I think slowing the growth of factory farming in a world where it was growing is the same as a world where it is stagnant and we reduce the number of animals raised. In both worlds there’s a reduction in suffering. I wrote a creative piece on this exact topic here if that is at all appealing.
I also think on the front of factory farming we focus too much on the entire problem, and not enough on how good the wins are in and of themselves.
I don’t have a suggestion, but I’ve been encouraged and “heartwarmed” by the diverse range of responses below. Cool to see people with different ways of holding their hope and motivation, whether it’s enough for us to buy a bed net tomorrow, or we do indeed have grander plans and visions, or we’re skeptical about whether “future designing” is a good idea at all.
It might be too hard to envision an entire grand future, but it’s possible to envision specific wins in the short and medium-term. A short-term win could be large cage-free eggs campaigns succeeding, a medium-term win could be a global ban on caged layer hens. Similarly a short-term win for AI safety could be a specific major technical advance or significant legislation passed, a medium-term win could be AGIs coexisting with humans without the world going to chaos, while still having massive positive benefits (e.g. a cure to Alzheimer’s).
I think the problem is that I just don’t have a grand vision of the future I am trying to contribute to.
For what it’s worth, I’m skeptical of approaches that try to design the perfect future from first principles and make it happen. I’m much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn’t cure all illness.)
How much we can help doesn’t depend on how awful or how great the world is: we can save the drowning child whether there are a billion more who are drowning or a billion more who are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of “EA specific” things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse; their lives are worth it to them, no matter what the future will be.
And just to be clear, I am doing quite well generally. I think I used to repress my empathy because it just felt too painful. But it was controlling me subconsciously by constantly nagging me to do altruistic things. Nowadays, I sometimes connect to my empathy and it can feel overwhelming, like it did yesterday. But I think it’s for the better long-term.
Thanks. Yeah, I now agree that it’s better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents’ stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future.
Grand ideologies and religions are alluring, though, because they give a sense of being part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely.
The person in the private message also told me about the serenity prayer: “grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference.”
Maybe this is a cop-out, but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Some others might live lower-tech in eco villages. Animals will be free to pursue their goals. And each of these people will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project—I think most people’s dreams of a future are pretty wild and not something I would want for myself!
I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy’s CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.
I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don’t want to give something up to fund charities, as donating would require, but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.
I love the idea in your talk! I can imagine it changing the world a lot and that feels motivating. I wonder if more Founders Pledge members could be convinced to do this.
I think eventually, working on changing the EA introductory program is important. I think it is an extremely good thing to do well, and I think it could be improved. I’m running a 6 week version right now, and I’ll see if I feel the same way at the end.
I mostly shortened it; I think the main reasons I have are university-level specific. I feel like there is a not-insignificant number of people who would commit to a 6-week fellowship but not an 8-week one. Also, there is not enough focus on the wider EA community; I feel like this should be more emphasized.
I’ve had a couple of organisations ask me to clarify the Donation Election’s vote-brigading rules. Understandably, they want to promote the Donation Election amongst their supporters, but they aren’t sure to what extent this counts as vote-brigading. The answer is: it depends.
We want to avoid the Donation Election becoming a popularity contest or favouring the candidates with bigger networks. Neither popularity nor size of network is perfectly correlated with impact.
If you’d like to reach out to your audience, feel free, but please don’t tell them to vote for you. You can explain the event and mention that you are a candidate, but we want the votes to reflect the Forum audience’s opinions of the marginal impact of money donated to these charities, not the strength of candidates’ networks.
I’m aware this exhortation won’t do all the work; we will also be looking into voting patterns, and new accounts (made after October 22, when the election was announced) won’t be eligible to vote.
Interesting lawsuit; thanks for sharing! A few hot (unresearched, and very tentative) takes, mostly on the Musk contract/fraud type claims rather than the unfair-competition type claims related to x.ai:
One of the overarching questions to consider when reading any lawsuit is that of remedy. For instance, the classic remedy for breach of contract is money damages . . . and the potential money damages here don’t look that extensive relative to OpenAI’s money burn.
Broader “equitable” remedies are sometimes available, but they are more discretionary and there may be some significant barriers to them here. Specifically, a court would need to consider the effects of any equitable relief on third parties who haven’t done anything wrongful (like the bulk of OpenAI employees, or investors who weren’t part of an alleged conspiracy, etc.), and consider whether Musk unreasonably delayed bringing this lawsuit (especially in light of those third-party interests). As a hot take, I am inclined to think these factors would weigh powerfully against certain types of equitable remedies.
Stated more colloquially, the adverse effects on third parties and the delay (“laches”) would favor a conclusion that Musk will have to be content with money damages, even if they fall short of giving him full relief.
Third-party interests and delay may be less of a barrier to equitable relief against Altman himself.
Musk is an extremely sophisticated party capable of bargaining for what he wanted out of his grants (e.g., a board seat), and he’s unlikely to get the same sort of solicitude on an implied-contract theory that an ordinary individual might. For example, I think it was likely foreseeable in 2015 to January 2017 (when he gave the bulk of the funds in question) that pursuing AGI could be crazy expensive and might require more commercial relationships than your average non-profit would ever consider. So I’d be hesitant to infer many implied-contractual constraints on OpenAI’s conduct beyond what section 501(c)(3) of the Internal Revenue Code and California non-profit law require.
The fraud theories are tricky because the temporal correspondence between accepting the bulk of the funds and the alleged deceit feels shaky here. By way of rough analogy, running up a bunch of credit card bills you never intended to pay back is fraud. Running up bills and then later deciding that you aren’t going to pay them back is generally only a contractual violation. I’m not deep into OpenAI drama, but a version of the story in which the heel turn happened later in the game than most/all of Musk’s donations and assistance seems plausible to me.
Is there a maximum effective membership size for EA?
@Joey 🔸 spoke at EAGx last night and one of my biggest take-aways was the (controversial maybe) take that more projects should decline money.
This resonates with my experience: constraint is a powerful driver of creativity, and with less constraint you do not necessarily get more creativity (or more positive output).
Does the EA movement, in terms of number of people, have a similar dynamic within society? At what growth rate does expanding membership stop being optimal? Zillions of factors to consider, of course, but… something maybe fun to ponder.
I had it hammered into me during training as a crisis supporter and I still burnt out.
Now I train others, have seen it hammered into them and still watch countless of them burn out.
I think we need to switch at least 60% of compassion fatigue focus to compassion satisfaction.
Compassion satisfaction is the warm feeling you receive when you give something meaningful to someone. If you’re “doing good work”, I think that feeling (and its absence) ought to be spoken about much more.
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of “real people,” alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or “ethics of care” or concern for justice that lead people to alternatives like mutual aid and political activism.
My go-to reaction to this critique has become something like “well you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war, those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It’s a very good rebuttal to the “cold and heartless utilitarianism/pascal’s mugging” critique.
But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book about living every human life in sequential order reminded me of this. I wish there were more people responding to the “longtermism is cold and heartless” critique by making the case that no, longtermism at face value is worth preserving because it’s the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.
(I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That’s not out of the question. I just don’t think it’s the only way to reach that conclusion.)
I want to slightly push back against this post in two ways:
I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy—I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than >99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
Longtermists make tradeoffs between other common values and helping vast future populations, tradeoffs that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs as longtermists. I think the EA value of “doing a lot more good matters a lot more” is really important, but it is still trading off against other values.
Helping people closer to you / in your community: many people think this has inherent value
Beneficentrism: most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and they would mostly not think it is better to make more of an overall difference by e.g. subsidizing eyeglasses in Bangladesh.
Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfare, because they place inherent value on justice. Both longtermists and GiveWell think they’re similarly good modulo secondary consequences and decision theory.
Discount rate, risk aversion, etc.: There is no reason that having a 10% chance of saving 100 lives in 6,000 years is better than a 40% chance of saving 5 lives tomorrow, if you don’t already believe in zero-discount expected value as the metric to optimize. The reason to believe in zero-discount expected value is a thought experiment involving the veil of ignorance, or maybe the VNM theorem. It is not caring doing the work here because both can be very caring acts, it is your belief in the thought experiment connecting your caring to the expected value.
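To spell out the zero-discount comparison in that example (a minimal sketch; the 10% × 100 and 40% × 5 figures are the ones above, while the discount-rate threshold at the end is my own back-of-the-envelope addition):

```latex
\begin{align*}
\mathrm{EV}_{\text{far}}  &= 0.10 \times 100 = 10 \ \text{expected lives (in 6{,}000 years)} \\
\mathrm{EV}_{\text{near}} &= 0.40 \times 5 = 2 \ \text{expected lives (tomorrow)} \\
% With an annual discount rate r, the far-future option instead becomes:
\mathrm{EV}_{\text{far}}(r) &= \frac{0.10 \times 100}{(1+r)^{6000}}
\end{align*}
% This falls below 2 once (1+r)^{6000} > 5, i.e. for any r above roughly 0.00027
% (about 0.03% per year), which is why the zero-discount assumption does so much work.
```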
In conclusion, I think that while care and empathy can be an important motivator for longtermists, and it is valid for us to think of longtermist actions as the ultimate act of care, we are motivated by a conjunction of empathy/care and other attributes, and it is the other attributes that are by far more important. For someone who has empathy/care and values beneficentrism and scope-sensitivity, preventing an extinction-level pandemic is an important act of care; for someone like me or a utilitarian, pandemic prevention is also an important act. But for someone who values justice more, applying more care does not make them prioritize pandemic prevention over helping a sex trafficking victim, and in the larger altruistically-inclined population, I think a greater focus on care and empathy conflicts with longtermist values more than it contributes to them.
[1] More important for me are: feeling moral obligation to make others’ lives better rather than worse, wanting to do my best when it matters, wanting future glory and social status for producing so much utility.
Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom’s Against Empathy book, and how when I read that I thought something like: “oh yeah empathy really isn’t the best guide to acting morally,” and whether that view contradicts what I was expressing in my quick take above.
I think I probably should have framed the post more as “longtermism need not be totally cold and utilitarian,” and that there’s an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively put ourselves in their shoes. And that it might even incorporate elements of justice or fairness if we consider them a disenfranchised group without representation in today’s decision making who we are potentially throwing under the bus for our own benefit, or something like that. So justice and empathy can easily be folded into longtermist thinking. This sounds like what you are saying here, except maybe I do want to stand by the fact that EA values aren’t necessarily trading off against justice, depending on how you define it.
Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.
If we go extinct, they won’t exist, so won’t be real people or have any valid moral claims. I also consider compassion, by definition, to be concerned with suffering, harms or losses. People who don’t come to exist don’t experience suffering or harm and have lost nothing. They also don’t experience injustice.
Longtermists tend to seem focused on ensuring future moral patients exist, i.e. through extinction risk reduction. But, as above, ensuring moral patients come to exist is not a matter of compassion or justice for those moral patients. Still, they may help or (harm!) other moral patients, including other humans who would exist anyway, animals, aliens or artificial sentience.
On the other hand, longtermism is still compatible with a primary concern for compassion or justice, including through asymmetric person-affecting views and wide person-affecting views (e.g. Thomas, 2019, probably focus on s-risks and quality improvements), negative utilitarianism (focus on s-risks) and perhaps even narrow person-affecting views. However, utilitarian versions of most of these views still seem prone, at least in principle, to endorsing killing everyone to replace us and our descendants with better off individuals, even if each of us and our descendants would have had an apparently good life and object. I think some (symmetric and perhaps asymmetric) narrow person-affecting views can avoid this, and maybe these are the ones that fit best with compassion and justice. See my post here.
That being said, empathy could mean more than just compassion or justice and could endorse bringing happy people into existence for their own sake, e.g. Carlsmith, 2021. I disagree that we should create people for their own sake, though, and my intuitions are person-affecting.
Other issues people have with longtermism are fanaticism and ambiguity; the probability that any individual averts an existential catastrophe is usually quite low at best (e.g. 1 in a million), and the numbers are also pretty speculative.
Yeah, I meant to convey this in my post but framing it a bit differently — that they are real people with valid moral claims who may exist. I suppose framing it this way is just moving the hypothetical condition elsewhere to emphasize that, if they do exist, they would be real people with real moral claims, and that matters. Maybe that’s confusing though.
BTW, my personal views lean towards a suffering-focused ethics that isn’t seeking to create happy people for their own sake. But I still think that, in coming to that view, I’m concerned with the experience of those hypothetical people in the fuzzy, caring way that utilitarians are charged with disregarding. That’s my main point here. But maybe I just get off the crazy train at my unique stop. I wouldn’t consider tiling the universe with hedonium to be the ultimate act of care/justice, but I suppose someone could feel that way, and thereby make an argument along the same lines.
Agreed there are other issues with longtermism — just wanted to respond to the “it’s not about care or empathy” critique.
I have some hesitations about supporting Richard Hanania given what I understand of his views and history. But in the same way I would say I support *example economic policy* of *example politician I don’t like* if I believed it was genuinely good policy, I think I should also say that I found this article of Richard’s quite warming.
Paying candidates to complete a test task likely increases inequality and credentialism and decreases candidate quality. If you pay candidates for their time, you’re likely to accept fewer candidates, and lower-variance candidates, into the test-task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task, and that significant reasons for wanting to compensate applicants are (i) a sense of justice, (ii) wanting to avoid the appearance of unreasonably demanding lots of unpaid labour from applicants, not just wanting to encourage applicants to complete the tasks[1].
So I agree that there are good reasons for wanting more people to be able to complete test tasks. But I think that doing so would potentially significantly increase costs to orgs, and that not compensating applicants would reduce costs to orgs by less than one might imagine.
I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants)
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone’s LinkedIn profile or CV).
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate’s 1-hour test task. So my salary would need to be 6* higher (per unit time) than the test task payment for this to be true.
I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants)
So my salary would need to be 6* higher (per unit time) than the test task payment for this to be true.
Strictly speaking your salary is the wrong number here. At a minimum, you want to use the cost to the org of your work, which is your salary + other costs of employing you (and I’ve seen estimates of the other costs at 50-100% of salary). In reality, the org of course values your work more highly than the amount they pay to acquire it (otherwise… why would they acquire it at that rate) so your value per hour is higher still. Keeping in mind that the pay for work tasks generally isn’t that high, it seems pretty plausible to me that the assessment cost is primarily staff time and not money.
I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you’ve attempted to deny that the claim is falsifiable at all.
I recognise it’s easy to stumble into these dynamics, but we must acknowledge that this is epistemically destructive.
Strictly speaking your salary is the wrong number here.
I don’t think we should dismiss empirical data so quickly when it’s brought to the table—that sets a bad precedent.
other costs of employing you (and I’ve seen estimates of the other costs at 50-100% of salary)
I can also provide empirical data on this if that is the crux here?
Notice that we are discussing a concrete empirical data point, that represents a 600% difference, while you’ve given a theoretical upper bound of 100%. That leaves a 500% delta.
Keeping in mind that the pay for work tasks generally isn’t that high
Would you be able to provide any concrete figures here?
In reality, the org of course values your work more highly than the amount they pay to acquire it
I view pointing to opportunity cost in the abstract as essentially an appeal to ignorance.
Not to say that opportunity costs do not exist, but you’ve failed to concretise them in any way, and that makes it hard to find the truth here.
I could make similar appeals to ignorance in support of my argument, like the idea that the benefit of getting a better candidate is very high because candidate performance is fat-tailed etc., but I believe this is similarly epistemically destructive. If I were to make a similar claim, I would likely attempt to concretise it.
I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you’ve attempted to deny that the claim is falsifiable at all.
My claim is that the org values your time at a rate that is significantly higher than the rate they pay you for it, because the cost of employment is higher than just salary and because the employer needs to value your work above its cost for them to want to hire you. I don’t see how this is unfalsifiable. Mostly you could falsify them by asking orgs how they think about the cost of staff time, though I guess some wouldn’t model it as explicitly as this.
They do mean that we’re forced to estimate the relevant threshold instead of having a precise number, but a precise wrong number isn’t better than an imprecise (closer to) correct number.
Notice that we are discussing a concrete empirical data point, that represents a 600% difference, while you’ve given a theoretical upper bound of 100%. That leaves a 500% delta.
No: if you’re comparing the cost of doing 10 minutes of work at salary X with 60 minutes of work compensated at rate Y, and I argue that salary X underestimates the cost of your work by a factor of 2, then your salary only needs to be more than 3 times larger than the work trial compensation, rather than 6 times.
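For what it’s worth, here is that comparison written out explicitly (a minimal sketch using the 10-minute assessment / 60-minute task figures from upthread; c_staff, r_task and s are just labels I’m introducing for the org’s cost of staff time, the task pay rate and the salary rate, and the factor-of-2 multiplier is the assumption under discussion, not an established figure):

```latex
\begin{align*}
\text{assessment cost} &= 10\,\text{min} \times c_{\text{staff}} \\
\text{payment cost} &= 60\,\text{min} \times r_{\text{task}} \\
% Assessment outweighs payment exactly when staff time costs more than 6x the task pay rate:
\text{assessment cost} > \text{payment cost} &\iff c_{\text{staff}} > 6\, r_{\text{task}} \\
% If the true cost of staff time is twice the salary rate s (salary plus overheads), c_staff = 2s:
\text{assessment cost} > \text{payment cost} &\iff s > 3\, r_{\text{task}}
\end{align*}
```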
When it comes to concretising “how much does employee value exceed employee costs”, it probably varies a lot from organisation to organisation. I think there are several employers in EA who believe that after a point, paying more doesn’t really get you better people. This allows their estimates of value of staff time to exceed employee costs by enormous margins, because there’s no mechanism to couple the two together. I think when these differences are very extreme we should be suspicious if they’re really true, but as someone who has multiple times had to compare earning to give with direct work, I’ve frequently asked an org “how much in donations would you need to prefer the money over hiring me?” and for difficult-to-hire roles they frequently say numbers dramatically larger than the salary they are offering.
This means that your argument is not going to be uniform across organisations, but I don’t know why you’d expect it to be: surely you weren’t saying that no organisation should ever pay for a test task, but only that organisations shouldn’t pay for test tasks when doing so increases their costs of assessment to the point where they choose to assess fewer people.
My expectation is that if you asked orgs about this, they would say that they already don’t choose to assess fewer people based on cost of paying them. This seems testable, and if true, it seems to me that it makes pretty much all of the other discussion irrelevant.
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone’s LinkedIn profile or CV).
Whether or not to use “credentialist and biased methods (like looking at someone’s LinkedIn profile or CV)” seems orthogonal to the discussion at hand?
The key issue seems to be that if you raise the screening bar, then you would be admitting fewer applicants to the task (the opposite of the original intention).
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate’s 1-hour test task. So my salary would need to be 6* higher (per unit time) than the test task payment for this to be true.
This will definitely vary by org and by task. But many EA orgs report valuing their staff’s time extremely highly. And my impression is that both grading longer tasks and then processing the additional applicants (many orgs will also feel compelled to offer at least some feedback if a candidate has completed a multi-hour task) will often take much longer than 10 minutes total.
For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it’s worth taking the risk of misaligned AI to prevent that outcome.
If this really is cruxy for some people, it’s possible this doesn’t get noticed because people take it as a background assumption and don’t tend to discuss it directly, so they don’t realize how much they disagree and how crucial that disagreement is.
EA tends to be anti-revolution, for a variety of reasons. The recent Trump appointments have had me wondering if people here have a “line” in their head. By “line” I mean something like: I need to drop everything and start protesting, or do something fast.
Like, I don’t think appointing RFK Jr. as health secretary is that line for me, but I also realize I don’t have a clear “line” in my head. If Trump appointed a Nazi who credibly claimed they were going to commit mass-scale war crimes as Secretary of Defense, would that be enough for the people here to drop their current work?
I’m definitely generally on the side of engaging in reactionary politics being worthless, and further I don’t feel like the US is about to fall apart or completely go off the rails. But it would be really interesting to see if we could teleport some EAs back in time to right before the rise of Hitler, or pre-Chinese-revolution, etc. (while wiping their brains of the knowledge of what would come) and see if they would say stuff like “politics is the mind killer and I need to focus on xyz”.
Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently. It looked like the mentors/coaches/advisors were all relatively junior/young/inexperienced. This isn’t the first time I’ve seen this. Most of this type of thing I’ve seen in and around EA involves the mentors/advisors/coaches being only a few years into their career. This isn’t necessarily bad. A person can be very well-read without having gone to school, or can be very strong without going to a gym, or can speak excellent Japanese without having ever been to Japan. A person being two or three or four years into their career doesn’t mean that it is impossible for them to have good ideas and good advice.[1] But it does seem a little… odd. The skepticism I feel is similar to having a physically frail person as a fitness trainer: I am assessing the individual on a proxy (fitness) rather than on the true criterion (ability to advise me regarding fitness). Maybe that thinking is a bit too sloppy on my part.
This doesn’t mean that if you are 24 and you volunteer as a mentor you should stop; you aren’t doing anything wrong. And I wouldn’t want some kind of silly and arbitrary rule, such as “only people age 40+ are allowed to be career coaches.” And there are some people doing this kind of work who have a decade or more of professional experience; I don’t want to make it sound like all of the people doing coaching and advising are fresh grads.
I wonder if there are any specific advantages or disadvantages to this ‘junior skew.’ Is there a meaningful correlation between length of career and ability to help other people with their careers?
EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why the vast majority of people doing mentorship/coaching/career advising are younger than that. Maybe the older people involved in EA are disproportionately not employed by EA organizations and are thus less focused on funneling people into impactful careers? I do have the vague impression that many 35+ EAs lean more toward earn-to-give. Maybe older EAs tend to be a little more private and less focused on the EA community? Maybe older people simply are less interested, or don’t view it as a priority? Maybe the organizations that employ/hire coaches all prefer young people? Maybe this is a false perception and I’m engaging in sloppy generalization from only a few anecdotes?
And the other huge caveat is that you can’t really know what a person’s professional background is from a quick glance at their LinkedIn Profile and the blurb that they share on a website, any more than you can accurately guess age from a profile photo. People sometimes don’t list everything. I can see that someone earned a bachelor’s degree in 2019 or 2020 or 2021, but maybe they didn’t follow a “standard” path: maybe they had a 10-year career prior to that, so guesses about being fairly young or junior are totally off. As always, drawing conclusions based on tiny snippets of information with minimal context is treacherous territory.
EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why the vast majority of people doing mentorship/coaching/career advising are younger than that. Maybe the older people involved in EA are disproportionately not employed by EA organizations and are thus less focused on funneling people into impactful careers?
I checked and people who currently work in an EA org are only slightly older on average (median 29 vs median 28).
Ten months ago I met Australia’s Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to Politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low hanging fruit. My best theory is EAs don’t want to do it / fund it because EAs are drawn to spreadsheets and google docs (it isn’t their comparative advantage). Hammers like nails etc.
I also think many EAs are still allergic to direct political advocacy, and that this tendency is stronger in more rationalist-ish cause areas such as AI. We shouldn’t forget Yudkowsky’s “politics is the mind-killer”!
EA in a World Where People Actually Listen to Us
I had considered calling the third wave of EA “EA in a World Where People Actually Listen to Us”.
Leopold’s situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self important because obviously defense leaders aren’t going to be listening to some random internet charity nerds and changing policy as a result.
Well, they are and they are. Let’s hope it’s for the better.
Hi Ben! You might be interested to know I literally had a meeting with the Assistant Defence Minister in Australia about 10 months ago off the back of one email. I wrote about it here. AI Safety advocacy is IMO still extremely low hanging fruit. My best theory is EAs don’t want to do it because EAs are drawn to spreadsheets etc (it isn’t their comparative advantage).
I don’t want to claim all EAs believe the same things, but if the congressional commission had listened to what you might call the “central” EA position, it would not be recommending an arms race because it would be much more concerned about misalignment risk. The overwhelming majority of EAs involved in AI safety seem to agree that arms races are bad and misalignment risk is the biggest concern (within AI safety). So if anything this is a problem of the commission not listening to EAs, or at least selectively listening to only the parts they want to hear.
Maybe instead of “where people actually listen to us” it’s more like “EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn’t exist.”
On that framing, I agree that that’s something that happens and that we should be able to anticipate will happen.
In most cases this is a rumors based thing, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least the things I have heard is that a bunch of key leaders “basically agreed with the China part of situational awareness”.
Again, people should really take this with a double-dose of salt, I am personally at like 50⁄50 of this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn’t seem crazy (but also various things could have been lost in a game of telephone, and being very concerned about China doesn’t result in endorsing a “Manhattan project to AGI”, though the rumors that I have heard did sound like they would endorse that)
Less rumor-based, I also know that Dario has historically been very hawkish, and “needing to beat China” was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so feel more comfortable saying it with fewer disclaimers, but am still only like 80% on it being true.
Overall, my current guess is that indeed, a large-ish fraction of the EA policy people would have pushed for things like this, and at least didn’t seem like they would push back on it that much. My guess is “we” are at least somewhat responsible for this, and there is much less of a consensus against a US-China arms race among EAs working in US governance than one might think, and so the above is not much evidence that there was no listening, or only very selective listening, to EAs.
I looked thru the congressional commission report’s list of testimonies for plausibly EA-adjacent people. The only EA-adjacent org I saw was CSET, which had two testimonies (1, 2). From a brief skim, neither one looked clearly pro- or anti-arms race. They seemed vaguely pro-arms race on vibes but I didn’t see any claims that look like they were clearly encouraging an arms race—but like I said, I only briefly skimmed them, so I could have missed a lot.
This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC (and EAs were involved in this, because they thought it was true and important, and because they thought current false fears in the greater natsec community were enhancing arms race risks) (and this was when Jason was leading CSET, with OP supporting its founding). Some of the same people were also supportive of export controls, which are more ambiguous-sign here.
The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in all that I have written related to them was about beating China in AI capabilities development.
Of course no one likes a symmetric arms race, but the question is did people favor the “quickly establish overwhelming dominance towards China by investing heavily in AI” or the “try to negotiate with China and not set an example of racing towards AGI” strategy. My sense is many people favored the former (though definitely not all, and I am not saying that there is anything like consensus, my sense is it’s a quite divisive topic).
To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent “AI Security Forum” in Vegas, many x-risk concerned people expressed very hawkish opinions.
Yeah re the export controls, I was trying to say “I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so” (though I used the word “ambiguous” because my impression was that some relevant people saw a pro of that work that it also mostly didn’t directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is some of those people were “trying to establish overwhelming dominance over China” but not by “investing heavily in AI”.
It looks to me like the online EA community, and the EAs I know IRL, have a fairly strong consensus that arms races are bad. Perhaps there’s a divide in opinions with most self-identified EAs on one side, and policy people / company leaders on the other side—which in my view is unfortunate since the people holding the most power are also the most wrong.
(Is there some systematic reason why this would be true? At least one part of it makes sense: people who start AGI companies must believe that building AGI is the right move. It could also be that power corrupts, or something.)
So maybe I should say the congressional commission should’ve spent less time listening to EA policy people and more time reading the EA Forum. Which obviously was never going to happen but it would’ve been nice.
Slightly independent to the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on ‘we need to beat China’ arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an ‘overwhelming majority of EAs involved in AI safety’ disagree with it even now.
Example from August 2022:
https://www.astralcodexten.com/p/why-not-slow-ai-progress
Later, talking about why attempting a regulatory approach to avoiding a race is futile:
I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It’s right there in the section that most explicitly talks about policy.
Scott’s last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow from that that we shouldn’t race). But I can see how a politician reading this article wouldn’t see that implication.
Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.
My best explanation is that when working in governance, being seen as pro-China is just very costly. In particular, combining the belief that AI will be very powerful with the belief that there is no urgency to beat China to it seems very anti-memetic in DC, and so people working in the space started adopting those stances.
But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).
Not just alignment being easy, but alignment being easy with overwhelmingly high probability. It seems to me that pushing for an arms race is bad even if there’s only a 5% chance that alignment is hard.
I think most of those people believe that “having an AI aligned to ‘China’s values’” would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient, if you think there is a greater than 5% chance of China ending up with “aligned AI” instead.
I think that’s not a reasonable position to hold but I don’t know how to constructively argue against it in a short comment so I’ll just register my disagreement.
Like, presumably China’s values include humans existing and having mostly good experiences.
Yep, I agree with this, but it appears nevertheless a relatively prevalent opinion among many EAs working in AI policy.
I’m not sure to what extent the Situational Awareness Memo or Leopold himself are representatives of ‘EA’
On the pro-side:
Leopold thinks AGI is coming soon, will be a big deal, and that solving the alignment problem is one of the world’s most important priorities
He used to work at GPI & FTX, and formerly identified with EA
He (almost certainly) personally knows lots of EA people in the Bay
On the con-side:
EA isn’t just AI Safety (yet), so having short timelines/high importance on AI shouldn’t be sufficient to make someone an EA?[1]
EA shouldn’t also just refer to a specific subset of the Bay Culture (please), or at least we need some more labels to distinguish different parts of it in that case
Many EAs have disagreed with various parts of the memo, e.g. Gideon’s well received post here
Since his EA institutional days, he has moved to OpenAI (mixed)[2] and now runs an AGI investment firm.
By self-identification, I’m not sure I’ve seen Leopold identify as an EA at all recently.
This again comes down to the nebulousness of what ‘being an EA’ means.[3] I have no doubt at all that, given what Leopold thinks is the way to have the most impact, he’ll be very effective at achieving it.
Further, on your point, I think there’s reason to suspect that something like Situational Awareness went viral in a way that, say, Rethink Priorities’ Moral Weights project didn’t—the promise many people see in powerful AI is power itself, and that’s always going to be interesting for people to follow. So I’m not sure that Situational Awareness becoming influential makes it more likely that other ‘EA’ ideas will.
Plenty of e/accs hold these two beliefs as well; they just expect alignment by default, for instance.
I view OpenAI as tending implicitly/explicitly anti-EA. I don’t think there was an explicit ‘purge’; rather, the culture/vision of the company changed such that card-carrying EAs didn’t want to work there any more.
The 3 big definitions I have (self-identification, beliefs, actions) could all easily point in different directions for Leopold.
I think he is pretty clearly an EA given he used to help run the Future Fund, or at most an only very recently ex-EA. Having said that, it’s not clear to me this means that “EAs” are at fault for everything he does.
Yeah again I just think this depends on one’s definition of EA, which is the point I was trying to make above.
Many people have turned away from EA (the beliefs, institutions, and community alike) in the aftermath of the FTX collapse. Even Ben Todd seems not to be an EA by some definitions any more, be that via association or identification. Who is to say Leopold is any different, or has not gone further? What then is the use of calling him EA, or of using his views to represent the ‘Third Wave’ of EA?
I guess from my PoV what I’m saying is that I’m not sure there’s much ‘connective tissue’ between Leopold and myself, so when people use phrases like “listen to us” or “How could we have done” I end up thinking “who the heck is we/us?”
How do you know Leopold or anyone else actually influenced the commission’s report? Not that that seems particularly unlikely to me, but is there any hard evidence? EDIT: I text-searched the report and he is not mentioned by name, although obviously that doesn’t prove much on its own.
Seems plausible that the impact of that single individual act is so negative that the aggregate impact of EA is negative.
I think people should reflect seriously upon this possibility and not fall prey to wishful thinking (let’s hope speeding up the AI race and making it superpower powered is the best intervention! it’s better if everyone warning about this was wrong and Leopold is right!).
The broader story here is that EA prioritization methodology is really good for finding highly leveraged spots in the world, but there isn’t a good methodology for figuring out what to do in such places, and there also isn’t a robust pipeline for promoting virtues and virtuous actors to such places.
I spent all day in tears when I read the congressional report. This is a nightmare. I was literally hoping to wake up from a bad dream.
I really hope people don’t suffer for our sins.
How could we have done something so terrible. Starting an arms race and making literal war more likely.
and there also isn’t a robust pipeline for promoting virtues and virtuous actors to such places.
this ^
Call me a hater, and believe me, I am, but maybe someone who went to university at 16 and clearly spent most of their time immersed in books is not the most socially developed.
Maybe after they are implicated in a huge scandal that destroyed our movement’s reputation we should gently nudge them to not go on popular podcasts and talk fantastically and almost giddily about how world war 3 is just around the corner. Especially when they are working in a financial capacity in which they would benefit from said war.
Many of the people we have let be in charge of our movement and speak on behalf of it don’t know the first thing about optics or leadership or politics. I don’t think Eliezer Yudkowsky could win a middle school class president race with a million dollars.
I know your point was specifically tailored toward optics and thinking carefully about what we say when we have a large platform, but I think, looking back and forward, bad optics and a lack of realpolitik messaging are pretty obvious failure modes of a movement filled with chronically online young males who worship intelligence and research output above all else. I’m not trying to sh*t on Leopold and I don’t claim I was out here beating a drum about the risks of these specific papers, but yea, I do think this is one symptom of a larger problem. I can barely think of anyone high up (publicly) in this movement who has risen via organizing.
The thing about Yudkowsky is that, yes, on the one hand, every time I read him, I think he surely must be coming across as super-weird and dodgy to “normal” people. But on the other hand, actually, it seems like he HAS done really well in getting people to take his ideas seriously? Sam Altman was trolling Yudkowsky on twitter a while back about how many of the people running/founding AGI labs had been inspired to do so by his work. He got invited to write on AI governance for TIME despite having no formal qualifications or significant scientific achievements whatsoever. I think if we actually look at his track record, he has done pretty well at convincing influential people to adopt what were once extremely fringe views, whilst also succeeding in being seen by the wider world as one of the most important proponents of those views, despite an almost complete lack of mainstream, legible credentials.
Hmm, I hear what you are saying but that could easily be attributed to some mix of
(1) he has really good/convincing ideas
(2) to a journalist on the outside, he seems to be a public representative for the EA/LW community.
And I’m responding to someone saying that we are in “phase 3”—that is to say people in the public are listening to us—so I guess I’m not extremely concerned about him not being able to draw attention or convince people. I’m more just generally worried that people like him are not who we should be promoting to positions of power, even if those are de jure positions.
Yeah, I’m not a Yudkowsky fan. But I think the fact that he mostly hasn’t been a PR disaster is striking, surprising and not much remarked upon, including by people who are big fans.
I guess in thinking about this I realize it’s so hard to even know if someone is a “PR disaster” that I probably have just been confirming my biases. What makes you say that he hasn’t been?
Just the stuff I already said about the success he seems to have had. It is also true that many people hate him and think he’s ridiculous, but I think that makes him polarizing rather than disastrous. I suppose you could phrase it as “he was a disaster in some ways but a success in others” if you want to.
(I think the issue with Leopold is somewhat precisely that he seems to be quite politically savvy in a way that seems likely to make him a deca-multi-millionaire and politically influential, possibly at the cost of all of humanity. I agree Eliezer is not the best presenter, but his error modes are clearly enormously different)
I don’t think I was claiming they have the exact same failure modes (do you want to point out where I did that?). Rather, they both have failure modes that I would expect to arise from selecting them to be talking heads on the basis of wits and research output. Also, I feel like you are implying Leopold is evil or something like that, which I don’t agree with, but maybe I’m misinterpreting.
He seems like a smooth operator in some ways and is certainly quite different from Eliezer. That being said, I showed my dad (who has become an oddly good litmus test for a lot of this stuff for me, as someone who is somewhat sympathetic to our movement but also a pretty normal 60-year-old man in a completely different headspace) the Dwarkesh episode, and he thought Leopold was very, very, very weird (and not because of his ideas). He kind of reminds me of Peter Thiel. I’ll completely admit I wasn’t especially clear in my points, and that mostly reflects my own lack of clarity on the exact point I was trying to get across.
I take back like 20% of what I said (basically to the extent I was making a very direct stab at what exactly that failure mode is), but I mostly still stand by the original comment, which again I see as being approximately: “Selecting people to be the public figureheads of our movement on the basis of wits and research output is likely to be bad for us.”
I’d love to see an ‘Animal Welfare vs. AI Safety/Governance Debate Week’ happening on the Forum. The AI risk cause has grown massively in importance in recent years and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs. Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or on mitigating existential risks related to AI. It would help to have rich discussions comparing both causes’ current priorities and bottlenecks, and a debate week would hopefully expose some useful crucial considerations.
I would like to see this. I have considerable uncertainty about whether to prioritize (longtermism-oriented) animal welfare or AI safety.
How tractable is improving (moral) philosophy education in high schools?
tldr: Do high schools still neglect ethics / moral philosophy in their curriculums? Mine did (year 2012). Are there tractable ways to improve the situation, through national/state education policy or by reaching out to schools and teachers? Has this been researched / tried before?
The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: Moral philosophy. 90min/week “Christian Religion” was the default for everyone, in which we spent most of the time interpreting stories from the bible, most of which to me felt pretty irrelevant to the present. This was in 2012 in Germany, a country with more atheists than Christians as of 2023, and even in 2012 my best guess is that <20% of my classmates were practicing a religion.
Only in grade 10 did we get the option to switch to secular Ethics classes instead, which only <10% of the students did (Religion was considered less work).
Ethics class quickly became one of my favorite classes. For the first time in my life I had a regular group of people equally interested in discussing Vegetarianism and other such questions (almost everyone in my school ate meat, and vegetarians were sometimes made fun of). Still, the curriculum wasn’t great, we spent too much time with ancient Greek philosophers and very little time discussing moral philosophy topics relevant to the present.
How have your experiences been in high school? I’m especially curious about more recent experiences.
Are there tractable ways to improve the situation? Has anyone researched this?
1) Could we get ethics classes in the mandatory/default curriculum in more schools? Which countries or states seem best for that? In Germany, education is state-regulated—which German state might be most open to this? Hamburg? Berlin?
2) Is there a shortage in ethics teachers (compared to religion teachers)? Can we get teachers more interested in teaching ethics?
3) Are any teachers here teaching ethics? Would you like to connect more with other (EA/ethics) teachers? We could open a whatsapp group, if there’s not already one.
In England, secular ethics isn’t really taught until Year 9 (age 13-14) or Year 10, as part of Religious Studies classes. Even then, it might be dependent on the local council, the type of school or even the exam boards/modules that are selected by the school. And by Year 10, students in some schools can opt out of taking religious studies for their GCSEs.
Anecdotally, I got into EA (at least earlier than I would have) because my high school religious studies teacher (c. 2014) could see that I had utilitarian intuitions (e.g. in discussions about animal experimentation and assisted dying) and gave me a copy of Practical Ethics to read. I then read The Life You Can Save.
I went to high school in the USA in the 2000s, so it has been roughly twenty years. I attended a public high school that was neither particularly well-funded nor impoverished. There were no ethics or philosophy courses offered, and no education in moral philosophy aside from what is gained through literature in an English class (such as reading Lord of the Flies, Fahrenheit 451, or To Kill a Mockingbird).
There is a Facebook group for EA Education, but my impression is that it isn’t very active.
My (uninformed, naïve) guess is that this isn’t very tractable, because education tends to be controlled by the government and there are a lot of vested interests. The argument would basically be “why should we teach these kids about being a good person when we could instead use that time to teach them computer programming/math/engineering/language/civics?” It is a crowded space with a lot of competing interests already.
Charter schools are a real option in many places. In Chicago, if you have money and wherewithal, you can open a charter school and basically teach whatever you want. The downside is that you will not be able to get the top students in the city to go to your school, because there are already a select few incredible public and private schools.
(Haven’t thought about this really, might be very wrong, but have this thought and seems good to put out there.) I feel like putting 🔸 at the end of social media names might be bad. I’m curious what the strategy was.
The willingness to do this might be anti-correlated with status. It might be a less important part of identity of more important people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)
I’d guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge, too).
Anti-correlation with status might mean that people will identify the pledge with average, though altruistic, Twitter users, not with cool people they want to be more like.
You won’t see a lot of e/accs putting the 🔸 in their names. There might be downsides to a group of people being perceived as clearly delineated and treating this as an almost political identity; it seems bad to have directionally-political markers that might do mind-killing things both to people with the 🔸 and to people who might argue with them.
Recently, I’ve encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy’s Global Catastrophic Risks (GCR) team does or doesn’t fund and why, especially re: our AI-related grantmaking. So, I’d like to briefly clarify a few things:
Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can’t be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don’t play to our comparative advantages.
Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I’ve seen or heard about why we didn’t fund something are wrong. (Similarly, us choosing to fund someone doesn’t mean we endorse everything about them or their work/plans.)
Very often, when we decline to do or fund something, it’s not because we don’t think it’s good or important, but because we aren’t the right team or organization to do or fund it, or we’re prioritizing other things that quarter.
As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages — whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space.
While Good Ventures is Open Philanthropy’s largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested to hear about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropists.) On the GCR side, we have helped move tens of millions of non-GV money to GCR-related organizations in just the past year, including some organizations that GV recently exited. GV and each of those other funders have their own preferences and restrictions we have to work around when recommending funding opportunities.
Good Ventures is among the most open and flexible of the AI funders we advise.
We’re happy to see funders enter the space even if they don’t share our priorities or work with us. When more funding is available, and funders pursue a broader mix of strategies, we think this leads to a healthier and more resilient field overall.
Many funding opportunities are a better fit for non-GV funders, e.g. due to funder preferences, restrictions, scale, or speed. We’ve also seen some cases where an organization can have more impact if they’re funded primarily or entirely by non-GV sources. For example, it’s more appropriate for some types of policy organizations outside the U.S. to be supported by local funders, and other organizations may prefer support from funders without GV/OP’s past or present connections to particular grantees, AI companies, etc. Many of the funders we advise are actively excited to make use of their comparative advantages relative to GV, and regularly do so.
We are excited for individuals and organizations that aren’t a fit for GV funding to apply to some of OP’s GCR-related RFPs (e.g. here, for AI governance). If we think the opportunity is strong but a better fit for another funder, we’ll recommend it to other funders.
To be clear, these other funders remain independent of OP and decline most of our recommendations, but in aggregate our recommendations often lead to target grantees being funded.
We believe reducing AI GCRs via public policy is not an inherently liberal or conservative goal. Almost all the work we fund in the U.S. is nonpartisan or bipartisan and engages with policymakers on both sides of the aisle. However, at present, it remains the case that most of the individuals in the current field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
OP’s AI teams spend almost no time directly advocating for specific policy ideas. Instead, we focus on funding a large ecosystem of individuals and organizations to develop policy ideas, debate them, iterate them, advocate for them, etc. These grantees disagree with each other very often (a few examples here), and often advocate for different (and sometimes ~opposite) policies.
We think it’s fine and normal for grantees to disagree with us, even in substantial ways. We’ve funded hundreds of people who disagree with us in a major way about fundamental premises of our GCRs work, including about whether AI poses GCR-scale risks at all (example).
I think frontier AI companies are creating enormous risks to humanity, I think their safety and security precautions are inadequate, and I think specific reckless behaviors should be criticized. AI company whistleblowers should be celebrated and protected. Several of our grantees regularly criticize leading AI companies in their official communications, as do many senior employees at our grantees, and I think this happens too infrequently.
Relatedly, I think substantial regulatory guardrails on frontier AI companies are needed, and organizations we’ve directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose (alongside some policies they tend to support).
I’ll also take a moment to address a few misconceptions that are somewhat less common in EA or rationalist spaces, but seem to be common elsewhere:
Discussion of OP online and in policy media tends to focus on our AI grantmaking, but AI represents a minority of our work. OP has many focus areas besides AI, and has given far more to global health and development work than to AI work.
We are generally big fans of technological progress. See e.g. my post about the enormous positive impacts from the industrial revolution, or OP’s funding programs for scientific research, global health R&D, innovation policy, and related issues like immigration policy. Most technological progress seems to have been beneficial, sometimes hugely so, even though there are some costs and harms along the way. But some technologies (e.g. nuclear weapons, synthetic pathogens, and superhuman AI) are extremely dangerous and warrant extensive safety and security measures rather than a “move fast and break [the world, in this case]” approach.
We have a lot of uncertainty about how large AI risk is, exactly which risks are most worrying (e.g. loss of control vs. concentration of power), on what timelines the worst-case risks might materialize, and what can be done to mitigate them. As such, most of our funding in the space has been focused on (a) talent development, and (b) basic knowledge production (e.g. Epoch AI) and scientific investigation (example), rather than work that advocates for specific interventions.
I hope these clarifications are helpful, and lead to fruitful discussion, though I don’t expect to have much time to engage with comments here.
Could you give examples of these?
I think it might be a good idea to taboo the phrase “OP is funding X” (at least when talking about present day Open Phil).
Historically, OP would have used the phrase “OP is funding X” to mean “referred a grant to X to GV” (which GV approximately never rejected). One was also able to roughly assume that if OP decided not to recommend a grant to GV, most OP staff did not think that grant would be more cost-effective than other grants referred to GV (and as such, the words people used to describe OP not referring a grant to GV were “rejecting X” or “defunding X”).
Of course, now that the relationship between OP and GV has substantially changed, and the trust has broken down somewhat, the term “OP is funding X” is confusing (including, IMO, in your comment: in your last few bullet points you say “OP has given far more to global health than AI”, when to avoid confusion it would be better to say “OP has recommended far more grants to global health”, since OP itself has not actually given away any money directly, and in the rest of your comment you use “recommend”).
I think the key thing for people to understand is why it no longer makes sense to talk about “OP funding X”, and where it still makes sense to model OP grant-referrals to GV as closely matching OP’s internal cost-effectiveness estimates.[1]
For organizations and funders trying to orient towards the funding ecosystem, the most important thing is understanding what GV is likely to fund on behalf of an OP recommendation. So when people talk about “OP funding X” or “OP not funding X” that is what they usually refer to (and that is also again how OP has historically used those words, and how you have used those words in your comment). I expect this usage to change over time, but it will take a while (and would ask for you to be gracious and charitable when trying to understand what people mean when they conflate OP and GV in discussions).[2]
Now having gotten that clarification out of the way, my guess is most of the critiques that you have seen about OP funding are basically accurate when seen through this lens (though I don’t know what critiques you are referring to, since you aren’t being specific). As an example, as Jason says in another comment, it does look like GV has a very limited appetite for grants to right-of-center organizations, and since (as you say yourself) the external funders reject the majority of grants you refer to them, this de-facto leads to a large reduction of funding, and a large negative incentive for founders and organizations who are considering working more with the political right.
I think your comment is useful, and helps people understand some of how OP is trying to counteract the ways GV’s withdrawal from many crucial funding areas has affected things, which I am glad about. I do also think your comment has far too much of the vibe of “nothing has changed in the last year” and “you shouldn’t worry too much about which areas GV wants or want to not fund”. De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing, and the dynamics between OP and non-GV funders are drastically different than the dynamics historically between OP and GV.
I think a better intuition pump for people trying to understand the funding ecosystem would be a comment that is scope-sensitive in the relevant ways. I think it would start with saying:
And another dimension to track is “where OP’s cost-effectiveness estimates are likely to be wrong”. Due to the tricky nature of the OP/GV relationship, I expect OP to systematically be worse at making accurate cost-effectiveness estimates where GV has strong reputation-adjacent opinions, because it is of course crucially important for OP to stay “in sync” with GV, and repeated prolonged disagreements are the kind of thing that tends to cause people and organizations to get out of sync.
Of course, people might also care about the opinions of OP staff, as people who have been thinking about grantmaking for a long time, but my sense is that in as much as those opinions do not translate into funding, that is of lesser importance when trying to identify neglected niches and funding approaches (but still important).
I don’t know how true this is and of course you should write what seems true to you here. I currently think this is true, but also “60% of grants referred get made” would not be that surprising. And also of course this is a two-sided game where OP will take into account whether there are any funders even before deciding whether to evaluate a grant at all, and so the ground truth here is kind of tricky to establish.
For example, you say that OP is happy to work with people who are highly critical of OP. That does seem true! However, my honest best guess is that it’s much less true of GV, and being publicly critical of GV and Dustin is the kind of thing that could very much influence whether OP ends up successfully referring a grant to GV, and to some degree being critical of OP also makes receiving funding from GV less likely, though much less so. That is of crucial importance to know for people when trying to decide how open and transparent to be about their opinions.
Replying to just a few points…
I agree about tabooing “OP is funding…”; my team is undergoing that transition now, leading to some inconsistencies in our own usage, let alone that of others.
Re: “large negative incentive for founders and organizations who are considering working more with the political right.” I’ll note that we’ve consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding. Plus, GV can and does directly fund lots of work that “engages with the right” (your phrasing), e.g. Horizon fellows and many other GV grantees regularly engage with Republicans, and seem likely to do even more of that on the margin given the incoming GOP trifecta.
Re: “nothing has changed in the last year.” No, a lot has changed, but my quick-take post wasn’t about “what has changed,” it was about “correcting some misconceptions I’m encountering.”
Re: “De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing.” This isn’t true, including specifically for my team (“AI governance and policy”).
I also don’t think this was ever true: “One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV.” There’s plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.
Makes sense. I think it’s easy to point out ways things are off, but in this case, IMO the most important thing that needs to happen in the funding ecosystem is people grappling with the huge changes that have occurred, and I think a lot of OP communication has been actively pushing back on that (not necessarily intentionally; I just think it’s a tempting and recurring error mode for established institutions to react to people freaking out with a “calm down” attitude, even when that’s inappropriate; cf. the CDC and pandemics, and many past instances of similar dynamics).
In particular, I am confident the majority of readers of your original comment interpreted what you said as meaning that GV has no substantial dispreference for right-of-center grants, which I think was substantially harmful to the epistemic landscape (though I am glad that further prodding by me and Jason cleared that up).
I don’t currently believe this, and think you are mostly not exposed to most people who could be doing good work in the space (which is downstream of a bunch of other choices OP and GV made), and also overestimate the degree to which OP is helpful in getting the relevant projects funding (I know of 1-2 projects in this space which did ultimately get funding, where OP was a bit involved, but my sense is it was overall slightly anti-helpful).
If you know people who could do good work in the space, please point them to our RFP! As for being anti-helpful in some cases, I’m guessing that was cases where we thought the opportunity wasn’t a great opportunity despite it being right-of-center (which is a point in favor, in my opinion), but I’m not sure.
I would take bets on this! It is of course important to assess counterfactualness of recommendations from OP. If you recommend a grant a funder would have made anyways, it doesn’t make any sense to count that as something OP “influenced”.
With that adjustment, I would take bets that more than 90% of influence-adjusted grants from OP in 2024 will have been made by GV. (I don’t think it’s true in “AI governance and policy”, where I can imagine it being substantially lower; I have much less visibility into that domain. My median for all of OP is 95%, but that doesn’t imply my betting odds, since I want at least a bit of profit margin.)
Happy to refer to some trusted third-party arbiter for adjudicating.
I’d rather not spend more time engaging here, but see e.g. this.
Sure, my guess is OP gets around 50%[1] of the credit for that, and GV is about 20% of the funding in the pool, which makes the credit-adjusted non-GV portion roughly a ~$10M/yr grant ($20M/yr of non-GV funding for 4 years[2]). GV gives out ~$600M[3] in grants per year recommended by OP, so to get above 5% you would need the equivalent of 3 projects of this size per year, which I haven’t seen (and don’t currently think exist). (A rough arithmetic sketch follows the footnotes below.)
Even at 100% credit, which seems like a big stretch, my guess is you don’t get over 5%.
To substantially change the implications of my sentence I think you would need to get closer to 10%, which seems implausible from my viewpoint. It seems pretty clear the right number is around 95% (and IMO it’s bad form, given that, to just respond with “this was never true” when it has clearly and obviously been true in some past years, and is at the very least very close to true this year).
Mostly chosen for Schelling-ness. I can imagine it being higher or lower. It seems like lots of other people outside of OP have been involved, and the choice of area seems heavily determined by what OP could get buy-in for from other funders, making it somewhat more constrained than other grants, so I think a lower number seems more reasonable.
I have also learned to really not count your chickens before they are hatched with projects like this, so I think one should discount the funding for a 4-year project like this by an expected 20-30%, since funders frequently drop out and leadership changes; but we can ignore that for now.
https://www.goodventures.org/our-portfolio/grantmaking-approach/
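As a minimal sketch of the back-of-the-envelope above (assuming my reading of the figures is right: ~50% credit to OP, ~$20M/yr of non-GV money in the pool, and ~$600M/yr of GV grants recommended by OP; all numbers are the rough estimates given in the comment, not confirmed figures):

```python
# Back-of-the-envelope for GV's share of OP-influenced giving, using the
# rough, assumed figures from the comment above.

op_credit_share = 0.5            # assumed share of credit OP gets for the pooled fund
non_gv_funding_per_year = 20e6   # ~$20M/yr of non-GV money in the pool
gv_giving_per_year = 600e6       # ~$600M/yr of GV grants recommended by OP

# Credit-adjusted non-GV funding attributable to OP's influence: ~$10M/yr.
op_influenced_non_gv = op_credit_share * non_gv_funding_per_year

# GV's share of all OP-influenced giving under these numbers: ~98%.
gv_share = gv_giving_per_year / (gv_giving_per_year + op_influenced_non_gv)
print(f"GV share: {gv_share:.1%}")

# Credit-adjusted non-GV funding needed per year to push GV's share below 95%:
# ~$32M/yr, i.e. roughly three projects of this size per year.
needed_non_gv = (0.05 / 0.95) * gv_giving_per_year
print(f"Non-GV funding needed for a <95% GV share: ${needed_non_gv / 1e6:.0f}M/yr")
```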
I’m confused by the wording of your bet—I thought you had been arguing that more than 90% are by GV, not ‘more than 90% are by a non-GV funder’
Sorry, just a typo!
I used the double negative here very intentionally. Funding recommendations don’t get made by majority vote, and there isn’t such a thing as “the Open Phil view” on a grant, but up until 2023 I had long and intense conversations with staff at OP who said that it would be very weird and extraordinary if OP rejected a grant that most of its staff considered substantially more cost-effective than your average grant.
That of course stopped being true recently (and I also think past OP staff overstated a bit the degree to which it was true previously, but it sure was something that OP staff actively reached out to me about and claimed was true when I disputed it). You saying “this was never true” is in direct contradiction to statements made by OP staff to me up until late 2023 (bar what people claimed were very rare exceptions).
(Fwiw, the crowd prediction on the Metaculus question ‘Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?’ currently sits at 43%.)
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US “right-of-center”[1] policy work to GV, I would be somewhat surprised that this well-written post didn’t say that.
Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It’s generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and it seems that would be the case. However, where the funder is as critical to an ecosystem as GV is here, I think fairly high transparency about the unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand if the 800-pound gorilla funder will be closed to them.
I place this in quotes because the term is ambiguous.
Good Ventures did indicate to us some time ago that they don’t think they’re the right funder for some kinds of right-of-center AI policy advocacy, though (a) the boundaries are somewhat fuzzy and pretty far from the linked comment’s claim about an aversion to opportunities that are “even slightly right of center in any policy work,” (b) I think the boundaries might shift in the future, and (c) as I said above, OP regularly recommends right-of-center policy opportunities to other funders.
Also, I don’t actually think this should affect people’s actions much because: my team has been looking for right-of-center policy opportunities for years (and is continuing to do so), and the bottleneck is “available opportunities that look high-impact from an AI GCR perspective,” not “available funding.” If you want to start or expand a right-of-center policy group aimed at AI GCR mitigation, you should do it and apply here! I can’t guarantee we’ll think it’s promising enough to recommend to the funders we advise, but there are millions (maybe tens of millions) available for this kind of work; we’ve simply found only a few opportunities that seem above-our-bar for expected impact on AI GCR, despite years of searching.
Can you say what the “some kinds” are?
What’s a realistic, positive vision of the future worth fighting for?
I feel lost about how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don’t have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately, as if humanity is continuously not living up to my expectations. Trump’s victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don’t know what’s on the other side. There are so many things that I don’t want to happen that I’ve lost sight of what I do want to happen. I don’t want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
I like Bostrom’s Letter from Utopia
One possible way of thinking about this, which might tie your work in smaller battles into a ‘big picture’, is if you believe that your work on the smaller battles is indirectly helping the wider project. e.g. by working to solve one altruistic cause you are sparing other altruistic individuals and altruistic resources from being spent on that cause, increasing the resources available for wider altruistic projects, and potentially by increasing altruistic resources available in the future.[1]
Note that I’m only saying this is a possible way of thinking about this, not necessarily that you should think this (for one thing, the extent to which this is true probably varies across areas, depending on the inter-connectedness of different cause areas in different ways and their varying flowthrough effects).
As in this passage from one of Yudkowsky’s short stories:
Edge Esmeralda seems like a great bottom up experiment in a nontrivially better way of living together: https://www.edgeesmeralda.com/
A marginal rather than transformative revolution if you will.
I find videos about space colonization pretty inspiring. Of course, space colonization would ideally be paired with some level of suffering abolition, so we aren’t spreading needless suffering to other planets. Space colonization could help with political discord, since people with different ideas of a “good society” can band together and peacefully disperse through the solar system. If you think traveling the world to experience different cultures is fun, I expect visiting other planets to experience different cultures will be even better. On the AI front, rumor has it that scaling is slowing down… that could grant more time for alignment work, and increase the probability that an incredible future will come to pass.
On some level I think the answer is always the same, regardless of the headwinds or tailwinds: you do what you can with your limited resources to improve the world as much as you can. In some sense I think slowing the growth of factory farming in a world where it was growing is the same as a world where it is stagnant and we reduce the number of animals raised. In both worlds there’s a reduction in suffering. I wrote a creative piece on this exact topic here if that is at all appealing.
I also think on the front of factory farming we focus too much on the entire problem, and not enough on how good the wins are in and of themselves.
I don’t have a suggestion, but I’ve been encouraged and “heartwarmed” by the diverse range of responses below. Cool to see people with different ways of holding their hope and motivation, whether it’s enough for us to buy a bed net tomorrow, or we do indeed have grander plans and visions, or we’re skeptical about whether “future designing” is a good idea at all.
It might be too hard to envision an entire grand future, but it’s possible to envision specific wins in the short and medium term. A short-term win could be large cage-free egg campaigns succeeding; a medium-term win could be a global ban on caged layer hens. Similarly, a short-term win for AI safety could be a specific major technical advance or significant legislation passed; a medium-term win could be AGIs coexisting with humans without the world descending into chaos, while still having massive positive benefits (e.g. a cure for Alzheimer’s).
For what it’s worth, I’m skeptical of approaches that try to design the perfect future from first principles and make it happen. I’m much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn’t cure all illness.)
How much we can help doesn’t depend on how awful or how great the world is, we can save the drowning child whether there’s a billion more that are drowning or a billion more that are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of “EA-specific” things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse; their lives are worth it to them, no matter what the future will be.
And just to be clear, I am doing quite well generally. I think I used to repress my empathy because it just feels too painful. But it was controlling me subconsciously by constantly nagging me to do altruistic things. Nowadays, I sometimes connect to my empathy and it can feel overwhelming like yesterday. But I think it’s for the better long-term.
Thanks. Yeah, I now agree that it’s better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents’ stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future.
Grand ideologies and religions are alluring though, because they give a sense of being part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely.
The person in the private message also told me about the serenity prayer: “grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference.”
Sorry to hear that you’re having a rough time!
When I’m feeling like this, I find that the only thing that helps is actually finishing a project end-to-end so I feel momentum.
Something I intrinsically think is valuable but wasn’t going to get done otherwise. (Like improving wikis or cleaning up a mess in a park).
Going as small as possible while still being satisfying helps remind me that there are things within my control and people around me that I can help.
I also liked this post from FarmKind
https://www.linkedin.com/posts/aidan-alexander_𝐌𝐲-𝐌𝐚𝐬𝐭𝐞𝐫𝐩𝐥𝐚𝐧-𝐭𝐨-𝐄𝐧𝐝-activity-7262449165924712451-lb7T?utm_source=share&utm_medium=member_android
Maybe this is a cop-out, but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Some others might live lower-tech in eco villages. Animals will be free to pursue their goals. And each of these people will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project—I think most people’s dreams of the future are pretty wild and not something I would want for myself!
I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy’s CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.
I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don’t want to give something up to fund charities, as donating would require; but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.
I love the idea in your talk! I can imagine it changing the world a lot and that feels motivating. I wonder if more Founders Pledge members could be convinced to do this.
FWIW: definitely not a world vision, but Ozy’s blog is the most heart-warming thing I’ve read after the recent US elections.
I think that, eventually, working on changing the EA introductory program is important. I think it is an extremely good thing to do well, and I think it could be improved. I’m running a 6-week version right now, and I’ll see if I feel the same way at the end.
Why do you think changing it is important? In the version that you’re running right now, did you just shorten it, or did you change anything else?
I mostly shortened it. I think the main reasons I have are university-specific: I feel like there are a not insignificant number of people who would commit to a 6-week fellowship but not an 8-week one, and there is not enough focus on the wider EA community, which I feel should be more emphasized.
I missed a meeting
I’ve had a couple of organisations ask me to clarify the Donation Election’s vote-brigading rules. Understandably, they want to promote the Donation Election amongst their supporters, but they aren’t sure to what extent this counts as vote-brigading. The answer is: it depends.
We want to avoid the Donation Election being a popularity contest, or favouring the candidates with bigger networks. Neither popularity nor size of network is perfectly correlated with impact.
If you’d like to reach out to your audience, feel free, but please don’t tell them to vote for you. You can explain the event and mention that you are a candidate, but we want the votes to inform us of the Forum audience’s opinions of the marginal impact of money donated to these charities, not of the strength of their networks.
I’m aware this exhortation won’t do all the work: we will also be looking into voting patterns, and new accounts (made after October 22, when the election was announced) won’t be eligible to vote.
🎧 We’ve created a Spotify playlist with this year’s marginal funding posts.
Posts with <30 karma don’t get narrated so aren’t included in the playlist.
Re: a recent quick take in which I called on OpenPhil to sue OpenAI: a new document in Musk’s lawsuit mentions this explicitly (page 91)
Interesting lawsuit; thanks for sharing! A few hot (unresearched, and very tentative) takes, mostly on the Musk contract/fraud type claims rather than the unfair-competition type claims related to x.ai:
One of the overarching questions to consider when reading any lawsuit is that of remedy. For instance, the classic remedy for breach of contract is money damages . . . and the potential money damages here don’t look that extensive relative to OpenAI’s money burn.
Broader “equitable” remedies are sometimes available, but they are more discretionary and there may be some significant barriers to them here. Specifically, a court would need to consider the effects of any equitable relief on third parties who haven’t done anything wrongful (like the bulk of OpenAI employees, or investors who weren’t part of an alleged conspiracy, etc.), and consider whether Musk unreasonably delayed bringing this lawsuit (especially in light of those third-party interests). As a hot take, I am inclined to think these factors would weigh powerfully against certain types of equitable remedies.
Stated more colloquially, the adverse effects on third parties and the delay (“laches”) would favor a conclusion that Musk will have to be content with money damages, even if they fall short of giving him full relief.
Third-party interests and delay may be less of a barrier to equitable relief against Altman himself.
Musk is an extremely sophisticated party capable of bargaining for what he wanted out of his grants (e.g., a board seat), and he’s unlikely to get the same sort of solicitude on an implied-contract theory that an ordinary individual might. For example, I think it was likely foreseeable in 2015 to January 2017 -- when he gave the bulk of the funds in question -- that pursuing AGI could be crazy expensive and might require more commercial relationships than your average non-profit would ever consider. So I’d be hesitant to infer much in the way of implied-contractual constraints on OpenAI’s conduct beyond what section 501(c)(3) of the Internal Revenue Code and California non-profit law require.
The fraud theories are tricky because the temporal correspondence between accepting the bulk of the funds and the alleged deceit feels shaky here. By way of rough analogy, running up a bunch of credit card bills you never intended to pay back is fraud. Running up bills and then later deciding that you aren’t going to pay them back is generally only a contractual violation. I’m not deep into OpenAI drama, but a version of the story in which the heel turn happened later in the game than most/all of Musk’s donations and assistance seems plausible to me.
Is there a maximum effective membership size for EA?
@Joey 🔸 spoke at EAGx last night, and one of my biggest takeaways was the (maybe controversial) take that more projects should decline money.
This resonates with my experience; constraint is a powerful driver of creativity, and less constraint does not necessarily produce more creativity (or positive output).
Does the EA movement, in terms of number of people, have a similar dynamic within society? What growth rate is optimal for a group, and at what point does further expansion become sub-optimal? Zillions of factors to consider of course, but… something maybe fun to ponder.
Compassion fatigue should be focused on less.
I had it hammered into me during training as a crisis supporter and I still burnt out.
Now I train others, have seen it hammered into them, and still watch countless numbers of them burn out.
I think we need to switch at least 60% of compassion fatigue focus to compassion satisfaction.
Compassion satisfaction is the warm feeling you get when you give something meaningful to someone. If you’re ‘doing good work’, I think that feeling (and its absence) ought to be spoken about much more.
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of “real people,” alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or “ethics of care” or concern for justice that lead people to alternatives like mutual aid and political activism.
My go-to reaction to this critique has become something like “well you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war, those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It’s a very good rebuttal to the “cold and heartless utilitarianism/Pascal’s mugging” critique.
But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book about living every human life in sequential order reminded me of this. I wish there were more people responding to the “longtermism is cold and heartless” critique by making the case that no, longtermism at face value is worth preserving because it’s the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.
(I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That’s not out of the question. I just don’t think it’s the only way to reach that conclusion.)
I want to slightly push back against this post in two ways:
I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy—I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than >99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
Longtermists make tradeoffs between helping vast future populations and other common values, tradeoffs that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs as longtermists. I think the EA value of “doing a lot more good matters a lot more” is really important, but it is still trading off against other values.
Helping people closer to you / in your community: many people think this has inherent value
Beneficentrism: most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and they would mostly not think it is better to make more of an overall difference by e.g. subsidizing eyeglasses in Bangladesh.
Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfare, because they place inherent value on justice. Both longtermists and GiveWell think they’re similarly good modulo secondary consequences and decision theory.
Discount rate, risk aversion, etc.: There is no reason that having a 10% chance of saving 100 lives in 6,000 years is better than a 40% chance of saving 5 lives tomorrow, if you don’t already believe in zero-discount expected value as the metric to optimize. The reason to believe in zero-discount expected value is a thought experiment involving the veil of ignorance, or maybe the VNM theorem. It is not caring doing the work here because both can be very caring acts, it is your belief in the thought experiment connecting your caring to the expected value.
In conclusion, I think that while care and empathy can be an important motivator to longtermists, and it is valid for us to think of longtermist actions as the ultimate act of care, we are motivated by a conjunction of empathy/care and other attributes, and it is the other attributes that are by far more important. For someone who has empathy/care and values beneficentrism and scope-sensitivity, preventing an extinction-level pandemic is an important act of care; for someone like me or a utilitarian, pandemic prevention is also an important act. But for someone who values justice more, applying more care does not make them prioritize pandemic prevention over helping a sex trafficking victim, and in the larger altruistically-inclined population, I think a greater focus on care and empathy conflict with longtermist values more than they contribute.
[1] More important for me are: feeling moral obligation to make others’ lives better rather than worse, wanting to do my best when it matters, wanting future glory and social status for producing so much utility.
Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom’s Against Empathy book, and how when I read that I thought something like: “oh yeah empathy really isn’t the best guide to acting morally,” and whether that view contradicts what I was expressing in my quick take above.
I think I probably should have framed the post more as “longtermism need not be totally cold and utilitarian,” and that there’s an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively put ourselves in their shoes. And that it might even incorporate elements of justice or fairness if we consider them a disenfranchised group without representation in today’s decision making who we are potentially throwing under the bus for our own benefit, or something like that. So justice and empathy can easily be folded into longtermist thinking. This sounds like what you are saying here, except maybe I do want to stand by the fact that EA values aren’t necessarily trading off against justice, depending on how you define it.
If we go extinct, they won’t exist, so won’t be real people or have any valid moral claims. I also consider compassion, by definition, to be concerned with suffering, harms or losses. People who don’t come to exist don’t experience suffering or harm and have lost nothing. They also don’t experience injustice.
Longtermists tend to seem focused on ensuring future moral patients exist, i.e. through extinction risk reduction. But, as above, ensuring moral patients come to exist is not a matter of compassion or justice for those moral patients. Still, they may help or (harm!) other moral patients, including other humans who would exist anyway, animals, aliens or artificial sentience.
On the other hand, longtermism is still compatible with a primary concern for compassion or justice, including through asymmetric person-affecting views and wide person-affecting views (e.g. Thomas, 2019, probably focus on s-risks and quality improvements), negative utilitarianism (focus on s-risks) and perhaps even narrow person-affecting views. However, utilitarian versions of most of these views still seem prone, at least in principle, to endorsing killing everyone to replace us and our descendants with better off individuals, even if each of us and our descendants would have had an apparently good life and object. I think some (symmetric and perhaps asymmetric) narrow person-affecting views can avoid this, and maybe these are the ones that fit best with compassion and justice. See my post here.
That being said, empathy could mean more than just compassion or justice and could endorse bringing happy people into existence for their own sake, e.g. Carlsmith, 2021. I disagree that we should create people for their own sake, though, and my intuitions are person-affecting.
Other issues people have with longtermism are fanaticism and ambiguity; the probability that any individual averts an existential catastrophe is usually quite low at best (e.g. 1 in a million), and the numbers are also pretty speculative.
Yeah, I meant to convey this in my post but framing it a bit differently — that they are real people with valid moral claims who may exist. I suppose framing it this way is just moving the hypothetical condition elsewhere to emphasize that, if they do exist, they would be real people with real moral claims, and that matters. Maybe that’s confusing though.
BTW, my personal views lean towards a suffering-focused ethics that isn’t seeking to create happy people for their own sake. But I still think that, in coming to that view, I’m concerned with the experience of those hypothetical people in the fuzzy, caring way that utilitarians are charged with disregarding. That’s my main point here. But maybe I just get off the crazy train at my unique stop. I wouldn’t consider tiling the universe with hedonium to be the ultimate act of care/justice, but I suppose someone could feel that way, and thereby make an argument along the same lines.
Agreed there are other issues with longtermism — just wanted to respond to the “it’s not about care or empathy” critique.
Well-known EA sympathizer Richard Hanania writes about his donation to the Shrimp Welfare Project.
I have some hesitations about supporting Richard Hanania given what I understand of his views and history. But in the same way I would say I support *example economic policy* of *example politician I don’t like* if I believed it was genuinely good policy, I think I should also say that I found this article of Richard’s quite warming.
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you’re likely to accept fewer candidates, and lower-variance candidates, into the test task stage. Orgs can continue to pay top candidates to complete the test task if they believe it measurably decreases the attrition rate, but should give all candidates that pass an anonymised screening bar the chance to complete a test task.
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task, and that significant reasons for wanting to compensate applicants are (i) a sense of justice, (ii) wanting to avoid the appearance of unreasonably demanding lots of unpaid labour from applicants, not just wanting to encourage applicants to complete the tasks[1].
So I agree that there are good reasons for wanting more people to be able to complete test tasks. But I think that doing so would potentially significantly increase costs to orgs, and that not compensating applicants would reduce costs to orgs by less than one might imagine.
I also think the justice implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants).
I think that many applicants are highly motivated to complete tasks, in order to have a chance of getting the job.
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone’s LinkedIn profile or CV).
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate’s 1-hour test task. So my salary would need to be 6x higher (per unit time) than the test task payment for this to be true.
This is a good point.
Strictly speaking your salary is the wrong number here. At a minimum, you want to use the cost to the org of your work, which is your salary + other costs of employing you (and I’ve seen estimates of the other costs at 50-100% of salary). In reality, the org of course values your work more highly than the amount they pay to acquire it (otherwise… why would they acquire it at that rate) so your value per hour is higher still. Keeping in mind that the pay for work tasks generally isn’t that high, it seems pretty plausible to me that the assessment cost is primarily staff time and not money.
I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you’ve attempted to deny that the claim is falsifiable at all.
I recognise it’s easy to stumble into these dynamics, but we must acknowledge that this is epistemically destructive.
I don’t think we should dismiss empirical data so quickly when it’s brought to the table—that sets a bad precedent.
I can also provide empirical data on this if that is the crux here?
Notice that we are discussing a concrete empirical data point, that represents a 600% difference, while you’ve given a theoretical upper bound of 100%. That leaves a 500% delta.
Would you be able to provide any concrete figures here?
I view pointing to opportunity cost in the abstract as essentially an appeal to ignorance.
Not to say that opportunity costs do not exist, but you’ve failed to concretise them, and that makes it hard to find the truth here.
I could make similar appeals to ignorance in support of my argument, like the idea that the benefit of getting a better candidate is very high because candidate performance is fat-tailed, etc. But I believe this is similarly epistemically destructive. If I were to make a similar claim, I would likely attempt to concretise it.
My claim is that the org values your time at a rate that is significantly higher than the rate they pay you for it, because the cost of employment is higher than just salary and because the employer needs to value your work above its cost for them to want to hire you. I don’t see how this is unfalsifiable. Mostly you could falsify them by asking orgs how they think about the cost of staff time, though I guess some wouldn’t model it as explicitly as this.
They do mean that we’re forced to estimate the relevant threshold instead of having a precise number, but a precise wrong number isn’t better than an imprecise (closer to) correct number.
No: if you’re comparing the cost of doing 10 minutes of work at salary X against 60 minutes of work compensated at Y, and I argue that salary X underestimates the cost of your work by a factor of 2, then your salary now only needs to be more than 3 times larger than the work trial compensation, not 6 times.
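For what it’s worth, here is a minimal sketch of the break-even arithmetic in this exchange, assuming the figures mentioned in the thread (10 minutes of marking per 1-hour paid task) and treating the 2x cost-of-employment multiplier as an illustrative assumption rather than a measured number:

```python
# Break-even salary multiple: how many times higher than the test-task pay rate
# a marker's salary must be (per unit time) before the cost of marking exceeds
# the payment to the candidate. All figures are illustrative assumptions.

def breakeven_salary_multiple(marking_hours: float, task_hours: float, cost_multiplier: float) -> float:
    """Salary-to-task-pay ratio at which marking cost equals the task payment.

    Break-even condition: salary_rate * cost_multiplier * marking_hours
                          = pay_rate * task_hours
    """
    return task_hours / (marking_hours * cost_multiplier)

# ~10 minutes of marking per 1-hour paid task, counting salary alone: 6x.
print(breakeven_salary_multiple(marking_hours=10 / 60, task_hours=1.0, cost_multiplier=1.0))

# Same figures, but assuming the full cost of staff time is 2x salary: 3x.
print(breakeven_salary_multiple(marking_hours=10 / 60, task_hours=1.0, cost_multiplier=2.0))
```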
When it comes to concretising “how much does employee value exceed employee costs”, it probably varies a lot from organisation to organisation. I think there are several employers in EA who believe that after a point, paying more doesn’t really get you better people. This allows their estimates of value of staff time to exceed employee costs by enormous margins, because there’s no mechanism to couple the two together. I think when these differences are very extreme we should be suspicious if they’re really true, but as someone who has multiple times had to compare earning to give with direct work, I’ve frequently asked an org “how much in donations would you need to prefer the money over hiring me?” and for difficult-to-hire roles they frequently say numbers dramatically larger than the salary they are offering.
This means that your argument is not going to be uniform across organisations, but I don’t know why you’d expect it to be: surely you weren’t saying that no organisation should ever pay for a test task, but only that organisations shouldn’t pay for test tasks when doing so increases their costs of assessment to the point where they choose to assess fewer people.
My expectation is that if you asked orgs about this, they would say that they already don’t choose to assess fewer people based on cost of paying them. This seems testable, and if true, it seems to me that it makes pretty much all of the other discussion irrelevant.
Whether or not to use “credentialist and biased methods (like looking at someone’s LinkedIn profile or CV)” seems orthogonal to the discussion at hand?
The key issue seems to be that if you raise the screening bar, then you would be admitting fewer applicants to the task (the opposite of the original intention).
This will definitely vary by org and by task. But many EA orgs report valuing their staff’s time extremely highly. And my impression is that both grading longer tasks and then processing the additional applicants (many orgs will also feel compelled to offer at least some feedback if a candidate has completed a multi-hour task) will often take much longer than 10 minutes total.
For Pause AI or Stop AI to succeed, pausing / stopping needs to be a viable solution. I think some AI capabilities people who believe in existential risk may (perhaps?) be motivated by the thought that the risk of civilisational collapse is high without AI, so it’s worth taking the risk of misaligned AI to prevent that outcome.
If this really is cruxy for some people, it’s possible this doesn’t get noticed because people take it as a background assumption and don’t tend to discuss it directly, so they don’t realize how much they disagree and how crucial that disagreement is.
EA tends to be anti-revolution, for a variety of reasons. The recent Trump appointments have had me wondering whether people here have a “line” in their heads. By “line” I mean something like: the point at which I’d need to drop everything and start protesting, or do something drastic, fast.
I don’t think appointing RFK Jr. as Health Secretary is that line for me, but I also realize I don’t have a clear line in my head. If Trump appointed a Nazi who credibly claimed they were going to commit mass-scale war crimes as Secretary of Defense, would that be enough for people here to drop their current work?
I’m generally on the side of engaging in reactionary politics being worthless, and I don’t feel like the US is about to fall apart or completely go off the rails. But it would be really interesting to see if we could teleport some EAs back in time to just before the rise of Hitler, or just before the Chinese revolution (while wiping their memory of what was to come), and see whether they would say things like “politics is the mind-killer and I need to focus on xyz”.
Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently, and the mentors/coaches/advisors all looked relatively junior/young/inexperienced. This isn’t the first time I’ve seen this: most of this type of thing I’ve seen in and around EA involves the mentors/advisors/coaches being only a few years into their career. That isn’t necessarily bad. A person can be very well-read without having gone to school, very strong without going to a gym, or speak excellent Japanese without ever having been to Japan. Being two or three or four years into a career doesn’t make it impossible to have good ideas and good advice.[1] But it does seem a little… odd. The skepticism I feel is similar to what I’d feel toward a physically frail fitness trainer: I am assessing the individual on a proxy (fitness) rather than on the true criterion (ability to advise me regarding fitness). Maybe that thinking is a bit too sloppy on my part.
This doesn’t mean that if you are 24 and volunteer as a mentor you should stop; you aren’t doing anything wrong. And I wouldn’t want some kind of silly and arbitrary rule, such as “only people age 40+ are allowed to be career coaches.” And there are some people doing this kind of work who have a decade or more of professional experience; I don’t want to make it sound like all of the people doing coaching and advising are fresh grads.
I wonder if there are any specific advantages or disadvantages to this ‘junior skew.’ Is there a meaningful correlation between length of career and ability to help other people with their careers?
EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So why do the vast majority of people doing mentorship/coaching/career advising seem to be younger than that? Maybe the older people involved in EA are disproportionately not employed at EA organizations and are thus less focused on funneling people into impactful careers? I do have the vague impression that many 35+ EAs lean more toward earning to give. Maybe older EAs tend to be a little more private and less focused on the EA community? Maybe older people are simply less interested, or don’t view it as a priority? Maybe the organizations that employ/hire coaches all prefer young people? Maybe this is a false perception and I’m engaging in sloppy generalization from only a few anecdotes?
And the other huge caveat is that you can’t really know a person’s professional background from a quick glance at their LinkedIn profile and the blurb they share on a website, any more than you can accurately guess age from a profile photo. People sometimes don’t list everything. I can see that someone earned a bachelor’s degree in 2019 or 2020 or 2021, but maybe they didn’t follow a “standard” path: maybe they had a 10-year career before that, in which case guesses about their being fairly young or junior are totally off. As always, drawing conclusions from tiny snippets of information with minimal context is treacherous territory.
I checked, and people who currently work at an EA org are only slightly older (median age 29 vs 28).