Don’t trust loose-cannon individuals? Don’t revere a single individual and trust him with deciding the fate of such an important org?
But absolutely, and yet a big part of the EA community seems to be pro-Altman! That was my point; I might not have been clear enough, thanks for calling attention to this.
Reading MacAskill’s AMA from 4 years ago about what would kill EA, I can’t help but find his predictions chillingly realistic!
The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism) = The OpenAI reshuffle and the general focus on AI safety have increased the mainstream public’s caution towards EA
A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate) = SBF debacle!
Fizzle—it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion = This one hasn’t happened yet, but it’s the obvious structural risk and still has a chance to happen
When will we learn? I feel that we haven’t taken seriously the lessons from SBF, given what happened at OpenAI and the split in the community concerning support for Altman and his crazy projects. Also, as a community builder who talks to a lot of people and does outreach, I hear a lot of harsh criticism of EA (‘self-obsessed tech bros wasting money’), and while it’s easy to think that these people speak out of ignorance, ignoring the criticism won’t make it go away.
I would love to see more worry and more action around this.
Interesting! This is very similar reasoning to what CE suggested from the start; nice to see this moving forward and getting more financial support.
Your insights have been incredibly valuable. I’d like to share a few thoughts that might offer a balanced perspective going forward.
It’s worth noting the need to approach the call for increased funding critically. While animal welfare and global health organizations might express similar needs, the current emphasis on AI risks often takes center stage. There’s a clear desire for more support within these organizations, but it’s important for OpenPhil and private donors to assess these requests thoughtfully to ensure they are genuinely justified.
The observation that AI safety professionals anticipate more attention within Effective Altruism for AI safety compared to AI governance confirms a suspicion I’ve had. There seems to be a tendency among AI safety experts to prioritize their field above others, urging a redirection of resources solely to AI safety. It’s crucial to maintain a cautious approach to such suggestions. Given the current landscape in AI safety—characterized by disagreements among professionals and limited demonstrable impact—pursuing such a high-risk strategy might not be the most prudent choice.
In discussions with AI safety experts about the potential for minimal progress despite significant investment in the wrong direction over five years, their perspective often revolves around the need to explore diverse approaches. However, this approach seems to diverge considerably from the principles embraced within Effective Altruism. I can understand why a community builder might feel uneasy about a strategy that, after five years of intense investment, offers little tangible progress and potentially detracts from other pressing causes.
This phrase is highly sexist and doesn’t mean anything, especially since the demographics have barely changed (from 26% to 29% women... I wouldn’t call that a shift in demographics). And what is that supposed to mean, that women cannot use strong quantitative evidence? I don’t need to say how ridiculous that is.
I don’t see the point of this text. It doesn’t touch upon anything specific, remaining very vague as to what the ‘old values’ are. The point about charities is also surprising given OpenPhil’s switch to GCR, funding fewer and fewer neartermist charities (human ones at least; animal-based charities might get more funding given the current call for that).
Not only is it killing—if the fetus is sentient, it’s likely quite painful.
So what? Do we forbid abortion and condemn women to have these children? Or should we rather talk about policies to ensure that we don’t need abortions anymore—that is, making contraceptives widely available and free, and educating men and women from the youngest age about the need for effective contraception?
You talk about the risk of conception being known—men know, but some pay very little attention to the consequences nevertheless. So, should we find binding ways to force men to care?
I hope this conversation sounds as interesting as regulating women’s bodies in the first place, because it’s a conversation we must have if we start talking about removing women’s ability to choose.
Straight to the point
The framework is naïve and hypocritical. Using the American terms ‘pro-life’ and ‘pro-choice’ is already a political choice, and it sure doesn’t give the same vibes as simply using the word abortion.
The chicken comparison made in the comments effectively highlights how hypocritical this is. Will we stop killing chickens? I don’t think so.
Maybe you should wonder in the first place why these women end up in this situation, and maybe we should tackle that cause. Are abortions the issue, or the fact that men put women in these situations? And now men want to regulate what happens once other men have done the deed? That sounds harsh.
Have you thought about the consequences of advocating for this? How much does this kind of post legitimize right-wing, conservative discourse while EA is supposed to be “apolitical” (I should know, it’s what I get every time I talk about structural change)? Does moral uncertainty make up for those kinds of bad consequences?
Appendix one is about religion. It has no place here.
This kind of topic deserves awareness on all accounts—not just examining one aspect like whether the foetus feels pain, but also whether we are ready to destroy a right that has been painfully gained and is under threat. This post comes across as incredibly insensitive in the context of Roe v. Wade, and packed with misogyny, as it gives no credit to the argument that this is above all a choice women should be able to make. One line is not enough; what kind of honesty is it to say ‘I know it matters, but here’s how I’ll demonstrate that it doesn’t’?
Hey Charlie, great post—EAs tend to forget to look at the big picture, or when they do it’s very skewed or simplistic (assertions that technology has always been for the best, etc). So it’s good to get a detailed perspective on what worked and what did not.
I would simply add that there is a historical/cultural component that determines each protest’s chances of success, and it should not be forgotten. For example, in Sweden, a well-functioning democracy, it’s no surprise that the government would pay attention to the protests; I’m not sure this would work in, say, Iran if people were pushing back against nuclear. In Kazakhstan, the political climate at the time was freer than it is now: it was the end of the Soviet Union, and there was a wind of freedom and detachment from the USSR that made more things possible.
Adding this component, i.e. understanding the political dynamics and how free or responsive a government is to bottom-up contestation, would probably help in deciding whether to advocate for protests and even put money into them. EAs might not be sold on activism, but if they’re shown that there’s a decent possibility of impact they might change their minds.
Hi Vasco, thanks for your comment, it’s really interesting, but I can’t see the first two pictures; maybe others can’t either?
“That worry is particularly pronounced when the actions and fortunes of a handful of mega-donors weigh heavily on the whole movement’s future”
This is the most relevant part, and the most dangerous as well. It’s hard not to share these worries. I would love to see them addressed for good by the leadership. Diversifying funding is hard, but it seems absolutely necessary given the strings that currently come attached.
Sure! So far there are arguments showing how general AI, or even superintelligence, could be created, but the timelines vary immensely from researcher to researcher, and there is no data-based evidence that would justify pouring all this money into it. EA is supposed to be evidence-based, and yet all we seem to have are arguments, not evidence. I understand this is the nature of these things, but it’s striking to see how rigorous EA tries to be when measuring GiveWell’s impact versus the impact created by pouring money into AI safety for longtermist causes. It feels that impact evaluation when it comes to AI safety is non-existent at worst and not good at best (see the useful post by RP about impact-based evidence regarding x-risks).
EA people might have written thousands of posts on the reasons behind the dangers of AI, but these posts were made for the community, assuming people already believed in Bostrom’s work, etc. These posts use a certain jargon and assume many things that a mainstream audience doesn’t assume at all. And advocacy about AI is still very much debated and far from granted (see the recent post on advocacy on the forum; I can link it if people are confused). Also, the way OpenPhil allocates money, and its clear concentration on GCR to the detriment of more neartermist, global health causes, is a fact, and its influence on Congress is also a fact. Facts can be skewed towards a certain perspective, true, but they’re there.
Despite the feeling some might have that most in EA consider AI-related existential risks THE most pressing issue, I’m not sure how true that is. The forum is a nice smokescreen, given that the people posting and commenting on these posts are always the same, and the Rethink Priorities survey is NOT representative. I ran the numbers for my own EA group against the trends reported in the RP priorities survey and they don’t align; the survey is clearly skewed towards a minority of forum regulars and AI aficionados.
So we can be mad all we want and rant that these journalists are dense (I don’t deny Politico’s bad coverage of EA, btw; it’s just not the only outlet reaching these conclusions), but as long as we don’t take advocacy seriously and try to get these arguments out there, nothing better will happen. So let’s take these articles as an opportunity to do better, instead of taking our arguments for granted. There is work to do on this inside and outside the community.
And let me anticipate the downvotes that these opinions usually get me (quite bad, btw, for a community that is supposed to seek truth and not just give in to the human impulse of ‘I don’t like it, I’ll downvote it without arguing’): if you disagree on these specific points, let me know why. Be constructive. It’s also an issue: imagine a journalist creating an account to better understand the EA community and comment on posts, who gets downvoted every time he dares to raise negative opinions or ask uncomfortable questions about AI safety. Well, so much for our ability to be constructive.
This post might need to be updated, as China has made a big step forward in terms of regulation: at the third Belt and Road Forum for International Cooperation, President Xi announced the ‘Global AI Governance Initiative’ (GAIGI) for participating countries of the Belt and Road Initiative (i.e. China’s $1 trillion global infrastructure program).
Thank you for doing this! Highly helpful and transparent; we need more of this. I have many questions, mostly on a meta level, but the questions about AI safety are the ones I’d most like answered.
About AI safety:
What kind of impact or successes do you expect from hiring these 3 senior roles in AI safety? Can you detail a bit the expected impact of creating these roles?
Do you think that the AI safety field is talent-constrained at the senior level, but has its fair share of junior positions already filled?
About the ratio of hires between AI safety and biorisks:
Given the high number of positions in biosafety, should we conclude that the field is more talent-constrained than AI safety, which seems to need less of a workforce?
More diverse considerations about GCR
Do you intend to dedicate any of these roles to nuclear risk, to help address the lack of funding in that field, or is it rated rather low in your cause prioritization ranking?
About cause-prioritization positions
What kind of projects do you intend to launch? Can you be more specific about the topics that will be researched in this area? Also, what kind of background knowledge is needed for such a job?
Thank you so much for your answers!
Thank you for this extensive overview! Definitely relevant, and I’ll be sure to keep it for my coachees who want to switch careers into nuclear-related paths. Do you know of a directory of experts working in these institutions whom my coachees could contact with questions about these jobs?
Thank you for seriously tackling a topic that seems to be overlooked despite its huge significance.
I agree with you, Nick, when you say that we should present AI risks in a much more human way; I just don’t think that’s the path taken by the loudest voices on AI risks right now, and that’s a shame. And I see no incompatibility between good epistemics and wanting to make the field of AI safety more inclusive and kind, so that it includes everybody and not just software engineers who went into EA because there was money (see the post on the great amount of funding going to AI safety positions that are paid 3x what researchers working in hospitals earn, etc.) and prestige (they’ve been into ML for so long and now is their chance to get opportunities and recognition). I want to dive deeper into how EA-oriented these new EAs are, if we talk about the core values that created the EA movement.
On a constructive note, as a community builder, I am building projects from the ground up that focus on the role of AI risks with regard to soaring inequalities, or the possibility of AI being used by a tyrannical power, themes that clearly signal impact for everyone, rather than staying in the realm of singletons and other abstract figures just because it’s intellectually satisfying to think about these things.
The board did great, I’m very happy we had Tasha and Helen on board to make AI safety concerns prevail.
What I’ve been saying from the start is that this opinion isn’t what I’ve seen on Twitter threads within the EA/rationalist community (I don’t give much credit to tweets, but I can’t deny the role they play in the AI safety cultural framework), or even on the EA Forum, Reddit, etc. Quite the opposite, actually: people advocating for Altman’s return and heavily criticizing the board for its decision (I don’t agree with the shadiness that surrounds the board’s decision, but I nevertheless think it’s a good decision).