To extend your comment about lower standards for EA criticism, I thought the remainder of Venkatasubramanian’s quote was quite interesting (bolding mine):
“‘...Terminator, blah blah blah,’” Venkatasubramanian said. “I think it’s important to ask, what is the basis for these claims? What is the likelihood of these claims coming to pass? And how certain are we about all this?”
The EA community has written quite a bit about the basis, likelihood, and certainty of its claims. A few minutes of Googling and reading EA content could have disabused the reporter of the notion that EA has unserious, careless opinions about long-term AI risk.
I wonder if either of these things is happening:
The reporter or the outlet or both have a level of antipathy for EA that precludes due diligence.
The reporter could have attempted basic due diligence but mainly encountered sources that have a very negative take on EA. After all, most popular articles about EA are negative. If you Google “effective altruism AI,” the first result is this Wired article with a very negative take on EA and AI. There are a few top-level results from 80K and EA.org, but most of the remaining first-page results are skeptical or negative takes on EA.
Either way, EA’s public image (specifically regarding AI) is not ideal. I like your suggestion about making a greater effort to visibly signal cooperativeness!
The article nevertheless portrays the EA community as if it is pushing frivolous, ill-considered ideas instead of supporting the Real, Serious concerns held by Thoughtful and Reasonable people.
What bothers me is that many criticisms of EA that hinge on “EA is neglecting this angle in a careless and malicious manner” could have been addressed with basic Googling.
I don’t expect the average Joe to actively research EA, but someone who’s creating a longform written or video essay with multiple sources should be held to higher standards.
One example: “Effective Altruism and the Cult of Rationality: Shaping the Political Future from FTX to AI” (Columbia Political Review, cpreview.org)
Thus, we must be wary of the power behind a mindset focused solely on the hypothetical future and allow space and empathy for the short term needs of society. A tyranny of the quantifiably rational majority would lead to more quantifying of human suffering than policy change.
This conclusion somehow manages to completely ignore the neartermist cause areas, the frequent discussions about prioritising neartermism vs. longtermism, and the research on neartermism that does focus on qualitative wellbeing. I genuinely don’t know how someone can read dozens of pages about EA and not come across any reference to neartermism.
Ultimately, I’ve read so many EA criticism pieces where it feels like the writer hasn’t talked to any EAs, or conveniently ignores that most EAs spend most of their work time thinking about how to solve real problems that affect people.
These articles paint a picture of EAs so completely divorced from my actual interactions with EAs. The actual object-level work done by EA orgs is often described in 1-2 sentences, while an entire article is devoted to organisational drama/conflict. Like … how do you talk about EA without mentioning the work EAs do to … solve problems???
EA people might have written thousands of posts on the reasons behind the dangers of AI, but these posts were made for the community, assuming people already believed in Bostrom’s work etc. These posts use a certain jargon and assume many things that a mainstream audience doesn’t assume at all. And whether to do advocacy about AI is still very much debated and far from settled (see the recent post on advocacy on the forum; I can link it if people are confused). Also, the way OpenPhil allocates money, with its clear concentration on global catastrophic risks to the detriment of more neartermist, global health causes, is a fact, and its influence on Congress is also a fact. Facts can be skewed towards a certain perspective, true, but they’re there.
Despite the feeling some might have that most in EA consider existential risks related to AI to be THE most pressing issue, I’m not sure how true that is. The forum is a nice smokescreen, given that the people posting and commenting on these posts are always the same, and the Rethink Priorities survey is NOT representative: I ran the numbers for my own EA group against the trends the RP survey reported for it, and they don’t align; the results are clearly skewed towards a minority that is always on the forum and towards AI aficionados.
We can be mad all we want and rant that these journalists are dense (I don’t deny Politico’s bad coverage of EA, btw; it’s just not the only outlet reaching these conclusions), but as long as we don’t take advocacy seriously and try to get these arguments out there, nothing better will happen. So let’s take these articles as an opportunity to do better, instead of taking our arguments for granted. There is work to do on this both inside and outside the community.
And let me anticipate the downvotes that these opinions usually get me (quite bad, btw, for a community that is supposed to seek truth and not just give in to the human impulse of ‘I don’t like it, I’ll downvote it without arguing’): if you disagree on these specific points, let me know why. Be constructive. It’s also an issue in itself: imagine a journalist who creates an account to better understand the EA community and comment on posts, and who gets downvoted every time they dare to raise negative opinions or ask uncomfortable questions about AI safety. Well, so much for our ability to be constructive.
Can you give some examples here? What are some uncomfortable questions about AI safety (that a journalist might ask)?
Sure! So far there are arguments showing how general AI, or even superintelligence, could be created, but the timelines vary immensely from researcher to researcher, and there is no data-based evidence that would justify pouring all this money into it. EA is supposed to be evidence-based, and yet all we seem to have are arguments, not evidence. I understand this is the nature of these things, but it’s striking to see how rigorous EA tries to be when measuring GiveWell’s impact versus the impact created by pouring money into AI safety for longtermist causes. It feels like impact evaluation when it comes to AI safety is non-existent at worst and not good at best (see the useful Rethink Priorities post about impact-based evidence regarding x-risks).
Yeah, I think it’s quite likely that these reporters are basing their opinions on others’ opinions, rather than on what people from the EA and AI safety communities are actually saying. I wonder if some search engine optimization could help with this?
This part was also interesting & frustrating:
“Regulations targeting these frontier models would create unique hurdles and costs specifically for companies that already have vast resources, like OpenAI and Anthropic, thus giving an advantage to less-resourced start-ups and independent researchers who need not be subject to such requirements (because they are building less dangerous systems),” Levine wrote (emphasis original).
I think this is a really good point that needs to be signal-boosted more (and obviously policy needs to actually do this). But then the article immediately follows it with this refutation, without any arguments 😭:
Many AI experts dispute Levine’s claim that well-resourced AI firms will be hardest hit by licensing rules. Venkatasubramanian said the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on “risky” AI would put today’s leading companies in the pole position.