Because their leaders are openly enthusiastic about AI regulation, saying things like “better that the standard is set by American companies that can work with our government to shape these models on important issues” or “we need a referee”, rather than arguing that their tech is too far from AGI to need any regulation, or that the risks of AI are greatly exaggerated, as you might expect if they saw AI safety lobbying as a threat rather than an opportunity.
David T
I’m not sure that I buy that critics lack motivation. At least in the space of AI, there will be (and already are) people with immense financial incentive to ensure that x-risk concerns don’t become very politically powerful.
The current incentives still feel relatively small compared with the incentive to create the appearance that the existence of anthropogenic climate change is still uncertain. Over decades, advocates have succeeded in actually reducing fossil fuel consumption in many countries as well as securing less-likely-to-be-honoured commitments to Net Zero, and direct and indirect energy costs are a significant part of everyone’s household budget.
Not to mention that Big Tech companies whose business plans might be most threatened by “AI pause” advocacy are currently seeing more general “AI safety” arguments as an opportunity to achieve regulatory capture...
I don’t believe that the people currently doing high-quality x-risk advocacy would counterfactually be writing nasty newspaper hit pieces (these just seem like totally different activities), or that Timnit would write more rigorously if people gave her more money.
I don’t think that’s what the OP argues though.[1] The argument is that the people motivated to seek funding to assess x-risk as a full-time job tend disproportionately to be people who think x-risk, and the ability to mitigate it, are significant. So of course advocates produce more serious research, and of course people who don’t think it’s that big a deal don’t tend to choose it as a research topic (and on the rare occasions they put actual effort in, it’s relatively likely to be motivated by animus against x-risk advocates).
If those x-risk advocates had to do something other than x-risk research for their day job, they might not write hit pieces, but there would be blogs instead of a body of high quality research to point to, and some people would still tweet angrily and insubstantially about Sam Altman and FAANG.
Gebru’s an interesting example looked at the other way, because she does write rigorous papers on her actual research interests as well as issue shallow, hostile dismissals of groups in tech she doesn’t like. But funnily enough, nobody’s producing high quality rebuttals of those papers[2] - they’re happy to dismiss her entire body of work based on disagreeing with her shallower comments. Less outspoken figures than Gebru write papers on similar lines, but these don’t get the engagement at all.
I do agree people love to criticize.
- ^
the bar chart for x-risk believers without funding actually stops short of the “hit piece” FWIW
- ^
EAs may not actually disagree with her when she’s writing about implicit biases in LLMs or concentration of ownership in tech rather than tweeting angrily about TESCREALs, but obviously some people and organizations have reason to disagree with her papers as well.
I also think that it’s far from given that the option which would minimise consumer harm from monopoly would also minimise pressure to race.
An AI research institute spun off by the regulator under pressure to generate business models to stay viable is plausibly a lot more inclined to ‘race’ than an AI research institute swimming in ad money, which can earn its keep by incrementally improving search, ads and phone UX while generating good PR with its more abstract research along the way. Monopolies are often complacent about exploiting their research findings, and Google’s corporate culture has historically not been particularly compatible with launching the sort of military or enterprise tooling that represents the most obviously risky use of ‘AI’.
There are of course arguments the other way (Google has a lot more money and data than putative spinouts) but people need to predict what a divested DeepMind would do before concluding breaking up Google is a safety win.
I don’t think the “3% credence in utilitarianism” is particularly meaningful; doubting the merits of a particular philosophical framework someone uses isn’t an obvious reason to be suspicious of them. Particularly not when Sam ostensibly reached similar conclusions to Will about global priorities, and MacAskill himself has obviously been profoundly influenced by utilitarian philosophers in his goals too.
But I do think there’s one specific area where SBF’s public philosophical statements were extremely alarming even at the time, and he was making them whilst in “explain EA” mode. That’s when Sam made it quite clear that if he had a 51% chance of doubling world happiness vs a 49% chance of ending it, he’d accept the bet… a train to crazytown not many utilitarians would jump on, and also one which sounds a lot like how he actually approached everything.
Then again, SBF isn’t a professional philosopher and never claimed to be, other people have said equally dumb stuff and not gambled away billions of other people’s money, and I’m not sure MacAskill himself would even have read or heard Sam utter those words.
I also didn’t vote but would be very surprised if that particular paper—a policy proposal for a biosecurity institute in the context of a pandemic—was an example of the sort of thing Oxford would be concerned about affiliating with (I can imagine some academics being more sceptical of some of the FHI’s other research topics). Social science faculty academics write papers making public policy recommendations on a routine basis, many of them far more controversial.
The postmortem doc says “several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received” which suggests it might be internal messaging that lost them friends and alienated people. It’d be interesting if there are any specific lessons to be learned, but it might well boil down to academics being rude to each other, and the FHI seems to want to emphasize it was more about academic politics than anything else.
I think a dedicated area would minimise the negative impact on people who aren’t interested whilst potentially adding value (to prospective applicants in understanding what did and didn’t get accepted, and possibly also to grant assessors if commenters occasionally offered additional insight).
I’d expect there would be some details of some applications that wouldn’t be appropriate to share on a public forum, though.
I think the combination of a bottom-up approach, with local communities proposing their own improvements, and EA-style rigorous quantitative evaluation (which, like you say, would be best undertaken by evaluators based in similar LMICs) is potentially really powerful, and I’m not sure to what extent it’s already been tried in mainstream aid.
Or possibly even better from a funding perspective, turn that round and have an organization that helps local social entrepreneurs secure institutional funding for their projects (a little bit like Charity Entrepreneurship). Existing aid spend is enormous, but I don’t think it’s easy for people like Antony to access.
I also think there’s the potential for interesting online interaction between the different local social entrepreneurs (especially those who have already part-completed projects with stories to share), putative future donors and other generally interested Westerners who might bring other perspectives to the table. I’m not sure to what extent and where that happens at the moment.
I’d also extend this to people who have strong skills and expertise that aren’t obviously convertible into ‘working in the main EA cause areas’.
I think this is a key part. “Main EA cause areas” does centre a lot on a small minority of people with very specific technical skills and the academic track record to participate (especially if you’re taking your guidance from 80,000 Hours on that front).
But people can have a lot of impact in areas like fundraising with a completely different skillset (one that is less likely to benefit from a quantitative degree from an elite university) or earn well enough to give a lot without having any skills in research report writing, epidemiology or computer science.
And if your background isn’t one that the “do cutting edge research or make lots of money to give away” advice is tailored to at all, there are a lot of organizations doing a lot of effective good that really, really, really need people with the right motivations allied to less niche skillsets. So I don’t think people should feel they’re not a ‘success’ if they end up doing GHD work rather than paying for it, and if their organization isn’t particularly adjacent to EA they might have more scope to positively influence its impactfulness.
Also, people shouldn’t label themselves mediocre :)
I think everyone agrees that it’s harder to do cost-effectiveness analysis for speculative projects than it is for disease prevention, and that any longtermist cost/benefit analysis is going to have a lot more scope for debate on the numbers. But it is also harder to do cost-effectiveness analysis in terms of lives saved for other GHD measures like rural poverty alleviation (though if this project affects malnutrition it might actually be amenable to GiveWell-style analysis).
I think ultimately if every marginal dollar proposed to be spent on GHD has to demonstrate reasoning as to why it’s as good as or better than AMF at the margin, it’s only fair to demand similar transparency for community building and longtermist initiatives (with an acceptance of wider error bars).[1] Especially since there’s a marked tendency for the former to be outsider organizations and the latter to be organizations within the EA network...
I make no comment either way about the particular viability of this project. And I’d actually be quite interested in your more detailed thoughts on it, as whilst you’re not an expert on farming you clearly have in-depth knowledge of Uganda.
- ^
At the risk of boring on about Wytham, the bar seemed to be that it was net positive given lots of OpenPhil money was being directed to conference venues, not that it was better than buying a marginally inferior venue for a lot less money and donating the rest to initiatives that could save lives.
This feels like a good example of how GPT can generate coherent and topic-relevant prose which, on a deeper level, doesn’t actually make much sense.
If there are lots of “bidders” wanting to fund something, a charity or research project will normally want to accept funding from all of them, not just a “winner”.
And OpenPhil exists to donate money to causes it believes are most effective and neglected, so picking projects that already have funding secured [in competition with the original funders] seems like a strange way to go about it.
OK, I guess the tone of my original reply wasn’t popular (which is fair enough).
The OP raised the subject of a non-trivial proportion of people perceiving EA as a ‘phyg’ as a problem, and suggested with moderately high confidence that the transition to a “professional association” would radically reduce this. I’m not seeing it. Plenty of groups recruiting students brand themselves “movements” for “doing good” in some general way whilst being relatively unlikely to be accused of being a cult (climate change and civil/animal rights activists, fair-traders, volunteering groups etc.).
And I suspect far more people would say that the International Association of Scientologists and the Association of Professional Independent Scientologists, which both adopt the structure and optics of professional membership bodies, are definitely cults. (Obviously there are many more reasons to consider Scientology a cult, but if anything I’d think the belief-system-under-a-professional-veneer approach looks more suspicious rather than less. At any rate, forming professional membership bodies definitely isn’t something actual cults don’t do.)
So if people are perceiving EA as a cult it’s probably their reaction—justified or otherwise—to other things, some of which might be far too important to dispense with like Giving Pledges and concern about x-risk, and some of which might be easily avoided like reading from scripts (and yes, substituting ordinary words for insider jargon like ‘phyg’). Other ways to dispel accusations that EA is a cult (if it is indeed a problem) feels like the subject for an entirely different debate, but I’d genuinely be interested in counter-arguments from anyone who thinks I’m wrong and changing the organization structure is the key.
Even the Hanania article you linked to entitled “Diversity Is Our Strength” contains as one of its core arguments the suggestion that Hispanic immigrants might be won over to his support for “war with civil rights law” by “comparing them favorably to genderfluid liberals and urban blacks”.
The next sentence links to one of his own tweets about how “selling immigrants on hating liberals would be the easiest thing in the world”, featuring a video of Muslims protesting in favour of LGBT book bans.
Perhaps you don’t find this style of politics repugnant, perhaps it even represents a marginal improvement on his prior beliefs, but I don’t think it’s one EA should be endorsing.
Is continued membership of the local group at all necessary for the career? Ultimately if you don’t like people, you probably don’t want to network with them to have the best possible chance of working alongside them. I know some EA cause areas are niche, but I think there are generally more people working in them who won’t be attending your local group than are, and ultimately developing your technical skill and getting good references from your colleagues is going to matter more.
The Germany argument works better the other way round: there were plenty of non-communist alternatives to Hitler (and the communists weren’t capable of winning at the ballot box), but a lot of Germans who didn’t share his race obsession thought he had some really good ideas worth listening to, and then many moderate rivals eventually concluded they were better off working with him.
I don’t think it’s “punishing” people not to give them keynote addresses and citations as allies. I doubt Leif Wenar is getting invitations to speak at EA events any time soon, not because he’s an intolerable human being but simply because his core messaging is completely incompatible with what EA is trying to do...
I think you’re absolutely right that the evidence strongly supports capitalism being less than ideal from a utilitarian perspective but doesn’t support any putative drop-in replacement system (and provides a lot of evidence that revolutions are a terrible idea in most places). As Churchill once said of Parliamentary democracy, it’s the worst system apart from all the others that have been tried.
But I would think for critics of capitalism there are plenty of feasible options short of completely eliminating it. The Nordic model is (for better and for worse) notably less capitalistic than the United States model, for example. Many well-established problems with market economies like externalities and lack of public goods (and utilitarian issues like wealth inequality) are feasibly solvable to a much greater extent than under the present system, and seem to fall into the category of “system[s] involving power structures for which there is a lot of attention on the left but almost no attention within EA”.
I think there are other reasons why EA doesn’t get involved (the problem already has a lot of attention, achieving change is extremely difficult and costly, and given uncertainty and different starting premises EAs are highly unlikely to agree with each other, though the latter hasn’t stopped them exploring other fields). I’m not sure getting more actively involved in the politics of economic distribution would actually improve EA as a movement, pursue the ‘right’ goals or achieve any success, though.
Who is this candidate, what are their policies, and what is it about them that will get >80 million people distributed effectively across the country to vote for them?
Even if a significant proportion of Congresspeople were in theory willing to bear the political consequences of giving primary voters and party hierarchy the proverbial middle finger by participating in some backroom scheme to hamstring their own candidate, you’re not going to sign anyone up without an answer to that question.
Yeah. Effective Altruism isn’t a profession, and nothing would scream “cult” more than trying to rebrand as a professional organization primarily to try to convince people it isn’t a cult! (Not even unnecessary use of ingroup jargon like the LessWrongish ROT13ing of the word into “phyg”!) Even more so if like most actual professional organizations, people were assigned compulsory training and CPD and sanctioned for deviating from the official position. The “cult” impressions/accusations are nothing to do with lack of formal membership structure, so introducing one won’t make them go away. Scientology has “professional organizations”.
Also agree there’s possibly more scope for actual professional organizations in some specific areas (e.g. charity founders, grantmakers and AI safety professionals), but more for potential opportunities to share knowledge with other people working in the field outside EA than as a rebranding exercise.
Agreed. GiveWell has revised their estimates numerous times based on public feedback, including dropping entire programmes after evidence emerged that their initial reasons for funding were excessively optimistic, and is nevertheless generally well-regarded, including outside EA. Most people understand its analysis will not be bug-free.
OpenPhil’s decision to fund Wytham Abbey, on the other hand, was hotly debated before they’d published even the paragraph summary. I don’t think declining to make any metrics available except the price tag increased people’s confidence in the decision making process, and participants in it appear to admit that with hindsight they would have been better off doing more research and/or more consideration of external opinion. If the intent is to shield leadership from criticism, it isn’t working.
Obviously GiveWell exists to advise the public, so sharing detail is their raison d’être, whereas OpenPhil exists to advise Dustin Moskovitz and Cari Tuna, who will have access to all the detail they need to decide on a recommendation. But I think there are wider considerations to publicising more on the project and the rationale behind decisions, even if OpenPhil doesn’t expect to find corrections to its calculations useful:
- Increased clarity about funding criteria would reduce time spent (on both sides) on proposals for projects OpenPhil would be highly unlikely to fund, and probably improve the relevance and quality of the average submission.
- There are a lot of other funders out there, and many OpenPhil-supported causes have room for additional funding. Publicly-shared OpenPhil analysis could help other donors conclude particular organizations are worth funding (just as I imagine OpenPhil itself is happy to use assessments by organizations it trusts), ultimately leading to its favoured causes having more funds at their disposal.
- Or EA methodologies could in theory be adopted by other grantmakers doing their own analysis. It seems private foundations are much happier borrowing more recent methodological ideas from MacKenzie Scott, but generally have a negative perception of EA. Adoption of TBF might be mainly down to its relative simplicity, but you don’t exactly make a case for the virtues of the ITN framework by hiding the analysis...
Lastly, whilst OpenPhil’s primary purpose is to help Dustin and Cari give their money away, it’s also the flagship grantmaker of EA, so the signals it sends about effectiveness, rigour, transparency and willingness to update have an outsized effect on whether people believe the movement overall is living up to its own hype. I think that alone is a bigger reputational issue than a grantmaker using a disputed figure or getting their sums wrong.
The non-reputational costs matter too, and it’d be unreasonable to expect enormously time-consuming GiveWell- and CE-style analysis for every grant, especially with the grants already made and recipients sometimes not even considering additional funding sources. But there’s a happy medium between elaborate reasoning/spreadsheets and a single paragraph. Even publishing sections from the original application (essentially zero additional work) would be an improvement in transparency.
I don’t disagree that these are also factors, but if tech leaders are pretty openly stating that they want the regulation to happen and they want to guide the regulators, I think it’s accurate to say that they’re currently more motivated to achieve regulatory capture (for whatever reason) than to ensure that x-risk concerns don’t become a powerful political argument, as the OP suggested. That was the fairly modest claim I made.
(Obviously far more explicit and cynical claims about, say, Sam Altman’s intentions in founding OpenAI exist, but the point I made doesn’t rest on them)