I’m linking and excerpting a submission to the EA criticism contest published by a pseudonymous author on August 31, 2022 (i.e. before the collapse of FTX).
The submission did not win a prize, but was highlighted by a panelist:
I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don’t agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn’t already on the forum (afaict) also contributed to its inclusion here.
Due to concerns about copyright, I’m excerpting the post besides the summary and disclaimer, but I recommend reading the whole piece.
Summary
Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donor to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.
By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donors, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet, the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.
In practice, Sam Bankman-Fried has enjoyed highly favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics linked with his interests (quantitative trading, cryptocurrency, the FTX firm…).
In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interest. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are liable to be affected by incentives conflicting with its stated mission. These incentives are not just financial in nature; they can also be linked to influence and prestige, or even emerge from personal friendships or other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it act to minimize conflicts of interest.
Introduction — Cryptocurrency is not neutral (neither morally nor politically)
… Cryptocurrency is not simply an attempt to provide a set of technical solutions to improve existing currency systems. It is an attempt to replace existing monetary institutions with a new political system; it is therefore political at its core...
My point here is not to debate the virtues of the societal model promoted by cryptocurrency actors, but rather to convince readers unfamiliar with the cryptocurrency industry that it is deeply infused with political ideology and is certainly not a purely technological response to a technical problem. The cryptocurrency response to monetary policy questions manifests a specific worldview accompanied by a specific set of moral values.
EA’s reliance on funding from the cryptocurrency industry
...These incentives are not just monetary. As the crypto industry grows and SBF gains in wealth, influence and prestige, EA benefits by receiving more funding but also by extending its own area of influence and its prestige. Conversely, attacks on the image of SBF, FTX and even crypto as a whole carry the risk of tarnishing EA’s reputation. Were SBF to be involved in an ethical or legal scandal (whether in his personal or professional life), the EA ecosystem would inevitably be damaged as well. As a result, the EA community has an incentive to protect SBF’s reputation, to counter criticism of him from outside and to stifle criticism from inside the community (this incentive can act between EA members, with voiced criticisms being ignored, downplayed or treated with suspicion, but it can also act via self-censorship, conscious or not).
How EA views cryptocurrency
...Given that the adoption of cryptocurrency has massive political implications for the future of our societies and carries with it very strong ideological foundations, it should, at first glance, seem slightly surprising that the EA community does not visibly engage critically with this topic on a deeper political level. But this is not as surprising when considering EA’s reliance on crypto wealth for funding. As explained above, the EA community is powerfully primed to view cryptocurrency positively, if only by the direct financial benefits it collects from the industry. Moreover, the incentives at play are likely effective inhibitors of contrarian views (notably by means of self-censorship)...
EA’s ineffective mechanisms to protect itself against conflicts of interest
… EA claims to aim to do “the most good” using the tools of rationality and critical thinking. So what does the EA ecosystem do to mitigate the risk that EA members act according to bias-inducing incentives?
As far as I can tell, the systemic safeguards against conflicts of interest in the EA ecosystem are very limited...
These two main forms of promoting debate (internal forums and invitations for criticism) are not nearly sufficient as mechanisms to prevent the establishment of conflicts of interest. Yet, they appear to be the only ones the Effective Altruism community relies on.
Conflicts of interest need to be addressed whether they have real effects or not
...Fundamentally, the problem I want to highlight is not even whether the EA ecosystem is actually influenced by the incentives it is subject to. These incentives exist, and they are left unchecked. This is the primary issue.
Whether these incentives have actual effects on EA is almost secondary to the fact that EA seems to be unable to recognize, publicize and mitigate its engagement in conflicts of interest… For EA, incentives like the ones related to SBF are in direct conflict with EA’s stated mission of “using evidence and reason to figure out how to benefit others as much as possible” (quote from the Centre for Effective Altruism).
What should EA do?
It appears clear that EA does not consider itself to be at any real risk of falling prey to conflicts of interest. This seems to be the only way to explain the blind spot EA suffers from when it comes to recognizing the incentives associated with relying on donations from tech billionaires, as obvious as these may be in the particular case of donations by SBF.
Identifying and publicizing obvious sources of potential conflicts of interest
A necessary (but not sufficient) first step would be to acknowledge existing incentives and recognize their potential effects...
I will briefly address a counter-argument that could be made along the lines of: “SBF’s profile on 80,000 Hours clearly mentions SBF’s contributions to EA-aligned causes. Therefore EA is transparent about its funding, and therefore EA does not suffer from issues of undisclosed conflicts of interest.” Indeed, SBF’s contributions are mentioned at length in EA publications. But I have seen no instance where these contributions were flagged as a source of potential conflict of interest. On the contrary, SBF is framed as a prime example of Earning to Give; he is presented as an example to follow, a person to admire, to take inspiration from and to be grateful to, which does nothing to warn against potential conflicts of interest.
Going further
It seems crucial that EA, if it values independence of thought and critical thinking, should engage in an in-depth examination of the role that incentives are allowed to play in the organization...
Publicizing existing conflicts of interest achieves little if not accompanied by a significant effort to understand how conflicts of interest are allowed to appear, how to minimize their potential effects, how to strengthen counterpowers within the organization to foster accountability, and how to prevent EA from becoming more conflicted and instead reduce the number and strength of existing conflicts of interest.
There is a clear trade-off between 1) expanding the available resources of a non-profit organization and 2) protecting said organization from potential conflicts of interest. My opinion is that EA as a community should probably think hard about where it stands on this trade-off...
Conclusion
...It would be pointless to aspire to building an organization in which conflicting incentives are completely eliminated. At the same time, it would be completely illusory to think that individuals can consciously decide to free themselves from the biases associated with incentives of all kinds. Systemic safeguards are essential, all the more so when an organization aims to hold itself to high standards of rationality. Hopefully, the EA movement will remember this sooner rather than later.
Disclaimers
This post deals with conflicts of interest, so it is only natural that I should be particularly transparent about the incentives that played into its writing.
First, I did not receive any funding for writing this post, and I have no affiliation to the EA movement.
I wrote this post aiming to submit it to EA’s criticism contest and was thus incentivized to write an effective critique of EA, but one that would not be too antagonizing to the jury of the contest (I believe that the jury is mainly composed of members of EA). I did my best to resist this incentive and aimed to not water down my thesis too much.
By making a pseudonymous submission, I am shielding myself from the fear of reputational damage, which could otherwise have been a powerful incentive to self-censor.
I think this criticism can be extended beyond cryptocurrency, to social media. Specifically, EA is heavily reliant on funding from Dustin Moskovitz, co-founder of Facebook. (I’m fairly ignorant of the details of Moskovitz’s finances; I believe he still owns shares in Meta and so has at least some stake in the company, but I could be off-base here.)
Social media is criticised for a lot of things, but here I’m just going to link the following article, because it’s recent, and because it seems topical to a lot of EA global health/development stuff: Meta faces $1.6bn lawsuit over Facebook posts inciting violence in Tigray war.
There’s a story here that goes ‘Man who’s made billions from technology that significantly damages social and political institutions (including by spreading misinformation about elections, covid and vaccines, and by allowing people to spread abuse and incite violence) now wants to use that money for the good of society’. And to the degree that you think that’s true, you might think that the harms done by Meta outweigh the good done by Open Philanthropy.
---
There’s a critique of EA that goes ‘EA is more focused on individual donations than systemic change’. I used to think this was off-base, because there are plenty of EAs who want to do system-changing things, like advocating for animal welfare laws or working in government policy.
Now I read this criticism more like:
“By relying on one or two extremely rich donors for a large portion of EA funding, EA is less likely to advocate for the kind of systemic change that would be harmful to the financial interests of these donors”,
or (and I’m thinking of crypto and social media here):
“By relying on one or two extremely rich donors who’ve made their fortunes in ‘disruptive’ technology, EA is less likely to be critical of the harms that these technologies do to the world”.
And I actually think that’s quite a valid criticism.
I don’t believe Dustin has been involved in Facebook for many years; he is several years into running his new startup (Asana), and I doubt there’s any real obligation to like Facebook arising from that (at least, I do not perceive there to be, and given my impression of Dustin from Twitter it would be pretty surprising to me if others did).
I’m not that concerned about this.
First off, it is very hard to find a funding source that doesn’t create a conflict of interest. With ten megadonors in ten different fields, the conflicts would not be as acute but would be more broadly distributed. Government support brings conflicts. Relying on an army of small, mildly engaged donors creates “conflicts” of a different sort—there is a strong motivation to focus on what looks good and will play to a mildly-engaged donor base rather than what does good.
The obvious risk for conflict of interest is that the money impedes or distorts the movement’s message. It’s generally not a meaningful problem for a kidney-disease charity to have financial entanglement with a social media company; it would very much be a problem for the American Academy of Pediatrics. It seems relatively unlikely that, applying the principles of EA, being critical of the harm social media creates and/or advocating for systemic change that would specifically or disproportionately tank Meta/Asana stock would be priority cause areas.
It seems more likely to me that faithful application of EA principles would lead down a path that is contrary to the interests of very wealthy donors more generally. But that is a hard problem to get around for a movement that wants to have great impact and needs loads of funding to do it.
I think conflict of interest is what leads to existential risk from AI rising to be the most important issue in EA, even though it’s based on dubious reasoning and extrapolations that many people at the forefront of AI development don’t think make sense from a capabilities perspective. It’s been sufficient for senior people in EA to take it seriously, and given that these same folks control resource allocation, it ends up driving a lot of what the community thinks. This bias clearly reveals itself when talking to some EAs who are terrified of AI but don’t know anything about how it works, have no idea what actual AI researchers think, and don’t know what obstacles those researchers are trying to overcome. It seems like 95% of the people in EA who are terrified about existential risk from AI just defer to other people who speak about things they don’t really comprehend, but because those people control the money and status in the community, they assume they can be trusted. How can the folks who are funding AI safety research be considered objective when they are the same folks who are considered the producers of authoritative content on AI safety, and who have familial or intimate relationships with top AI researchers? I don’t question the sincerity of these folks in their beliefs, but given the nature and structure of the situation I cannot trust that EA can come to the correct conclusions about this specific topic. I also think this is a mess that cannot be untangled and will have to run its course until EA doesn’t have money to burn.
The same conflict of interest argument applies to ML engineers who have every reason to argue that their work isn’t leading to the potential death of everyone on Earth.
And also to people who are significantly invested in other cause areas and feel it diminishes the importance of their work.
Unfortunately, I think the conflict of interest line of thought ends up being far more expansive, in a way that impinges on basically everyone.
It’s a lot more direct with AI though. AI safety org people and EA org people are often the same people, or are personal friends, or at least know each other in some capacity. This undeniably grants them advantages compared to some far-off animal rights org. Social aspects give their ideas more access, more consideration, and less temptation to be written off as crazy. If someone found decisive proof that AI safety was nonsense, I’m sure they would publish it, but they might be sad about putting some of their personal friends out of jobs, making them look foolish, etc. I think this bias seeps, at least a little bit, into consideration of AI safety.
There is a difference. ML engineers actually have to follow up their claims by making products that work and earn revenue, or by successfully convincing a VC to keep funding their ventures. The source of funding and the ones appealing for the funding have different interests. In this regard, ML engineers have more of an incentive to oversell the capabilities of their products than to downplay them. It’s still possible for someone to burn their money funding something that won’t pan out, and this is a risk investors have to take (I don’t know of any top VCs as bullish on AI capabilities, on timelines as aggressive as EA folks’). In the case of AI safety, some of the folks who are in charge of the funding are also the loudest advocates for the cause, as well as some of the leading researchers. The source of funding and the ones using the funding are commingled in a way that creates a conflict of interest that seems rather more problematic than what I’ve noticed in other cause areas. But if such serious conflicts do exist, then those too are a problem and not an excuse to ignore conflicts of interest.
Not really? Yes, I do think that EA probably has a conflict of interest re AI, though I don’t understand why actually having capabilities is a defense against the criticism that they are incentivized to ignore the risk. This is a symmetrical claim that admittedly does teach us to lower or raise our credences in things we have a stake in, but there’s no asymmetry.
I think they would have to believe there is a risk, but they are actually just trying to figure out how to make headway on basic issues. The point of my comment was not to argue about AI risk, since I think that is a waste of time: those who believe in it seem to hold it more like an ideological/religious belief, and I don’t think there is any amount of argumentation or evidence that can convince them (there is also a lot of material online where top researchers are interviewed and talk about some of these issues, for anyone actually interested in what the state of AI is outside the EA bubble). My intention was just to name that there is a conflict of interest in this particular domain that is having a lot of influence in the community, and I doubt there will be much done about it.
For people who haven’t been around for a while, the history of AI x-risk as a cause area is actually one of a long struggle for legitimacy and significant funding. 20 years ago, only Eliezer Yudkowsky and a handful of other people even recognised there was a problem. 15 years ago, there was a whole grass-roots movement of people (centred around the Overcoming Bias and LessWrong websites) earning to give to support MIRI (then the Singularity Institute), as they were chronically underfunded. 10 years ago, Holden Karnofsky was arguing against it being a big problem. The fact that AI x-risk now has a lot of legitimacy and funding is a result of the arguments for taking it seriously winning many long and hard battles. Recently, huge prizes were announced for arguments that it wasn’t a (big) risk. Before their cancellation, not much was produced in the way of good arguments imo. OpenPhil are now planning on running a similar competition. If there really are great arguments against AI x-risk being a thing, then they should come to light in response.
For those who want to deepen their knowledge of AI x-risk, I recommend reading the AGI Safety Fundamentals syllabus. Or better yet, signing up for the next iteration of the course (deadline to apply is 5th Jan).
I took that course and gave EA the benefit of the doubt. I was exposed to arguments about AI safety before I knew much about AI; it was very confusing stuff and there was a lot that didn’t add up, but I still gave the EA take the benefit of the doubt, since I didn’t know much about AI and thought there was something I just didn’t understand. I then spent a lot of time actually learning about AI and trying to understand what experts in the field think about what AI can actually do and what lines of research they are pursuing. Suffice it to say that the material on AGI safety didn’t hold up well after this process. The AI x-risk concerns seem very quasi religious. The story is that man will create an omnipresent, omniscient and omnipotent being. Such beings are known as God in religious contexts. More moderate claims hold that a being, or a multiplicity of beings, possessing at least one of these characteristics will be created, which is more akin to the gods of polytheistic religions. This being will then rain down fire and brimstone on humanity for the original sin of being imperfect, which manifests itself in the specification of an imperfect goal. It’s very similar to religious creation stories, but with the role of creator reversed; the outcome is the same: Armageddon. Given that the current prophecy seems to indicate that the apocalypse will come by 2030, it seems like there is an opportunity for a research study to be done on EA similar to the one done on the Seekers. Given that this looks very much like a religious belief, I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs. There will also be a selection bias towards people who are prone to this kind of ideological belief, similar to how some people are just prone to conspiracy theories like QAnon, although AI x-risk is a lot more sophisticated. At least the people who believe in mainstream religions are upfront that their beliefs are based on faith. The AI x-risk devotees also base their beliefs on faith, but it’s couched in incomprehensible rationality newspeak, philosophy, absurd extrapolations and theoretical mathematical abstractions that cannot be realized in practical physical systems, to give the illusion that it’s more than that.
I’d be interested in whether you actually tried that, and whether it’s possible to read your arguments somewhere, or whether you just saw superficial similarity between religious beliefs and the AI risk community and therefore decided that you don’t want to discuss your counterarguments with anybody.
There have been loads of arguments offered on the forum and through other sources like books, articles on other websites, podcasts, interviews, papers, etc. So I don’t think that what’s lacking is arguments or evidence. I think the issue is the mentality some people in EA have when it comes to AI. Are people who wait for others to bring them arguments to convince them of something really interested in getting different perspectives? Why not just go look for differing perspectives yourself? This is a known human characteristic: if someone really wants to believe in something, they can believe it even to their own detriment and will not seek out information that may contradict their beliefs (I was fascinated by the tales of COVID patients denying that COVID exists even while dying from it in an ICU). I witnessed this lack of curiosity in my own cohort that completed AGISF. We had more questions than answers at the end of the course and never really settled anything during our meetings other than minor definitions here and there, but despite that, some of the folks in my cohort went on to work, or try to work, on AI safety and solicit funding without either learning more about AI itself (some of them didn’t have much of a technical background) or trying to clarify their confusion about the arguments. I also know another fellow from the same run of AGISF who got funding as an AI safety researcher while knowing very little about how AI actually works. They are all very nice, amicable people, and despite all the conversations I’ve had with them, they don’t seem open to the idea of changing their beliefs even when there are a lot of holes in their positions and you directly point those holes out to them. In what other contexts are people not open to the idea of changing their beliefs, other than in religious or other superstitious contexts? Well, the other case I can think of is when holding a certain belief is tied to having an income, reputation or something else that is valuable to a person. This is why a conflict of interest at the source of funding pushing a certain belief is so pernicious: it really can affect beliefs downstream.
I’d still be grateful if you could post a link to the best argument (according to your own impression) by some well-respected scholar against AGI risk. If there are “loads of arguments”, this shouldn’t be hard. Somebody asked for something like that here, and there aren’t so many convincing answers, and no answers that would basically end the cause-area comprehensively and authoritatively.
I think so—see footnote 2 of the LessWrong post linked above.
Asking people for arguments is often one of the best ways to look for differing perspectives, in particular if these people have strongly implied that plenty of such arguments exist.
That this “known human characteristic” strongly applies to people working on AI safety is, up to now, nothing more than a claim.
I share that fascination. In my impression, such COVID patients have often previously dismissed COVID as a kind of quasi-religious death cult, implied that worrying about catastrophic risks such as pandemics is nonsense, and claimed that no arguments would convince the devout adherents of the ‘pandemic ideology’ of the incredulity of their beliefs.
Therefore, debating in this style only seems helpful when you have already formed a strong opinion as to which side is right, since you can always just claim that the other side’s reasoning is motivated by religion/ideology/etc. Otherwise, the arguments seem like Bulverism.
I don’t work in AI Safety, I am not active in that area, and I am happy when I get arguments that tell me I don’t have to worry about things. So I can guarantee that I’d be quite open to such arguments. And given that you imply that the only reason these nice people still want to work in AI Safety is that they are quasi-religious or otherwise biased, I am looking forward to your object-level arguments against the field of AI Safety.
Here are a couple of links:
What does it mean to align AI with human values?
The implausibility of intelligence explosion
Sorry, but I’m not going to do your homework for you. If you want to find arguments for or against AI safety, go look for them yourself. If you want to find out what leading AI researchers actually think, you can find that as well. I have no special insight over the many people who have expertise in the field of AI, so I am not the best source and my conclusions could be wrong. I’m still learning more all the time as I increase my expertise in AI. If you have done your homework and have come to the conclusion that AI safety as a field is warranted, then well and good. If you are looking for someone who will argue with you in order to convince you one way or the other, then I hope someone is willing to do that for you. Either way, good luck!
If you don’t want to justify your claims, that’s perfectly fine; no one is forcing you to discuss in this forum. But if you do, please don’t act as if it’s my “homework” to back up your claims with sources and examples. I also find it inappropriate that you throw around many accusations like “quasi religious”, “I doubt there is any type of argumentation that will convince the devout adherents of the ideology of the incredulity of their beliefs”, “just prone to conspiracy theories like QAnon”, while at the same time you are unwilling or unable to name any examples of “what experts in the field think about what AI can actually do”.
Are you sure it was that course?!
Doesn’t sound very like it to me.
Yup very sure. AGI Safety Fundamentals by Cambridge.
Ok, well I took that course, and it most definitely did not have that kind of content in it (can you link to a relevant quote?). Better to think of the AI as an unconscious (arbitrary) optimiser, or even an indifferent natural process. There is nothing religious about AI x-risk.
To be clear, I was making an analogy about what the claims look like, not saying that it is written explicitly that way. I see implicit claims of omnipotence and omniscience of a superintelligent AI from the very first [link](https://intelligence.org/2015/07/24/four-background-claims/) in the curriculum. Claims 2-4 of that post are just beliefs, not testable hypotheses that can be proven or disproven through scientific inquiry.
The whole field of existential risk is made up of hypotheses that aren’t “testable”, in that there would be no one there to read the data in the event of an existential catastrophe. This doesn’t mean that there is nothing useful that we can say (or do) about existential risk. Regarding AI, we can use lines of scientific evidence and inference based on them (e.g. evolution of intelligence in humans etc). The post you link to provides some justifications for the claims it makes.
The justifications made in that post are weak in proportion to the claims made, IMO, but I’m just a simple human with very limited knowledge and reasoning capability, so I am most likely wrong in more ways than I could ever fully comprehend. You seem like a more capable human who is able to think about these types of claims a lot more clearly and understand the arguments much better. Given that argumentation is the principal determinant of how people in industry make products, and as a by-product the primary determinant of technological development for something like AI, I have full confidence that the type of inferences you allude to will have very strong predictive value as to how the future unfolds when it comes to AI deployment. I hope you and your fellow believers are able to do a lot of useful things about existential risk from AI based on your accurate and infallible inferences and save humanity. If it doesn’t work out, at least you will have tried your best! Good luck!
No one is saying that their inferences are “infallible” (and pretty much everyone I know in EA/AI Safety is open to changing their minds based on evidence and reason). We can do the best we can; that is all. My concern is that that won’t be enough, and there won’t be any second chances. Personally, I don’t value “dying with dignity” all that much (over just dying). I’ll still be dead. I would love it if someone could make a convincing case that there is nothing to worry about here. I’ve not seen anything close.