I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
Jason
But a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable to me.
It’s complicated, I think. Based on your distinguishing (a) and (b), I am reading “salary sacrifice” as voluntarily taking less salary than was offered for the position you encumber (as discussed in, e.g., this post). While I agree that should count, I’m not sure (b) is not relevant.
The fundamental question to me is about the appropriate distribution of the fruits of one’s labors (“fruits”) between altruism and non-altruism. (Fruits is an imperfect metaphor, because I mean to include (e.g.) passive income from inherited wealth, but I’ll stick with it.)
We generally seem to accept that the more fruit one produces, the more (in absolute terms) it is okay to keep for oneself. Stated differently—at least for those who are not super-wealthy—we seem to accept that the marginal altruism expectation for additional fruits one produces is less than 100%. I’ll call this the “non-100 principle.” I’m not specifically defending that principle in this comment, but it seems to be assumed in EA discourse.
If we accept this principle, then consider someone who was working full-time in a “normal” job and earning a salary of 200 apples per year. They decide to go down to half-time (100-apple salary) and spend the other half of their working hours producing 100 charitable pears for which they receive no financial benefit.[1] The non-100 principle suggests that it’s appropriate for this person to keep more of their apples than a person who works full-time to produce 100 apples (and zero pears). Their total production is twice as high, so they aren’t similarly situated to the full-time worker who produces the same number of apples. The decision to take a significantly less well-paid job seems analogous to splitting one’s time between remunerative and non-remunerative work. One gives up the opportunity to earn more salary in exchange for greater benefits that flow to others by non-donation means.
I am not putting too much weight on this thought experiment, but it does make me think that either the non-100 principle is wrong, or that the foregone salary counts for something in many circumstances even when it is not a salary sacrifice in the narrower sense.
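One rough way to formalize this (just a sketch in my own notation, restating the thought experiment above rather than adding anything new):

```latex
% Sketch notation (mine only):
%   f    = total fruit produced per year (apples + charitable pears)
%   A(f) = altruism expectation: how much of f one is expected to give away
%   K(f) = f - A(f): how much it is okay to keep

% The non-100 principle says the marginal altruism expectation is below 100%:
\[
  A'(f) < 1 \quad\Longleftrightarrow\quad K(f) = f - A(f)\ \text{is increasing in } f .
\]

% Applied to the thought experiment (200 = 100 apples + 100 pears vs. 100 apples alone):
\[
  K(200) > K(100),
\]
% so the half-time apple/pear producer may permissibly keep more of their 100 apples
% than the full-time producer of 100 apples, capped at the 100 apples actually received.
```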
- ^
How to measure pear output is tricky. The market rate for similar work in the for-profit sector may be the least bad estimate here.
- ^
Tagging @Austin since his comments are the main focus of this quick take
That analysis would be more compelling if the focus of the question were on a specific individual or small group. But, at least as I read it, the question is about the giving patterns of a moderately numerous subclass of EAs (working in AI + “earning really well”) relative to the larger group of EAs.
I’m not aware of any reason the dynamics you describe would be more present in this subclass than in the broader population. So a question asking about subgroup differences seems appropriate to me.
No—that something is unsurprising, even readily predictable, does not imply anything about whether it is OK.
The fact that people seem surprised by the presence of corpspeak here does make me concerned that they may have been looking at the world with an assumption that “aligned” people are particularly resistant to the corrosive effects of money and power. That, in my opinion, is a dangerous assumption to make—and is not one I would find well-supported by the available evidence. Our models of the world should assume that at least the significant majority of people will be adversely and materially influenced by exposure to high concentrations of money and power, and we need to plan accordingly.
Power (and money) corrupts.
What surprises me about this whole situation is that people seem surprised that the executive leadership at a corporation worth an estimated $61.5B would engage in big-corporation PR-speak. The base rate for big-corporation execs engaging in such conduct in their official capacities seems awfully close to 100%. Hence, it does not feel like anything to update on for me.
I’m getting the sense that a decent number of people assume that being “EA aligned” is somehow a strong inoculant against the temptations of money and power. Arguably the FTX scandal—which after all involved multiple EAs, not just SBF—should have already caused people to update on how effective said inoculant is, at least when billions of dollars were floating around.[1]
- ^
This is not to suggest that most EAs would act in fraudulent ways if surrounded by billions of dollars, but it does provide evidence that EAs are not especially resistant to the corrosive effects of money and power at that level of concentration. FTX was only one cluster of people, but how many people have been EAs first and then been exposed to the amount of money/power that FTX or Anthropic had/have?
EA-aligned equity from Anthropic might well be worth $5-$30B+,
Now that you mention this, I think it’s worth flagging the conflict of interest between EA and Anthropic that it poses. Although it’s a little awkward to ascribe conflicts of interest to movements, I think a belief that ideological allies hold vast amounts of wealth in a specific company—especially combined with a hope that such allies will use said wealth to further the movement’s objectives—qualifies.
There are a couple of layers to that. First, there’s a concern that the financial entanglement with Anthropic could influence EA actors, such as by pulling punches on Anthropic, punching extra-hard on OpenAI, or shading policy proposals in Anthropic’s favor. Relatedly, people may hesitate to criticize Anthropic (or make policy proposals hostile to it) because their actual or potential funders have Anthropic entanglements, whether or not the funders would actually act in a conflicted manner.
By analogy, I don’t see EA as a credible source on the virtues and drawbacks of crypto or Asana. The difference is that neither crypto nor management software are EA cause areas, so those conflicts are less likely to impinge on core EA work than the conflict regarding Anthropic.
The next layer is that a reasonable observer would discount some EA actions and proposals based on the COI. To a somewhat informed member of the general public or a policymaker, I think establishing the financial COI creates a burden shift, under which EA bears an affirmative burden of establishing that its actions and proposals are free of taint. That’s a hard burden to meet in a highly technical and fast-developing field. And some powerful entities (e.g., OpenAI) would be incentivized to hammer on the COI if people start listening to EA more.
I’m not sure how to mitigate this COI, although some sort of firewall between funders with Anthropic entanglements and grantmakers might help some.
(In this particular case, how Anthropic communicates about EA is more a meta concern, and so I don’t feel the COI in the same way I would if the concern about Anthropic were at the object level. Also, being composed of social animals, EA cares about its reputation for more than instrumental reasons—so to the extent that there is a pro-Anthropic COI, that reputational concern may largely counteract it. However, I still think it’s generally worth explicitly raising and considering the COI where Anthropic-related conduct is being considered.)
(your text seems to cut off at the end abruptly, suggesting a copy/paste error or the like)
I agree with this being weird / a low blow in general, but not in this particular case. The crux with your footnote may be that I see this as more than a continuum.
First, I think someone’s interest in private communications becomes significantly weaker as they assume a position of great power over others, conditioned on the subject matter of the communication being a matter of meaningful public interest. Here, I think an AI executive’s perspective on EA is a matter of significant public interest.
Second, I do not find a wedding website to be a particularly private form of communication compared to (e.g.) a private conversation with a romantic partner. Audience in the hundreds, no strong confidentiality commitment, no precautions to prevent public access.
The more power the individual has over others, the wider the scope of topics that are of legitimate public interest for others to bring up, and the narrower the scope of communications that it would be weird / a low blow to cite. So what applies to major corporate CEOs with significant influence over the future would not generally apply to most people.
Compare this to paparazzi, who hound celebrities (who do not possess CEO-level power) for material that is not of legitimate public interest, and often under circumstances in which society recognizes particularly strong privacy rights.
I’m reminded of the NBA basketball-team owner who made some racist basketball-related comments to his affair partner, who leaked them. My recollection is that people threw shade on the affair partner (who arguably betrayed his confidences), but few people complained about showering hundreds of millions of dollars’ worth of tax consequences on the owner by forcing the sale of his team against his will. Unlike comments to a medium-size audience on a website, the owner’s comments were particularly private (to an intimate figure, 1:1, protected from non-consensual recording by criminal law).
Most of the writeup about branding seems focused on communications and storytelling. Is the strategic plan focused on these types of activities?
That observation should not be taken as necessarily implying a criticism, as I think it could be the right approach. One might conclude, for instance, that the communications and storytelling angle is pretty tractable and/or neglected in comparison to other brand issues. But the term branding can be seen as broader in focus, so I think clarity on scope would be helpful here.
Suppose one were managing the brand of a hotel chain consisting mostly of franchisees. It seems to me that there are two sides of the branding coin there. First, hotel brands have brand standards intended to create the brand. These are often very detailed on issues like room size, toiletries, staffing, free cookie at check-in, etc. The brand enters into contracts with franchisees to enforce those brand standards (although enforcement is often iffy, if travel bloggers are to be believed). Then they need to effectively communicate the brand they have created to potential customers. This week’s writeup seems focused on the second step of the process.
Brand management of EA is doubtless harder than managing a hotel chain, but one could imagine identifying the pain points in the current EA brand and developing brand standards to address those points. One challenge is that (at least many) brand standards need to be observable by the public, or at least by intermediaries trusted by the public. For instance, brand standards adopted in response to FTX need to persuade the public—not just “EA leadership” or EAs in general—that anyone who turned a blind eye to warnings about SBF has been appropriately dealt with.[1] Credible brand standards to address racism would need to point to more concrete, public actions than (e.g.) referral to Community Health for possible placement on what may come across to the public as double-secret probation.
I don’t see much about the brand-standards side of the coin in this writeup. I saw some of the linked EAG talks, but they are at a 10,000-foot level. That’s of course understandable for an EAG talk!
I can think of a number of reasons why there might be less focus on the non-communication side of branding here, such as:
Brand standards may just be outside the scope of this particular writeup.
CEA may think detailed brand standards are inconsistent with what EA is, or are too costly for practical reasons.
CEA may think existing brand standards are fine (or at least not a limiting factor) and are followed, just not communicated effectively to the public.
CEA may believe that there isn’t clear community consensus about what semi-detailed brand standards should be, and CEA doesn’t have the power to enforce its own views on the subject on the community.
CEA may believe that, even if there is (and could be) community consensus about brand standards, CEA has little to no power to actually enforce said standards against people or entities who do not wish to comply.
CEA may believe that any vigorous attempt to enforce brand standards could imply acquiescence in the problems for which no formal enforcement action was taken.
Again, I’m not taking a position at this point on how much CEA can or should try to shape what the EA brand is vs. limiting itself to more effective brand communication.
- ^
“Turned a blind eye” and “appropriately dealt with” are intentionally vague because I’m not trying to reignite that conversation here beyond asserting that “turned a blind eye” is not limited to actual knowledge of the nature of SBF’s fraud.
Thanks for writing this out. I think it’s important to keep in mind that there’s a significant difference in lived experience between the median human being on this planet and the median EA.
As far as hype goes: AI might or might not be hype. The question is whether we can accept the risk of it being not-hype. Even if development plateaus in the near future, AI is already powerful enough to have significant effects on (e.g.) world economies. I’d submit that we especially need non-Western perspectives in thinking about how AI will affect the lives of people in developing countries (cf. the discussion here). In my view, there’s a tendency in EA/EA-adjacent circles to assume technological progress will lift all boats, rather than considering that people have used technological advances throughout history to support their positions of power and privilege.
To be fair to 80K here, it is seeking to figure out where the people it advises can have the most career impact on the margin. That’s not necessarily the same question as which areas are most important in the abstract. For example, someone could believe that climate change is the most important problem facing humanity right now, but nevertheless believe that progress on climate change is bottlenecked by something other than new talent (e.g., money), and/or that there is enough recruitment for people to work on climate change to fill the field’s capacity with excellent candidates without any work on 80K’s part. So I’d encourage you to consider refining your critique to also address how likely it is that devoting the additional resources in question to your preferred cause area(s) would make a difference.
I think there’s a range of approaches one could take to career advice, ranging (for lack of better terms) from client-centered counseling to advocacy-focused recruiting. Once an advisor has decided where on the continuum they want to be, I think your view that it is “far better to be honest about this and let people make informed decisions” follows. But I think the decision about transparency comes only after the decision about how much to listen to client-advisees vs. how much to attempt to influence them.
It is not inconsistent for an advisor to personally believe X but be open to a range of V . . . Z when conducting advising. For example, most types of therapists are supposed to be pretty non-directive; not allowing one’s views to shine too brightly to one’s therapy client is not an epistemic defect.
To be sure, 80K has never been strongly into a client-centered counseling model, nor should it have been. The end goal isn’t to benefit the client, and opera and many other things have never been on the table! But the recent announcement seems to be a move away from what physicians might analogize to a shared decisionmaking model toward a narrower focus on the roles that are maximally impactful in the organization’s best judgment. There are upsides and downsides to that shift.
But one could also reason:
(1) There should be (at least) one EA org focused on AI risk career advice; it is important that this org operate at a high level at the present time.
(2) If there should be such an org, it should be—or maybe can only be—80K; it is more capable of meeting criterion (1) quickly than any other org that could try. It already has staff with significant experience in the area and organizational competence to deliver career advising services with moderately high throughput.
(3) Thus, 80K should focus on AI risk career advice.
If one generally accepts both your original three points and these three, I think one is left with a tradeoff to make, focusing on questions like:
If both versions of statement (1) cannot be fulfilled in the next 1-3 years (i.e., until another org can sufficiently plug whichever hole 80K didn’t fill), which version is more important to fulfill during that time frame?
Given the capabilities and limitations of other orgs (both extant and potential future), would it be easier for another org to plug the AI-focused hole or the general hole?
I have a complicated reaction.
1. I think @NickLaing is right to point out that there’s a missing mood here and to express disappointment that it isn’t being sufficiently acknowledged.
2. My assumption is that the direction change is motivated by factors like:
A view of AI as a particularly time-sensitive area right now vs. areas like GHD often having a slower path to marginal impact (in part due to the excellence and strength of existing funding-constrained work).
An assumption that there are / will be many more net positions to fill in AI safety for the next few years, especially to the extent one thinks that funding will continue to shift in this direction. (Relatedly, one might think there will be relatively few positions to fill in certain other cause areas.)
I would suggest that these kinds of views and assumptions don’t imply that people who are already invested in other cause areas should shift focus. People who are already on a solid path to impact are not, as I understand it, 80K’s primary target audience.
3. I’m generally OK with 80K going in this direction if that is what its staff, leadership, and donors want. I’ve taken a harder-line stance on this sort of thing to the extent that I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups)—in which case I think there’s an enhanced obligation to share the commons. Here, there’s nothing inherently near-monopolistic about career advising (cf. Probably Good and Animal Advocacy Careers, which operate in analogous spaces). I would expect the new 80K to make at least passing reference to the existence of other EA career advice services for those who decide they want to work in another cause area. Thus, to the extent that there are advisors interested in giving advice in these areas, advisees interested in receiving that advice, and funders interested in supporting those areas, there’s no clear reason why alternative advisors would not fill the gap left by 80K here. I’d like to have seen more lead time, but I get that the situation in AI is rapidly evolving and that this is a reaction to external developments.
4. I think part of the solution is to stop thinking of 80K as (quoting Nick’s comment) “one of the top 3 or so EA orgs” in the same sense one might have considered it before this shift. Of course, it’s an EA org in the same sense that (e.g.) Animal Advocacy Careers is an EA org, but after today’s announcement it shouldn’t be seen as a broad-tent EA org in the same vein as (e.g.) GWWC. Therefore, we should be careful not to read a shift in the broader community’s cause prio into 80K’s statements or direction. This may change how we interact with it and defer (or not) to it in the future. For example, if someone wants to point a person toward broad-based career advice, Probably Good is probably the most appropriate choice.
5. I too am concerned about the EA funnel / onramp / tone-setting issues that other EAs have written about, but don’t have much to add on those.
It might be helpful to clarify what you mean by “moral hazard” here.
Personally, I’m optimistic that this could be done in specific ways that could be better than one might initially presume. One wouldn’t fund “CEA”—they could instead fund specific programs in CEA, for instance. I imagine that people at CEA might have some good ideas of specific things they could fund that OP isn’t a good fit for.
That may be viable, although I think it would be better for both sides if these programs were housed not in CEA but in an independent organization. For the small-donor side, that limits the risk that their monies will just funge against OP/GV’s, or that OP/GV will influence how the community-funded program is run (e.g., through its influence on CEA management officials). On the OP/GV side, organizational separation is probably necessary to provide some of the reputational distance it may be looking for. That being said, given that small and medium donors have never, to my knowledge, been offered this kind of opportunity, and given the significant coordination obstacles involved, I would not read much into their not having taken it.
~
More broadly, I think this is a challenging conversation without nailing down the objective better—and that may be hard for us on the Forum to do. Without any inside knowledge, my guess is that OP/GV’s concerns are not primarily focused on the existence of discrete programs “that OP isn’t a good fit for” or a desire not to fund them.
For example, a recent public comment from Dustin contained the following sentence: “But I can’t e.g. get SBF to not do podcasts nor stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID.” The concerns implied by that statement aren’t really fixable by the community funding discrete programs, or even by shelving discrete programs altogether. Not being the flagship EA organization’s predominant donor may not be sufficient for getting reputational distance from that sort of thing, but it’s probably a necessary condition.
I speculate that other concerns may be about the way certain core programs are run—e.g., I would not be too surprised to hear that OP/GV would rather not have particular controversial content allowed on the Forum, or have advocates for certain political positions admitted to EAGs, or whatever. I’m not going to name the content I have in mind in an attempt not to be drawn into an object-level discussion on those topics, but I wouldn’t want my own money being used to platform such content or help its adherents network either. Anyway, these types of issues can probably be fixed by running the program with community/other-donor funding in a separate organization, but these programs are expensive to run. And the community / non-OP/GV donors are not a monolithic constituency; I suspect that at least a significant minority of the community would share OP/GV’s concerns on the merits.
Lastly, I’d flag that CEA being 90% OP/GV funded really can be quite different than 70% in some important ways, still. For example, if OP/GV were to leave—then CEA might be able to go to 30% of its size—a big loss, but much better than 10% of its size.
I agree—the linked comment was focused more on the impact of funding diversity on conflicts of interest and cause prio. But the amount of smaller-EA-donor dollars to go around is limited,[1] and so we have to consider the opportunity cost of diverting them to fund CEA or similar meta work on an ongoing basis. OP/GV is usually a pretty responsible funder, so the odds of them suddenly defunding CEA without providing some sort of notice and transitional funding seem low.
- ^
For instance, I believe GWWC pledgers gave about $32MM/year on average from 2020-2022 [p. 12 of this impact assessment], and not all pledgers are EAs.
I’d love to see some other EA donors and community members step up here. I think it’s kind of damning how little EA money comes from community members or sources other than OP right now. Long-term this seems pretty unhealthy.
There was some prior relevant discussion in November 2023 in this CEA fundraising thread, such as my comment here about funder diversity at CEA. Basically, I didn’t think that there was much meaningful difference between a CEA that was (e.g.) 90% OP/GV funded vs. 70% OP/GV funded. So I think the only practical way for that percentage to move enough to make a real difference would be both an increase in community contributions/control and CEA going on a fairly severe diet.
As for EAIF, expected total grantmaking was ~$2.5MM for 2025. Even if a sizable fraction of that went to CEA, it would only be perhaps 1-2% of CEA’s 2023 budget of $31.4MM.
I recall participating in some discussions here about identifying core infrastructure that should be prioritized for broad-based funding for democratic and/or epistemic reasons. Identifying items in the low millions for more independent funding seems more realistic than meaningful changes in CEA’s funding base. The Forum strikes me as an obvious candidate, but a community-funded version would presumably need to run on a significantly leaner budget than I understand to be currently in place.
You’re not missing anything!
Cause Prioritization. Does It Ignore Political and Social Reality?
People should be factoring in the risk of waste, fraud, or mismanagement, as well as the risk of adverse leadership changes, into their cost-effectiveness estimates. That being said, these kinds of risks exist for most potential altruistic projects one could envision. If the magnitude of the risk (and the consequences of the fraud etc.) are similar between the projects one is considering, then it’s unlikely that consideration of this risk will affect one’s conclusion.
EA encourages donations where impact is highest, which often means low-income countries. But what happens when you live in one of those countries? Should I still prioritize problems elsewhere?
I think this is undertheorized in part because EA developed in, and remains focused on, high-income countries. It also developed in a very individualistic culture.
EA implicitly tells at least some members of the global top 1% that it’s OK to stay rich as long as they give a meaningful amount of their income away. If it’s OK for me to keep ~90% of my income for myself and my family, then it’s hard for me to see how it wouldn’t be OK for a lower-income community to keep virtually all of its resources for itself. So given that, I’d be pretty uncomfortable with there being an “EA party line” that moderately low-income communities should send any meaningful amount of their money away to even lower-income communities.
Maybe one could see people in lower-income areas giving money to even lower-income areas as behaving in a supererogatory fashion?
I would generally read EA materials through a lens of the main target audience being relatively well-off people in developed countries. That audience generally isn’t going to have local knowledge of (often) smaller-scale, highly effective things to do in a lower-income country. Moreover, it’s often not cost-effective to evaluate smaller projects thoroughly enough to recommend them over the tried-and-true projects that can absorb millions in funding. You, however, might have that kind of knowledge!
Amartya Sen (Development as Freedom) says that well-being isn’t just about cost-effectiveness, it’s about giving people the capability to sustain improvements in their lives. That makes me wonder: Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?
I think that’s a fair question. However, in current EA global health & development work, the primary intended beneficiaries of classic GiveWell-style work are children under age 5 who are at risk of dying from malaria or other illnesses. Someone else has to speak for them as a class, and I don’t think toddlers can have well-being in the broader sense you describe. Moreover, the classic EA GH&D program is pretty narrow—such as a few dollars for a bednet—so EA efforts generally remove local control over only a very small fraction of all the resources spent on the child beneficiary’s welfare.
All that makes me somewhat less concerned about potential paternalism than I would be if EAs were commonly telling adult beneficiaries that they knew better about the beneficiary’s own interest than said beneficiaries, or if EAs controlled a significant fraction of all charitable spending and/or all spending in developing countries.
Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?
It depends on what exactly “losing the AI arms race” means, which is in turn influenced by how big the advantages of being first (or one of the first) to AGI are. If the mission was to “advance digital intelligence,” and it was widely understood that the mission involved building AGI and/or near-AGI, that would seem to imply some sort of technological leadership position was prerequisite to mission success. I agree that being first to AGI isn’t particularly relevant to succeeding at the mission. But if they can’t stay competitive with Google et al., it’s questionable whether they can meaningfully achieve the goal of “advanc[ing] digital intelligence.”
So for instance, if OpenAI’s progress rate were to be reduced by X% due to the disadvantages in raising capital it faces on account of its non-profit structure, would that be enough to render it largely irrelevant as other actors quickly passed it and their lead grew with every passing month? I think a lot would depend on what X% is. A range of values seem plausible to me; as I mentioned in a different comment I just submitted, I suspect that fairly probative evidence on OpenAI’s current ability to fundraise with its non-profit structure exists but is not yet public.
(I found the language you quoted going back to 2015, so it’s probably a fair characterization of what OpenAI was telling donors and governmental agencies at the beginning.)
OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. [ . . . .] The fact is that this far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of claims without proof that this cannot continue.
Yes, that’s the counterargument. I submit that there is likely to be pretty relevant documentary and testimonial evidence on this point, but we don’t know which way it would go. So I don’t have any clear opinion on whether OpenAI’s argument would work and/or how much these kinds of concerns would shape the scope of injunctive relief.
OpenAI agreed to terms that I would almost characterize as a poison pill: if the transformation doesn’t move forward on time, the investors can get that $6.6B back. It may be that would-be investors were not willing to put up enough money to keep OpenAI going without a commitment to refund if the non-profit board were not disempowered. As you mentioned, corporations exaggerate the detrimental impact of legal requirements they don’t like all the time! But the statements and actions of multiple, independent third-party investors should be less infected on this issue. If an inability to secure adequate funding as a non-profit is what this evidence points toward, I think that would be enough to establish a prima facie case and require proponents of continued non-profit control to put up evidence of their own to rebut that case.
So who will make that case? It’s not clear Musk will assert that OpenAI can stay competitive while remaining a non-profit; his expression of a desire “[o]n behalf of a consortium of buyers,” “to acquire all assets . . . of OpenAI” for $97,375,000,000 (Order at 14 n.10) suggests he may not be inclined to advocate for OpenAI’s ability to use its own assets to successfully advance its mission.
There’s also the possibility that the court would show some deference on this question to the business judgment of OpenAI’s independent board members if people like Altman and Brockman were screened off enough. It seems fairly clear to me that everyone understood early on there would need to be some for-profit elements in the mix, and so I think the non-conflicted board members may get some benefit of the doubt in figuring that out.
To the extent that evidence from the recent fundraising cycle supports the risk-of-fatal-damage theory, I suspect the relevance of fundraising success that occurred prior to the board controversy may be limited. I think it would be reasonable to ascribe lowered funder willingness to tolerate non-profit control to that controversy.
I understand your frustration here, but EAs may have decided it was better to engage in pro-democracy activity in a non-EA capacity. One data point: the pre-eminent EA funder was one of the top ten donors in the 2024 US election cycle.
Or they may have decided that EA wasn’t a good fit for this kind of work for any combination of a half-dozen reasons, such as:
the EA brand could be ill-suited or even detrimental to this kind of work, either due to FTX or its association with a tech billionaire who made a lot of money on a platform that many believe to be corrosive of democracy;
the EA “workforce” isn’t well suited to this kind of work;
there are plenty of actors working in these spaces already, and there was no great reason to think that EAs would be more effective than those actors;
being seen as too political would impose heavy costs on other EA cause areas, especially AI policy—and “anti-authoritarian” is not non-partisan in 21st century America.
I don’t think it is necessary to rule out all possible alternative explanations before writing a critical comment. However, if you’re going to diagnose what you perceive as the root cause—“You wanted to settle for the ease of linear thinking”—I think it’s fair for us to ask for either clear evidence or a rule-out of alternative explanations.
As David points out, there have been a number of posts about democracy and elections, such as this analysis about the probability that a flipped vote in a US swing state would flip the presidential election outcome. I recall some discussion of the cost-per-vote as well. There are a lot of potential cause areas out there, and limited evaluative capacity, so I don’t think the evaluations being relatively shallow is problematic. That they did not consider all possible approaches to protecting/improving democracy is inevitable. I think it’s generally okay to place the burden of showing that a cause area warrants further investigation on proponents, rather than those proponents expecting the community to do a deep dive for them.