FWIW, I would enjoy more opportunities to organize events and conferences, and manage operations teams.
My understanding of how EA typically responds to anti-capitalist critiques of EA:
EAs are quite split on capitalism: a significant minority aren't fans of it, and the majority think (very) significant reforms or regulations of the free market in some form are justified.
The biggest difference on economics between EA and left-wing political movements is that EA sees worldwide market liberalization as a main source, or the main source, of the increase in quality of life and material standard of living, and of an unprecedented decrease in absolute global poverty, over the last several decades. So EAs are likelier than most other left-leaning crowds to have confidence in free(r)-market principles as fundamentally good.
Lots of EAs see their participation in EA as the most good they can do with their private/personal efforts, and many are also quite active in politics, often left-wing politics, as part of the good they do with their public/political efforts. So, while effective giving/altruism is the most good one can do with some resources, like one's money, other resources, like one's time, can be put towards efforts aimed at systemic change. Whenever I've seen this pointed out, the distinction has mysteriously always been lost on anti-capitalist critics of EA. If there is a different and more important point they're trying to make, I'm missing it.
A lot of EAs make the case that the kind of systemic change they are pursuing is what they think is best. This includes typical EA efforts, like donating to GiveWell-recommended charities. The argument is that these interventions are based on robust empirical evidence and are demonstrably cost-effective, such that they improve the well-being of people in undeveloped or developing countries, and those people's subsequent ability to autonomously pursue systemic change in their own societies. There are also a lot of EAs focused on farm animal welfare, which they believe is the most radically important form of systemic change they can focus on. As far as I'm aware, there are no significant or prominent public responses to these arguments from a left-wing perspective. Any such sources would be appreciated.
A lot of anti-capitalist criticism of EA concerns how it approaches the eradication of extreme global poverty. In addition to not addressing EA's arguments for how its current efforts aim at effecting systemic change in the world's poorer/poorest countries, anti-capitalist critics haven't offered up much in the way of concrete, fleshed-out, evidence-based approaches to systemic change that would motivate EA to adopt them.
Anti-capitalist critics are much likelier than EAs to see the wealth redistributed through private philanthropy as having been accumulated unjustly and/or through exploitative means. Further, they're likelier than most of the EA community to see relative wealth inequality within a society as a fundamentally more important problem, and thus to see directly redressing it as a fundamentally higher priority. Because of these different background assumptions, they're likelier to perceive EA's typical approaches to doing the most good as insufficiently supportive of democracy and egalitarianism. As a social movement, EA is much more like a voluntary community of people who contribute resources privately available to them than it is a collective political effort. A lot of EAs are active in political activity aimed at systemic change, publicly do so as part and parcel of their EA motivations, and not only permit but actively encourage public organization and coordination of these efforts among EAs and other advocates/activists. That anti-capitalist critics haven't responded to these points seems to hinge on how they haven't engaged with the distinction between the use of personal/private resources and public/political resources.
There isn’t much more EA can do to respond to anti-capitalist critics until anti-capitalist critics broach these subjects. The ball is in their court.
Anecdotally, I'd say I know several EAs who have shifted in the last few years from libertarianism or liberalism to conservatism, and some of them have been willing to be vocal about this in EA spaces. However, just as many of them have exited EA because they were fed up with not being taken seriously. I'd estimate that of the dozens of EAs I know personally quite well, and the hundreds I'm more casually familiar with, 10-20% would count as 'conservative,' or at least 'right-of-centre.' Of course, this is a change from what was apparently zero representation for conservatives in EA before. Unfortunately, I can't provide more info, as conservatives in EA are not wont to publicly discuss their political differences with other EAs, because they don't feel their opinions are taken seriously or respected.
Upvoted for starting an interesting and probing conversation. I do have several nitpicks.
Perhaps the most common criticism of EA is that the movement does not collectively align with radical anticapitalist politics
Maybe I've just stopped paying attention to basic criticisms of EA along these lines, because every time EA's best responses to these criticisms are produced in an attempt at good-faith debate, the critics apparently weren't interested in a serious dialogue that could actually change EA. Yet in the last couple of years, while the absolute amount of anticapitalism has increased, I've noticed less criticism of EA on the grounds that it's not anticapitalist enough. I think EA has begun to have a cemented reputation as a community that is primarily left-leaning, and certainly welcomes anticapitalist thought, but won't on the whole mobilize towards anticapitalist activism, at least until anticapitalist movements themselves produce effective means of 'systemic change.'
An autistic rights activist condemned EA by alleging incompatibility between cost-benefit analysis and disability rights
I'm skeptical that friction between EA and actors who misunderstand so much has consequences bad enough to worry about, since I don't expect the criticism would be taken seriously enough by anyone else for it to have much of an impact at all.
Key EA philosopher Peter Singer has been viewed negatively by left-wing academia after taking several steps to promote freedom of speech (Journal of Controversial Ideas, op-ed in defense of Damore)
Key EA philosopher Peter Singer was treated with hostility by left-wing people for his argument on sex with severely cognitively disabled adults
Peter Singer has been treated with hostility by traditional conservatives for his arguments on after-birth abortion and zoophilia
I’m also concerned about the impact of Singer’s actions on EA itself, but I’d like to see more focused analysis exploring what the probable impacts of controversies around Singer are.
MacAskill’s interview with Joe Rogan provoked hostility from viewers because of an offhand comment/joke he made about Britain deserving punishment for Brexit
William MacAskill received pushback from right-wing people for his argument in favor of taking refugees
Ditto my concerns about controversies surrounding Singer for Will as well, although I am generally much less concerned with Will than Singer.
Useful x-risk researchers, organizations and ideas are frequently viewed negatively by leftists inside and outside academia
I know some x-risk reducers who think a lot of left-wing op-eds are beginning to create a sentiment in some relevant circles that a focus on 'AI alignment as an existential risk' is a pie-in-the-sky, rich-techie-white-guy concern about AI safety, and that more concern should be put on how advances in AI will affect issues of social justice. The worry is that diverting the focus of AI safety efforts away from how AGI poses an existential risk, towards what are perceived as more parochial concerns, could be grossly net negative.
Impacts on existential risk:
None yet, that I can think of
Depending on what one considers an x-risk, popular support for right-wing politicians who pursue counterproductive climate change or other anti-environmental policies, or who tend to be more hawkish, jingoistic, and nationalistic in ways that increase the chances of great-power conflict, negatively impacts x-risk reduction efforts. It's not clear this has a direct impact on any EA work focused on x-risks, though, which is the kind of impact you meant to assess.
Left-wing political culture seems to be a deeper, more pressing source of harm.
I understand you provided a caveat, but I think this take still misses a lot.
If you asked a lot of EAs, I think most of them would say right-wing political culture poses a deeper potential source of harm to EA than left-wing political culture. Left-wing political culture is only a more pressing source of harm because EA is disproportionately left-leaning, so the social networks EAs run in, and thus decision-making in EA, are more likely to be currently impacted by left-wing political culture.
It also misses what counts as 'left-wing political culture,' especially in Anglo-American discourse, as the left-wing landscape is rapidly and dramatically shifting. While most EAs are left-leaning, and a significant minority would identify with the socialist/radical/anti-capitalist/far-left basket, a greater number, perhaps a plurality, would identify as centre-left/liberal/neoliberal. From the political right, and from other angles, both these camps are 'left-wing.' Yet they're sufficiently different that when accuracy matters, as it does regarding EA, we should use more precise language to differentiate between centre-left/liberal and radical/anticapitalist/far-left 'left-wing political culture.' For example, in the U.S., it currently seems the 'progressive' political identity can apply to everyone from a neoliberal to a social democrat to a radical anticapitalist. On leftist forums I frequent, liberals are often labelled 'centrists' or 'right-wing,' and are perceived as having more in common with conservatives and moderates than with anti-capitalists.
Anecdotally, I would say the grassroots membership of the EA movement is more politically divergent, less moderate, and generally "to the left" of flagship EA organizations/institutions, in that I talk to a lot of EAs who feel EA is still too far to the right for their liking, and who actually agree with left-wing critics and wish EA would be much more in line with the changes those critics demand of us.
The concerns you raise in your linked post are actually concerns a lot of other people I have in mind have cited for why they don't currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I've talked to who don't share those priorities say they'd be open to shifting their priorities in that direction in the future, but currently have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now focusing more and more effort on the sources of unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future. That work is only beginning, though.
GiveWell's and Open Phil's work wasn't termed 'Cause X,' but I think a lot of the stuff you're pointing to would've started before 'Cause X' was a common term in EA. They definitely qualify. One thing to note is that GiveWell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue these things. So my contention that this kind of research is impractical for most organizations still holds up. It may be falsified in the near future, though. Aside from GiveWell and Open Phil, the organizations that can permanently focus on cause prioritization are:
institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
small, private non-profit organizations like Rethink Priorities.
Honestly, I am impressed and pleasantly surprised that organizations like Rethink Priorities can go from a small team to a growing organization in EA. Cause prioritization is such a niche cause, unique to EA, that I didn't know if there was hope for it to keep growing sustainably. So far, the growth of the field has proven sustainable. I hope it keeps up.
I just wanted to channel Aaron’s comment in clarifying the following:
While I don’t mind the characterization, I didn’t originally intend my comment as a kind of audit.
I was not under the impression the money had been disbursed yet, and it wasn’t ever my intention to criticize grantmaking decisions after disbursement, or to evaluate individual grant recommendations from this round in particular, only a general trend in the LTF Fund.
A lot of this is down to the private sensitivity many community members feel about publicly criticizing the Open Philanthropy Project. I'd chalk it up to the relative power Open Phil wields having complicated impacts on all our thinking on this subject: given how little the EA community comments on Open Phil's decisions, the lack of public feedback Open Phil receives seems out of sync with the idea that it's the sort of organization that would welcome it. Another issue is that the quality of both criticism and defense of grantmaking decisions is quite low. It seems to me EA has overgeneralized its conflict avoidance to exclude scenarios where adversarial debate or communication is fruitful for a community overall, so when adversarial debate is instrumental, EA is poor at it, to the point of not recognizing good debate.
A pattern I've seen is for critics of something in EA to parse disagreement with some aspect(s) of their criticism as a wholesale political rejection of everything they're saying, or to take it as a personal attack in retaliation for attacking a shibboleth of EA. These reactions are usually patently false, but this hasn't stopped EA from garnering a reputation for being hypocritically closed to criticism, and impossible to effect change in.
While I wouldn't say I generally agree with all of Open Phil's grants, and simply by chance most EAs or other people wouldn't either because there are so many, the impression I've gotten is that the EA community and Good Ventures don't have identical priorities. EA is primarily concerned with global poverty alleviation, AI alignment, and animal welfare. An example of something Open Phil or Good Ventures prioritizes more than EA does is criminal justice reform. While EA agrees criminal justice reform is one of the more promising areas of public policy for doing good, it's not literally one of EA's top priorities. So, criminal justice reform is a top priority more particular to Dustin Moskovitz and Cari Tuna.
My impression is that as long as the motivations in Open Phil's grantmaking don't pull away from effectiveness and other EA values in the cause areas the community cares most about, the community doesn't mind as much what Open Phil does. A good example of the EA community being willing to strongly criticize Open Phil when grantmaking it considers ineffective infringes on a cause area EA is more passionate about is the criticism Open Phil received from multiple quarters over how it made its grant to OpenAI.
Summary: This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year. I am measuring the performance of the EA Funds on the basis of what I am calling 'counterfactually unique' grant recommendations, i.e., grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded.
Based on that measure, 20 of 23 grant recommendations (87%), worth $673,150 of $923,150 (~73% of the money to be disbursed), are counterfactually unique. Having read all the comments, I found multiple concerns with a few specific grants, based on uncertainty or controversy in the estimation of the value of those grant recommendations. Even if we exclude those grants to make a 'conservative' estimate, 16 of 23 grant recommendations (~69.5%), worth $535,150 of $923,150 (~58% of the money to be disbursed), are counterfactually unique and fit a more conservative, risk-averse approach that would have ruled out the more uncertain or controversial successful grant applicants.
These numbers represent a very significant improvement over a year ago in the quality and quantity of unique grantmaking opportunities the Long-Term Future Fund has identified. This grant report generally succeeds at the goal of coordinating donations through the EA Funds to unique recipients who would otherwise have been overlooked for funding by individual donors and larger grantmakers. This report is also the most detailed of its kind, and creates an opportunity for a detailed assessment of the Long-Term Future Fund's track record going forward. I hope the other EA Funds emulate and build on this approach.
In his 2018 AI Alignment Literature Review and Charity Comparison, Ben Hoskins had the following to say about changes in the management structure of the EA Funds.
I’m skeptical this will solve the underlying problem. Presumably they organically came across plenty of possible grants – if this was truly a ‘lower barrier to giving’ vehicle than OpenPhil they would have just made those grants. It is possible, however, that more managers will help them find more non-controversial ideas to fund.
To clarify, the purpose of the EA Funds has been to allow individual donors relatively smaller than grantmakers like the Open Philanthropy Project (i.e., all donors in EA except other professional, private, non-profit grantmaking organizations) to fund higher-risk grants for projects that are still small enough that they would be missed by an organization like Open Phil. So, for a respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.
Of the $923,150 of grant recommendations made to the Centre for Effective Altruism for the EA Long-Term Future Fund this round of grantmaking, all but $250,000 went to the kinds of projects or organizations that tend to be missed by grantmakers like the Open Philanthropy Project. To clarify, there isn't a rule or practice of the EA Funds not making more conventional grants. It's at the discretion of the fund managers to decide whether they should recommend grants at a given time to more typical grant recipients in their cause area, or to newer, smaller, and/or less-established projects/organizations. At the time of this grantmaking round, recommendations to better-established organizations like MIRI, CFAR, and Ought were considered the best proportional use of the marginal funds allotted for disbursement.
20 grant recommendations (~87% of the total number) totalling $673,150 (~73% of the money)
+ 3 grant recommendations (~13% of the total number) totalling $250,000 (~27% of the money)
= 23 grant recommendations in total, totalling $923,150 (100%)
Since this is the most extensive round of grant recommendations from the Long-Term Future Fund to date under the EA Funds' new management structure, this is the best apparent opportunity for evaluating the success of the changes made to how the EA Funds are managed. In this round of grantmaking, 87% of the grant recommendations, totalling 73% of the money to be disbursed, were for efforts that would otherwise have been missed by individual donors or larger grantmaking bodies.
In other words, the Long-Term Future (LTF) Fund is directly responsible for 20 of the 23 grant recommendations made (87%), totalling 73% of the $923.15K to be disbursed, that presumably would not have been identified had individual donors not been able to pool and coordinate their donations through the LTF Fund. I keep highlighting these numbers because they can essentially be thought of as the LTF Fund's current rate of efficiency in fulfilling the purposes it was set up for.
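For anyone who wants to check the arithmetic, here is a minimal sketch in Python of how the 'liberal' estimate above is computed; all figures are the ones reported in this round, nothing else is assumed:

```python
# 'Liberal' estimate: treat every counterfactually unique grant
# recommendation in this round as effective.
total_grants, total_usd = 23, 923_150
unique_grants, unique_usd = 20, 673_150

print(f"Unique grants: {unique_grants}/{total_grants} = {unique_grants / total_grants:.0%}")
print(f"Unique funding: ${unique_usd:,} of ${total_usd:,} = {unique_usd / total_usd:.0%}")
# Unique grants: 20/23 = 87%
# Unique funding: $673,150 of $923,150 = 73%
```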
Criticisms and Conservative Estimates
Above is the estimate for the number of grants, and the amount of donations to the EA Funds, that are counterfactually unique to the EA Funds, which can be thought of as a measure of how effective the impact of the Long-Term Future Fund in particular is. That is the estimate for the grants donors to the EA Funds very probably could not have identified by themselves. Yet another question is whether they would opt to donate to the grant recommendations just made by the LTF fund managers. Part of the basis for the EA Funds thus far has been to trust the fund managers' individual discretion, based on their years of expertise or professional experience working in the respective cause area. My estimates above assume all the counterfactually unique grant recommendations the LTF Fund makes are indeed effective. We can think of those numbers as a 'liberal' estimate.
I've at least skimmed or read all 180+ comments on this post thus far, and a few persistent concerns with the grant recommendations have stood out. These were concerns that the evidence on which some grant recommendations were based wasn't sufficient to justify the grant, i.e., they were 'too risky.' If we exclude grant recommendations subject to multiple unresolved concerns, we can make a 'conservative' estimate of the percentage and dollar value of counterfactually unique grant recommendations made by the LTF Fund.
Concerns with 1 grant recommendation worth $28,000, to hand out printed copies of the fanfiction HPMoR to international math competition medalists.
Concerns with 2 grant recommendations worth $40,000, for individuals who are not currently pursuing one or more specific, concrete projects, but rather are pursuing independent research or self-development. The concern is that these grants are based on the fund manager's (managers'?) personal confidence in the individuals, and even the write-ups for the grant recommendations expressed concern about the uncertainty in the value of grants like these.
Concerns that multiple grants to similar forecasting-based projects would be redundant; in particular, concern with 1 grant recommendation worth $70,000 to the forecasting company Metaculus, which might be better suited to an equity investment in a startup than to a grant from a non-profit foundation.
In total, these are 4 grants worth $138,000 that multiple commenters have raised concerns with, on the basis that the uncertainty around these grants means the recommendations don't seem justified. To clarify, I am not making an assumption about what the value of these grants is. All I would say about these particular grants is that they are unconventional, but insofar as the EA Funds are intended to be a kind of index fund willing to back more experimental efforts, these projects fit within the established expectations of how the EA Funds are to be managed. Reading all the comments, the one helpful, concrete suggestion was for the LTF Fund to follow up with grant recipients in the future and publish its takeaways from the grants.
Of the 20 recommendations made for unique grant recipients worth $673,150, if we exclude these 4 recommendations worth $138,000, that leaves 16 of 23 recommendations (~69.5% of the total number), worth $535,150 of $923,150 (~58% of the total value), uniquely attributable to the EA Funds. Again, the grant recommendations excluded from this 'conservative' estimate are ruled out based on the uncertainty or lack of confidence expressed by commenters, not necessarily by the fund managers themselves. While presumably the value of any grant recommendation could be disputed, these are the only grant recipients for which multiple commenters have raised still-unresolved concerns so far. These grants are only now being made, so whether the fund managers' best hopes for the value of each of them will be borne out is something to follow up on in the future.
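The 'conservative' estimate is just the liberal estimate with the 4 contested grants subtracted; a minimal sketch, again in Python, using only the figures above:

```python
# 'Conservative' estimate: exclude the 4 contested grants worth $138,000
# (HPMoR copies, 2 independent research/self-development grants, Metaculus).
total_grants, total_usd = 23, 923_150
unique_grants, unique_usd = 20, 673_150
contested_grants, contested_usd = 4, 138_000

conservative_grants = unique_grants - contested_grants  # 16
conservative_usd = unique_usd - contested_usd           # $535,150

print(f"Conservative grants: {conservative_grants}/{total_grants} = {conservative_grants / total_grants:.1%}")
print(f"Conservative funding: ${conservative_usd:,} of ${total_usd:,} = {conservative_usd / total_usd:.0%}")
# Conservative grants: 16/23 = 69.6% (quoted as ~69.5% above)
# Conservative funding: $535,150 of $923,150 = 58%
```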
While these numbers don't address suggestions for how the management of the Long-Term Future Fund could still be improved, overall I would say they show the Fund has very significantly improved since last year at achieving a high rate of counterfactually unique grants to nascent or experimental projects that are typically missed in EA donations. With some of the suggested improvements, like hiring professional clerical assistance for managing the Fund, I think the Long-Term Future Fund is employing a successful approach to making unique grants. I hope the other EA Funds try emulating and building on this approach. The EA Funds are still relatively new, so measuring the track record of success of their grants remains to be done, but this report provides a great foundation for starting to do so.
If you don't mind me asking, what goal did you intend to achieve or accomplish with this comment?
This strikes me as a great, concrete suggestion. As I tell a lot of people, great suggestions in EA only go somewhere if someone does something with them. I would strongly encourage you to develop this suggestion into its own article on the EA Forum about how the EA Funds can be improved. Please let me know if you are interested in doing so, and I can help out. If you don't think you'll have time to develop this suggestion, please let me know as well, as I would be interested in doing it myself.
The way the management of the EA Funds is structured makes sense to me within the goals set for the EA Funds. So I think the one situation in which paying 2 people full-time for one month to evaluate EA Funds applications makes sense is one where 2 of the 4 volunteer fund managers take a month off from their other positions to evaluate the applications. Finding 2 people out of the blue to evaluate applications for one month, without continuity with how the LTF Fund has been managed, seems like it would be too difficult to accomplish effectively in the timeframe of a few months.
In general, one issue the EA Funds face that other granting bodies in EA don't is that the donations come from many different donors. This means that how much the EA Funds receive and distribute, and how it's distributed, is much more complicated than what the CEA or a similar organization typically faces.
One issue with this is that the fund managers are unpaid volunteers who have other full-time jobs, so being a fund manager isn't a "job" in the most typical sense, though of course a lot of people think it should be treated like one. When this came up in past discussions of how the EA Funds could be structured better, suggestions like hiring a full-time fund manager ran up against trade-offs with other priorities for the EA Funds, like not spending too much on overhead, or retaining the diversity of perspectives that comes with multiple volunteer fund managers.
I've always thought of "Cause X" as a theme for events like EAG, meant to prompt thinking in EA, and never intended as something to take seriously and literally in actual EA action. If it was intended to be that, I don't think it ever should have been, and I don't think it should be treated as such now. I don't see how it makes sense to anyone as a practical pursuit.
There have been some cause prioritization efforts that took 'Cause X' seriously. Yet given the presence of x-risk reduction in EA as a top priority, the #1 question has been to verify the validity and soundness of the fundamental assumptions underlying x-risk reduction as the top global priority. That's because, by its nature, whether x-risk reduction is or isn't the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind x-risk. For prioritizers willing to work within the boundaries in which the assumptions determining x-risk as the top moral priority are all true, cause prioritization has focused on how actors should be working on x-risk reduction.
Since the question was reformulated as "Is x-risk reduction Cause X?", much cause prioritization research has been reduced to research on questions in relevant areas of still-great uncertainty (e.g., population ethics and other moral philosophy, forecasting, etc.). As far as I'm aware, no other cause prioritization efforts have been predicated on the theme of 'finding Cause X.'
In general, I’ve never thought it made much sense. Any cause that has gained traction in EA already entails a partial answer to that question, along some common lines that arguably define what EA is.
While they’re disparate, all the causes in EA combine some form of practical aggregate consequentialism with global-scale interventions to impact the well-being of as large a population as feasible, within whatever other constraints one is working with. This is true of the initial cause areas EA prioritized: global poverty alleviation; farm animal welfare; and AI alignment. Other causes, like public policy reform, life extension, mental health interventions, wild animal welfare, and other existential risks, all fit with this framework.
It's taken for granted in EA conversations, but there are shared assumptions that go into this common perspective that distinguish EA from other efforts to do good. If someone disagrees with that framework, and has different fundamental assumptions about what is important, then they naturally sort themselves into the kinds of extant movements that align better with their perspective, such as more overtly political movements. In essence, what separates EA from any other movement, in terms of how any of us, and other private individuals, choose in which socially conscious community to spend our own time, is the different assumptions we make in trying to answer the question: 'What is Cause X?'
They're not brought to attention much, but there are sources outlining what the 'fundamental assumptions' of EA are (what are typically called 'EA values'), which I can provide upon request. Within EA, I think pursuing what someone thinks Cause X is takes one of the following forms:
1. If one is confident one's current priority is the best available option one can realistically impact within the EA framework, working on it directly makes sense. Examples include the work of any EA-aligned organization permanently dedicated to one or more specific causes, and efforts to support such organizations.
2. If one is confident one’s current priority is the best available option, but one needs more evidence to convincingly justify it as a plausible top priority in EA, or doesn’t know how individuals can do work to realistically have an impact on the cause, doing research to figure that out makes sense. An example of this kind of work is the research Rethink Priorities is undertaking to identify crucial evidence underpinning fundamental assumptions in causes like wild animal welfare.
3. If one is confident the best available option one will identify is within the EA framework, but has little to no confidence in what those options will be, it makes sense to do very fundamental research that intellectually explores the principles of effective altruism. An example of this kind of work in EA is that of the Global Priorities Institute.
EA Forum content generally considered most valuable tends to be the kind that advances the objectives of one or more of EA's cause areas, or the philosophy of the movement in general. Content focused on EA itself as a social community is a different kind of content, typically regarded as less valuable. I think this judgement can be inferred from which articles tend to win the EA Forum Prizes. The sticking point is that this post is perceived as a particularly valuable example (perhaps the most valuable example) of a kind of post that is generally regarded as less valuable.
Of course the post in question advances the objectives of EA; it wasn't disqualified. It's just that, at least in the evaluation of the judges, a handful of other posts this month were more valuable still.
Whether by coincidence of typically being on the topic of 'community,' or for another reason, I agree we should neither shy away from incentivizing posts that reflect disagreements in EA, or that are critical of EA as it is, nor directly disincentivize disagreement, and I do believe there is a tendency towards the latter. While I am wary of incentivizing discussion of disagreement for its own sake, since that could introduce the perverse incentive of people posting articles that don't do the disagreement justice, overall I believe this is fairly achievable.
I've got a lot on my plate, and it's not as much a personal priority for me in EA, so I won't do it myself, but I would recommend you (or someone else concerned) write an EA Forum article discussing what you think the criteria or priorities for the EA Forum Prizes should be, relative to the kinds of articles that win the prize now, and in particular why it is important that they include incentivizing high-quality treatments of critical disagreements in EA. I would be willing to proofread or otherwise help with writing the article.
Thanks for your response. I was under a false impression. My apologies for the mistake.
Edit: The original text of this comment below remains unedited, but I made the mistake of stating that the CEA sets the conditions of the EA Forum Prizes, when it only provides the funding for them.
Summary: It makes sense that the EA Forum is currently set up to promote and incentivize content that clearly advances one or more of EA's current objectives, framed so it's generally accessible. That content is prioritized based on the view that this is the most important role or function the EA Forum serves as a platform. This is different from the priority of promoting and incentivizing popular content because it raises awareness of, and starts a conversation about, what is a top priority for the greatest number of community members (active on the EA Forum). This post advances the latter rather than the former goal, which is probably why it wouldn't receive an EA Forum Prize. The best way to broach this subject would seem to be starting a conversation about what the priorities for promotion and incentives on the EA Forum should be, and what the criteria for selecting those priorities should be.
Why different posts receive the reward, and why this post didn't, is a matter of what kind of posts people want to reward and incentivize, and why. It also makes sense to keep in mind that the rewards are given, and the EA Forum maintained, by the Centre for Effective Altruism (CEA) as an institution. I'm aware that with the current strategy for the EA Forum, the goal is to promote content that:
is more basic, and doesn't assume advanced background knowledge of one or more particular cause areas.
makes intellectual and/or material progress on the general goals of effective altruism, or successfully appeals to a wide audience about why and how a particular means can be applied to achieve those goals.
This is based on the ultimate goal of having the EA Forum be a platform primarily focused on community-building, both in terms of growing the effective altruism movement and enhancing the level of involvement from people who relate to EA only casually (e.g., inducing those who merely 'subscribe' to EA as a philosophy to personally 'identify' with it, and to change what they personally do to align with EA values).
This contrasts with how the current EA community tends to use Facebook groups, which host conversations that are either more specialized and technical, e.g., about a specific cause area or career, or more social and informal. For the bulk of the currently active EA community, their use of the EA Forum is based on prioritizing conversation about affairs in EA that are both official and general, in that the conversation is, at least in theory, relevant to everyone in EA. It makes sense to a lot of the EA community that this should be a primary purpose of the EA Forum, and they've grown accustomed to using it that way.
The problem is that what much of the EA community sees as a primary priority for the EA Forum's role/function is not the top priority of what is promoted or incentivized under the EA Forum's moderation strategy. The EA Forum serves as a public square for whatever topics and subjects are a priority for the EA community at large. Yet the content being incentivized through rewards, or promoted to the frontpage, is content that advances EA's objectives, as opposed to discussions themed on grievances with the EA community's social dynamics, what a lot of people in EA would call a more 'meta-level' discussion or issue. The dedicated space for the latter on the EA Forum 2.0 so far has been the 'Community' section.
One obvious factor here is that promoting or incentivizing content that raises awareness of disagreements and controversies within EA could be off-putting to a general readership, or could get them involved in ways that distract from, rather than advance, progress on EA's objectives. For what it's worth, I think this post was an unusually fruitful public hashing-out of a common grievance in EA. I also don't believe the CEA declines to reward posts critical of community dynamics out of a desire to starve these discussions of awareness and attention. They consider these conversations important; they merely consider posts that directly advance the objectives of EA as a movement in various ways more valuable.
So, based on the moderation strategy of the EA Forum, there are criteria for awarding EA Forum Prizes that are not aligned with the content that tends to be most popular, for whatever reasons. It's similar to how the Academy Awards don't usually go to the films that earn the most money at the box office. The next step seems to be having a conversation aimed at reconciling what the EA Forum's moderation strategy prioritizes with why the community at large thinks the most upvoted EA Forum posts are the most important and should be incentivized.
Dovetailing Milan, I remember from a discussion in the comments of that post itself, it was reckoned that, even taking into account changes to the karma system in the EA Forum 2.0, that post received the highest absolute number of upvotes of any post in the history of the EA Forum.
For posterity, to reiterate what Habryka said, I am familiar with the case to which he is referring.