English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.
Manuel Del Río Rodríguez 🔹
I mean… if I were a conservative billionaire, I would be extremely wary of the misuse and subversion of the principles that started some foundations (Mellon is the most egregious case, but also Rockefeller, Ford, MacArthur...). A few months ago we had a discussion in this very forum, if memory serves, of the terrible philanthropic choices of MacKenzie Scott. While I obviously think it is desirable for billionaires to spend money on effective charity, I also feel there's a reasonable case to be made that money conventionally routed into philanthropy could do more good if directed toward innovation: sometimes through philanthropy to individuals and early projects, sometimes through investment in companies capable of creating major breakthroughs.
I feel this is directionally correct. I also feel most EAs have a set of axioms and priors (and possibly biases) that will make them indifferent to any sort of criticism like this.
This was such a harrowing read. Thank you for sharing, Frances. What you describe is terrible and unacceptable: both the harassment itself and how badly it appears to have been handled for so long. You deserved much better.
The contrapositive actually works better for me: I tend to like most of the EAs I get to interact with (virtually), but that's not enough to sell me completely on the value set and axioms of the community, beyond adjacency.
Really liked this post, and as an oldie myself (by which I mean in my 40s, which feels quite old compared to the average EA or EA-adjacent person), I resonated a lot with it. In my case, though, I am not an 'old hand' EA: I arrived at it relatively circuitously and recently (about three years ago).
Some have commented, here or elsewhere, that because EA puts so much emphasis on effectiveness, it generally doesn't care much about community building, general recruitment/retention or group satisfaction, and when it half-heartedly tries to engage in these, it does so with a utilitarian logic that doesn't seem congenial to the task. One could make a good case, though, that this isn't a bug but a feature: EA as a resource-optimizer with little time to waste, given the importance of the issues it tries to solve or ameliorate, on dealing with less active, talented and effective people and their needs. One senses an elitist streak inevitably tied to its moral seriousness and focus on results.
On the other hand, I feel communities tend to thrive when they manage to become hospitable, nice places that people are, to different degrees, happy to be in. This is what most successful movements -and religions- manage: come for the values, stay for the group.
Passion and intellectual engagement also help a lot, but these perhaps vary in a way that isn't tractable. Like the OP, I find many of the forum posts dull and uninteresting, but then again, the type of person I am, and my priorities, values and interests, mean I am probably ill-suited to becoming anything more than mildly EA-adjacent, so I don't think I'd be a good benchmark in this regard. I think Will's recent post on EA in the age of AGI hits the nail on the head in many respects, with interesting ideas for revitalizing and updating EA, its actions and its goals. EA might never match the capacity of religions or some other groups for lifelong belonging, but recognizing that limitation, and trying to soften its edges, could make it more resilient.
I really loved this post, probably both because I agree with the core of the thesis as I've understood it (even if I am an atheist) and because I like the style (not a very EA one, but then again my own background is mostly in the Humanities). I think it's spot-on in its recommendations and in its critical appraisal of what effectively moves most people who are not in the subset of young, highly numerical/logical and ambitious nerds who I'd guess are EA's core audience. Then again, there's an elitist streak within EA that might say the value of the movement lies precisely in attracting and focusing on that kind of people.
I found this insightful. I find both communities interesting and overlapping, and I can also perceive the conflicts at the seams, though they seem pretty minor from an outsider's pov. Personally, when all is said and done, I feel I share more beliefs and priors with Rationalism, but I see the two as mostly converging.
It was my lame attempt at making a verb out of the St. Petersburg Paradox: an Expected Value calculation for a coin-tossing game where heads doubles the pot and tails loses everything. The EV is infinite, but in real life you'll end up ruined pretty quickly. SBF had a talk about this with Tyler Cowen and clearly enjoyed biting the bullet:
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That's the other option.
I rather assume SBF was a radical, no-holds-barred, naive Utilitarian who just thought he was smart enough not to get caught with what were (from his pov) minor infringements of the arbitrary rules and norms of the masses, and that the risk was just worth it.
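To make the ruin dynamics concrete, here's a toy Monte Carlo sketch of that 51/49 double-or-nothing game (my own illustration, not anything from the transcript; the `simulate` helper and the trial counts are arbitrary choices):

```python
import random

def simulate(rounds: int, trials: int) -> float:
    """Fraction of trials that survive `rounds` plays of the
    51/49 double-or-nothing game without being wiped out."""
    survivals = 0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            if random.random() < 0.51:
                wealth *= 2.0        # 51%: everything doubles
            else:
                wealth = 0.0         # 49%: everything disappears
                break
        if wealth > 0:
            survivals += 1
    return survivals / trials

# The expected value after n rounds, (2 * 0.51)**n = 1.02**n, grows
# without bound, but the chance of not being ruined is only 0.51**n.
for n in (1, 10, 50):
    print(f"n={n}: survival ~ {simulate(n, 100_000):.4f} (exact {0.51 ** n:.2e})")
```

The EV keeps growing with every extra round, yet the survival probability collapses geometrically: keep playing and you almost surely end up with nothing.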
While I agree that people shouldn't have renounced the EA label after the FTX scandal, I don't quite find your analogy with veganism convincing. It seems to fail to include two very important elements:
SBF's public significance within EA: this is more like if one of the most famous vegan advocates on the planet, the one everybody knows about, was shown not only to consume meat, but to own a rather big meat-packing plant.
Proximity framing: I think one can make a case for SBF being a pure, naive Utilitarian who just Petersburgged himself into bankruptcy and fraud. While EA is not ideologically 'naive' Utilitarian, one can argue that its intellectual foundations aren't far from Sam's (in fact, they significantly overlap), which might non-trivially cast a shadow on them. It is common for EAs to make really counterintuitive EV calculations and take pride in supporting things normies would find highly objectionable, while paying what from the outside might seem like mere lip service to 'oh, yeah, you should abide by socially established rules and norms', paradoxically holding that such abiding is merely strategic and revocable.
Depopulation is Bad
I mildly agree that depopulation is bad, but not by much. The problem is that I suspect our starting views and premises are so different on this that I can't see how they could converge. Very briefly, mine would be something like this:
-Ethics is about agreements between existing agents.
-Future people matter only to the degree that current people care about them.
-No moral duty exists to create people.
-Existing people should not be made worse off for the sake of hypothetical future ones.
I don't think there's a solid argument for the dangers of overpopulation right now or in the near future, and I mostly trust the economic arguments that more people bring increased productivity and progress. Admittedly, there are some issues I can think of that would make this less clear:
-If AGI takes off and doesn't kill us all, it is very likely we can offload most productivity and creativity to it, negating the advantage of bigger populations.
-A lot of the increase in carbon emissions comes from developing countries trying to raise their citizens' consumption capacities and lifestyles. More people with Western-like lifestyles will make it incredibly difficult to lower fossil fuel consumption, so if technology doesn't deliver the necessary breakthroughs, it makes sense to want fewer people so that more can enjoy our type of lifestyle.
-Again with technology: we've been extremely lucky in finding low-hanging fruit that allowed us to expand food production (e.g., fertilizers, the Green Revolution). One can be skeptical of indefinite future breakthroughs, the absence of which could push us back into some Malthusian state.
Do people, on average, have positive or negative externalities (instrumental value)?
I imagine both. Most current calculations would say the positive outweighs the negative, but I can imagine how this could cease to be so.
Do people’s lives, on average, have positive intrinsic value (of a sort that warrants promotion, all else equal)?
Can’t really debate this, as I don’t think I believe in any sort of intrinsic value to begin with.
I am trying to articulate (probably badly) the disconnect I perceive here. I think 'vibes' might sound condescending, but ultimately, you seem to agree that assumptions (like mathematical axioms) are not amenable to disputation. Technically, in philosophical practice, one can try to show, I imagine, that given assumption x, some contradiction (or at least something very generally perceived as wrong and undesirable) follows.
I do share the feeling expressed by Charlie Guthmann here that a lot of starting arguments for moral realists are of the type 'x is obvious/self-evident/feels good to believe/feels worth believing', and when stated that way, they feel equally obviously false to those who don't share those intuitions, like magical thinking ('If you really want something, the universe conspires to make it come about', Paulo Coelho style). More productive strategies of engagement would avoid claims of that sort altogether, and perhaps start by stating what might follow from realist assumptions that could be convincing or persuasive to the other side, and vice versa.
Exactly. What morality is doing is scaffolding something that is pragmatically accepted as good independently of any intrinsic goodness, i.e., individual and/or group flourishing. It is plausible that if we somehow discovered that furthering such flourishing required completely violating some moral framework (even a hypothetical 'true' one), it would be okay to do so. Large-scale cooperation is not an end in itself (at least not for me): it is contingent on creating a framework that maximizes my individual well-being, with perhaps some sacrifices accepted as long as I'm still left better off overall than I would be without the large-scale cooperation and the agreed-upon norms.
I wouldn't put mathematics in the same bag as morality. As per the indispensability argument, one can make a fair case (which one can't for ethics) that the fact that all the hard sciences rely on mathematics to explain things is strong, indirect evidence for its truth (and for some of it actually being 'hard-coded into the universe'). Take the math away and there is no science. Take moral realism away and… nothing happens, really?
I agree that ethics does provide a shared structure for trust, fairness and cooperation, but then it makes much more sense to employ social-contractual language and speak of game-theoretic equilibria. Of course, the problem with this is that it doesn't satisfy the urge some people have to elevate their deeply felt but historically and culturally contingent values into some universal, unavoidable mandate. And we can all feel this when we try, as BB does, to bring up concrete cases that really challenge the values we've interiorized.
They could, but they could also not. Desires and preferences are malleable, although not infinitely so. The critique is presupposing, I feel, a subject who knows in complete detail not only their preferences but also their exact weights, and that this configuration is stable. I think that is a first-approximation model, but it fails to reflect the messier, more complex reality underneath. Still, even accepting the premises, I don't think an anti-realist would say procrastinating in that scenario is 'irrational', but rather that it is 'inefficient' or 'counterproductive' for attaining a stronger goal/desire, and that the subject should take this into account, whatever decision he or she ends up making, which might include changing the weights and importance of the originally 'stronger' desire.
Thanks! I think I can see your pov more clearly now. One thing that often leads me astray is how words latch onto different meanings, which makes discussion and clarification difficult (as with 'realism' and 'objective'). My crux, given what you say, is that I indeed don't see the point of having a neutral, outsider point of view of the universe in ethics. I'd need to think more about it. Trying to be neutral or impartial makes sense in science, where the goal is understanding a mind-independent world. But in ethics, I don't see why that outsider view would have any special authority unless we choose to give it weight. Objectivity in the sense of 'from nowhere' isn't automatically normatively relevant, I feel. I can see why, when pragmatically trying to satisfy your preferences as a human in contact with other humans with their own preferences, it makes sense to include in the social contract some specialized and limited uses of objectivity: they're useful tools for coordination, debate and decision-making, and it benefits the maximization of our personal preferences to have some figures of power (rulers, judges, etc.) who are constrained to follow them. But that wouldn't make them 'true' in any sense: they are just the result of agreements and negotiated duties for attaining certain agreed-upon ends.
I find the jump hard to understand. Your preferences matter to you, not 'objectively': they matter because you want x, y, z. It doesn't matter that your preferences don't matter objectively; you still care about them. You might have a preference for being nice to people, and that will still matter to you regardless of anything else, unless you change your preference, which I guess is possible but not easy (it depends on the preference). As for the principle of indifference, I really struggle to see how it could be meaningful: one has an innate preference for oneself, so whatever uncertainty you have about other sentients, there's no reason at all to grant them and their concerns equal value to yours a priori.
Terminology can be a bugger in these discussions. I think we are accepting, as per BB's own definition at the start of the thread, that Moral Realism basically reduces to accepting the stance-independent view that moral truths exist. As for truth, I mean it in the way it gets used when studying other stance-independent objects: electrons exist, their existence is independent of human minds and/or of humans ever having existed, and saying 'electrons exist' is true because it corresponds to objects of an external, human-independent reality.
What I take from your examples (correct me if I am wrong or misrepresent you) is that you feel moral statements are not as evidently subjective as, say, 'Vanilla ice cream is the best flavor', but not as objective as, say, 'An electron has a negative charge': they live in some space of in-betweenness with respect to those two extremes. I'd still call this anti-realism, as you're just switching from a maximally subjective stance (an individual's particular culinary tastes) to a more general but still stance-dependent one (what a group of experts and/or human and some alien minds might possibly agree upon). I'd say it again: an electron doesn't care what a human or any other creature thinks about its electric charge.
As for each of the bullet points, what I’d say is:
I can see why you'd feel the change from a previous view can be seen as a mistake rather than a preference change -when I first started thinking about morality I felt very strongly inclined toward the strongest moral realism, and I now feel that pov was wrong- but this doesn't imply moral realism so much as that it feels as if moral principles and beliefs have objective truth status, even when they are actually a reorganization of stance-dependent beliefs.
I, on the contrary, don't feel like there could be 'moral experts' -at most, people who seem to live up to their moral beliefs, whatever their knowledge and reasons for having them. Most surveys I've seen -there's a Rationally Speaking episode on this- show that philosophers, and moral philosophers specifically, don't seem to behave more morally than their colleagues and similar social and intellectual peers.
Convergence can be explained through evolutionary game theory, coordination pressures, and social learning, not objective moral truths. That many societies converge on certain norms just shows what tends to work given human psychology and conditions, not that these norms are true in any stance-independent sense. It’s functional success, not moral facthood.
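As a minimal sketch of that convergence story (my own illustration, not from any source cited here; the Stag Hunt payoffs and the `replicator` function are arbitrary assumptions):

```python
def replicator(x: float, steps: int = 300, dt: float = 0.1) -> float:
    """Discrete-time replicator dynamics for a two-strategy Stag Hunt:
    payoff(Stag, Stag) = 4, payoff(Stag, Hare) = 0, payoff(Hare, *) = 3.
    `x` is the share of the population playing Stag."""
    for _ in range(steps):
        f_stag = 4 * x                      # expected payoff of playing Stag
        f_hare = 3                          # Hare pays 3 against anyone
        avg = x * f_stag + (1 - x) * f_hare
        x += dt * x * (f_stag - avg)        # replicator update
    return x

# Populations starting above the 75% threshold converge on the Stag norm;
# those below converge on Hare. Same dynamics, different stable "norms".
for x0 in (0.6, 0.8):
    print(f"start {x0:.2f} -> end {replicator(x0):.3f}")
```

Identical dynamics settle on different stable norms depending on where the population starts, which is exactly the sense in which convergence reflects functional success rather than stance-independent truth.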
I don't think I have much to object to there, but I do think that doesn't look at all like 'stance independence', if we're using that as the criterion for ethical realism. What you're saying seems to boil down, if I understand it correctly, to: given a bunch of intelligent creatures with some shared psychological perceptions of the world and some tendency toward collaboration, it is pretty likely they'll end up arriving at a certain set of shared norms that optimize toward their well-being as a group -and in most cases, as individuals. That makes the set of moral norms that many civilizations eventually converge on something useful for ends x, y, z, but not 'true' or 'independent of human or alien minds'.
I understand the concern that moral facts might seem metaphysically strange, but I don’t think they are any stranger than logical or modal truths.
Not a Philosophy major, so you'll have to put up with my lack of knowledge, but I'd say that logical truths are contingent on the axioms being true, which is determined by how well they seem to match the world and our perceptions of it in the first place. And there are alternatives to classical logic that are 'as true' and generate logical truths as valid as those of classical logic. I'm not sure about modal truths -it is not something I've read about yet. To the extent I grasp them, they appear constructed or definitional, not absolute, e.g.:
“A square cannot be round.” → because of how you define a square
"It is possible that life exists on other planets." → the question is about probabilities
"Necessarily, 2 + 2 = 4." → only if the Peano axioms and/or ZFC are assumed (see the sketch below)
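As a sketch of what 'true only relative to axioms/definitions' looks like in practice, here's how Lean 4 treats the arithmetic example (my illustration; it relies on Lean's built-in definitions of `Nat` and `+`, standing in for Peano-style axioms):

```lean
-- `2 + 2 = 4` is a theorem relative to the definitions of `Nat` and `+`,
-- not a free-floating necessity: both sides compute to the same numeral,
-- so reflexivity closes the goal.
example : 2 + 2 = 4 := rfl

-- The same fact with the successor structure spelled out explicitly:
example : Nat.succ (Nat.succ 0) + Nat.succ (Nat.succ 0)
    = Nat.succ (Nat.succ (Nat.succ (Nat.succ 0))) := rfl
```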
I’m curious how anti-realists would approach serious moral disagreements, such as those involving human rights abuses, without appealing to something deeper than social consensus or personal feeling. Can we say “this is wrong” in any meaningful way if morality is only expressive or constructed?
Can't speak for others, but I can for myself. I'd say, first, that some preferences are widely agreed upon to begin with (at least in liberal, Western societies). When there's a conflict, we have the framework of societal rules and norms to resolve it, which we accept as the best arrangement for maximizing our individual well-being, even if it comes with some trade-offs at times. If there's a serious disagreement between my preferences and those encoded in the rules, norms and contracts, I try to change the latter through the appropriate channels. If I fail, and it is something non-negotiable to me, I would have to leave my society for another better attuned to me.
Just did a search and, rather embarrassingly, I couldn't find an actual long discussion in the forum (memory didn't serve as well as I had thought). I think I conflated the two comments by Ian Turner and Jason on this topic (in the forum post The ugly sides of two approaches to charity by Julia Wise, from January 13th, 2025) with EA-focused criticisms of MacKenzie Scott's donations from this reddit thread, starting from PEEFsmash's post: