English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.
Manuel Del Río Rodríguez 🔹
I really loved this post, probably both because I agree with the core of the thesis as I’ve understood it (even if I am an atheist) and because I like the style (not a very EA one, but then again my own background is mostly in the Humanities). I think it’s spot-on in its recommendations and in its critical appraisal of what is effective at moving most people who are not in the subset of young, highly numerical/logical and ambitious nerds who I’d guess are the core audience of EA. Then again, there’s an elitist streak within EA that might say that the value of the movement lies precisely in attracting and focusing on that kind of people.
I found this insightful. I find both communities interesting and overlapping, and while I can also perceive the conflicts at the seams, they seem pretty minor from an outsider’s pov. Personally, I feel I share more beliefs and priors with Rationalism when all is said and done, but I see them mostly converging.
It was my lame attempt at making a verb out of the St. Petersburg Paradox, where an Expected Value calculation goes like this: I play a coin-tossing game in which, if I get heads, the pot doubles, and if I get tails, I lose everything. The EV is infinite, but in real life you’ll end up ruined pretty quickly (there’s a toy simulation of this dynamic after the quote below). SBF had a talk about this with Tyler Cowen and clearly enjoyed biting the bullet:
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.

I am rather assuming SBF was a radical, no-holds-barred, naive Utilitarian who just thought he was smart enough not to get caught with (from his pov) minor infringements of arbitrary rules and norms of the masses, and that the risk was just worth it.
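To make that dynamic concrete, here is a minimal toy simulation (my own sketch, using a made-up starting pot and the 51/49 odds from the transcript, not anything SBF or Cowen computed): betting the whole pot over and over has an expected value that grows every round, while the probability of ending up with anything at all collapses toward zero.

```python
import random

# Toy sketch of the 51/49 double-or-nothing game: the expected value after n
# rounds is (2 * 0.51)**n = 1.02**n, which grows without bound, but the chance
# of never having gone bust is only 0.51**n.
P_WIN, ROUNDS, TRIALS = 0.51, 100, 100_000

def play(rounds: int) -> float:
    pot = 1.0
    for _ in range(rounds):
        if random.random() < P_WIN:
            pot *= 2        # heads: the pot doubles
        else:
            return 0.0      # tails: everything is gone
    return pot

results = [play(ROUNDS) for _ in range(TRIALS)]
print("theoretical EV after 100 rounds:", 1.02 ** ROUNDS)   # ≈ 7.24
print("P(never ruined):", 0.51 ** ROUNDS)                   # ≈ 5.7e-30
print("simulated average pot:", sum(results) / TRIALS)      # essentially always 0.0
```

On paper the bet looks better every round; in any actual run you are almost surely left with nothing, which is the gap between naive EV maximization and survivable decision-making that I was gesturing at.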
While I agree that people shouldn’t have renounced the EA label after the FTX scandal, I don’t quite find your simile with veganism convincing. It seems to fail to include two very important elements:
SBF’s public significance within EA: this is more like if one of the most famous vegan advocates on the planet, the one everybody knows about, were shown not only to consume meat, but to own a rather big meat-packing plant.
Proximity framing: I think one can make a case for SBF being a pure, naive Utilitarian who just Petersburgged himself into bankruptcy and fraud. While EA is not ideologically ‘naive’ Utilitarian, one can argue that its intellectual foundations aren’t far from Sam’s (in fact, they significantly overlap), and that the scandal might non-trivially cast a shadow on them. It is common for EAs to make really counterintuitive EV calculations and take pride in supporting stuff normies would find highly objectionable, while paying what from the outside might seem like mere lip-service to ‘oh, yeah, you should abide by socially established rules and norms’, paradoxically holding that such abiding is merely strategic and revocable.
Depopulation is Bad
I mildly agree that depopulation is bad, but not by much. The problem is that I suspect our starting views and premises are so different on this that I can’t see how they could converge. Very briefly, mine would be something like this:
-Ethics is about agreements between existing agents.
-Future people matter only to the degree that current people care about them.
-No moral duty exists to create people.
-Existing people should not be made worse off for the sake of hypothetical future ones.

I don’t think there’s a solid argument for the dangers of overpopulation right now or in the near future, and I mostly trust the economic arguments about increased productivity and progress that come from more people. Admittedly, there are some issues that I can think of that would make this less clear:
-If AGI takes off and doesn’t kill us all, it is very likely we can offload most of the productivity and creativity to it, negating the advantage of bigger populations.
-A lot of the increase in carbon emissions comes from developing countries that are trying to raise the consumption capacity and lifestyle of their citizens. If technology doesn’t deliver the breakthroughs, more people with more Western-like lifestyles will make it incredibly difficult to lower fossil fuel consumption, so it would make sense to want fewer people so that more of them can enjoy our type of lifestyle.
-Again with technology, we’ve been extremely lucky in finding low-hanging fruit that allowed us to expand food production (i.e., fertilizers, the Green Revolution). One can be skeptical of indefinite future breakthroughs, the lack of which could push us back down to some Malthusian state.
Do people, on average, have positive or negative externalities (instrumental value)?
Both, I imagine. Most current calculations would say the positive outweigh the negative, but I can imagine how that could cease to be so.
Do people’s lives, on average, have positive intrinsic value (of a sort that warrants promotion, all else equal)?
Can’t really debate this, as I don’t think I believe in any sort of intrinsic value to begin with.
I am trying to articulate (probably wrongly) the disconnect I perceive here. I think ‘vibes’ might sound condescending, but ultimately, you seem to agree that assumptions (like mathematical axioms) are not amenable to disputation. Technically, in philosophical practice, one can try to show, I imagine, that given assumption x some contradiction (or at least something very generally perceived as wrong and undesirable) follows.
I do share the feeling expressed by Charlie Guthmann here that a lot of starting arguments for moral realists are just of the type ‘x is obvious/self-evident/feels good to believe/feels worth believing’, and when stated that way, they feel just as obviously false to those who don’t share those intuitions, and like magical thinking (‘If you really want something, the universe conspires to make it come about’, Paulo Coelho style). I feel more productive strategies for engagement would avoid claims of that sort altogether, and perhaps start by stating what might follow from realist assumptions that could be convincing/persuasive to the other side, and vice versa.
Exactly. What morality is doing is scaffolding something that is pragmatically accepted as good, external to any intrinsic goodness, i.e., individual and/or group flourishing. It is plausible that if we somehow discovered that furthering such flourishing required completely violating some moral framework (even a hypothetical ‘true’ one), it would be okay to do it. Large-scale cooperation is not an end in itself (at least not for me): it is contingent on creating a framework that maximizes my individual well-being, with perhaps some sacrifices accepted as long as I’m still left better off overall than I would be without the large-scale cooperation and the agreed-upon norms.
I wouldn’t put mathematics in the same bag as morality. As per the indispensability argument, one can make a fair case (which one can’t for ethics) that strong, indirect evidence for the truth of mathematics (and for some parts of it actually being ‘hard-coded into the universe’) is that all the hard sciences rely on it to explain things. Take the math away and there is no science. Take moral realism away and… nothing happens, really?
I agree that ethics does provide a shared structure for trust, fairness, and cooperation, but then it makes much more sense to employ social-contractual language and speak about game-theoretic equilibria. Of course, the problem with this is that it doesn’t satisfy the urge some people have to force their deeply felt but historically and culturally contingent values into some universal, unavoidable mandate. And we can all feel this when we try, as BB does, to bring up concrete cases that really challenge the values we’ve internalized.
They could, but they could also not. Desires and preferences are malleable, although not infinitely so. The critique is presupposing, I feel, a subject who knows in complete detail not only their preferences but their exact weights, and that this configuration is stable. I think that is a first-approximation model, but it fails to reflect the messier and more complex reality underneath. Still, even accepting the premises, I don’t think an anti-realist would say procrastinating in that scenario is ‘irrational’, but rather that it is ‘inefficient’ or ‘counterproductive’ for attaining a stronger goal/desire, and that the subject should take this into account, whatever decision he or she ends up making, which might include changing the weights and importance of the originally ‘stronger’ desire.
Thanks! I think I can see your pov more clearly now. One thing that often leads me astray is how words seem to latch onto different meanings, which makes discussion and clarification difficult (as with ‘realism’ and ‘objective’). I think my crux, given what you say, is that I indeed don’t see the point of having a neutral, outsider point of view of the universe in ethics. I’d need to think more about it. Trying to be neutral or impartial makes sense in science, where the goal is understanding a mind-independent world. But in ethics, I don’t see why that outsider view would have any special authority unless we choose to give it weight. Objectivity in the sense of ‘from nowhere’ isn’t automatically normatively relevant, I feel. I can see why, when pragmatically trying to satisfy your preferences as a human in contact with other humans with their own preferences, it makes sense to include in the social contract some specialized and limited uses of objectivity: they’re useful tools for coordination, debate and decision-making, and it benefits the maximization of our personal preferences to have some figures of power (rulers, judges, etc.) who are constrained to follow them. But that wouldn’t make them ‘true’ in any sense: they are just the result of agreements and negotiated duties for attaining certain agreed-upon ends.
I find the jump hard to understand. Your preferences matter to you, not ‘objectively’; they matter because you want x, y, z. It doesn’t matter that your preferences don’t matter objectively: you still care about them. You might have a preference for being nice to people, and that will still matter to you regardless of anything else, unless you change your preference, which I guess is possible but not easy; it depends on the preference. As for the principle of indifference, I really struggle to see how it could be meaningful, because one has an innate preference for oneself, so whatever uncertainty you have about other sentients, there’s no reason at all to grant them and their concerns equal value to yours a priori.
Terminology can be a bugger in these discussions. I think we are accepting, as per BB’s own definition at the start of the thread, that Moral Realism basically reduces to accepting that stance-independent moral truths exist. As for truth, I mean it in the way it gets used when studying other stance-independent objects, i.e., electrons exist and their existence is independent of human minds and/or of humans having ever existed, and saying ‘electrons exist’ is true because of its correspondence to objects of an external, human-independent reality.
What I take from your examples (correct me if I am wrong or if I misrepresent you) is that you feel moral statements are not as evidently subjective as, say, ‘Vanilla ice-cream is the best flavor’, but not as objective as, say, ‘An electron has a negative charge’, living in some space of in-betweenness with respect to those two extremes. I’d still call this anti-realism, as you’re just switching from a maximally subjective stance (an individual’s particular culinary tastes) to a more general, but still stance-dependent one (what a group of experts and/or human and some alien minds might possibly agree upon). I’d say, again, that an electron doesn’t care what a human or any other creature thinks about its electric charge.
As for each of the bullet points, what I’d say is:
I can see why a change from a previous view can be seen as correcting a mistake rather than as a preference change -when I first started thinking about morality, I felt very strongly inclined towards the strongest moral realism, and I now feel that pov was wrong- but this doesn’t imply moral realism so much as that it feels as if moral principles and beliefs have objective truth status, even if the change was actually a reorganization of stance-dependent beliefs.
I, on the contrary, don’t feel like there could be ‘moral experts’ - at most, people who seem to live up to their moral beliefs, whatever the knowledge and reasons for having them. Most surveys I’ve seen -there’s a Rationally Speaking episode on this- show that Philosophers and Moral Philosophers specifically don’t seem to behave more morally than their colleagues and similar social and intellectual peers.
Convergence can be explained through evolutionary game theory, coordination pressures, and social learning, not objective moral truths. That many societies converge on certain norms just shows what tends to work given human psychology and conditions, not that these norms are true in any stance-independent sense. It’s functional success, not moral facthood.
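To make the ‘functional success, not moral facthood’ point concrete, here is a minimal toy sketch (entirely my own illustration, with made-up stag-hunt-style payoffs, not anything from the exchange above): under simple replicator dynamics, a population converges on whichever norm the payoffs and the starting mix happen to favour, with no appeal to stance-independent truth.

```python
# Toy replicator dynamics for a 'cooperate' norm in a coordination game with
# assumed (hypothetical) payoffs. Convergence depends only on the payoffs and
# on the initial share of cooperators.
PAYOFF = {"CC": 4.0, "CD": 1.0, "DC": 3.0, "DD": 2.0}

def evolve(x: float, generations: int = 200) -> float:
    """x = share of the population following the cooperative norm."""
    for _ in range(generations):
        f_c = x * PAYOFF["CC"] + (1 - x) * PAYOFF["CD"]  # cooperator fitness
        f_d = x * PAYOFF["DC"] + (1 - x) * PAYOFF["DD"]  # defector fitness
        avg = x * f_c + (1 - x) * f_d
        x = x * f_c / avg                                # replicator update
    return x

for start in (0.3, 0.6):
    print(f"initial cooperator share {start:.0%} -> long run ≈ {evolve(start):.2f}")
# 30% -> ≈ 0.00 (the norm dies out); 60% -> ≈ 1.00 (the norm becomes universal)
```

The same dynamic, with different payoffs or a different starting mix, stabilizes a different ‘morality’, which is the sense in which convergence tells us what works rather than what is true.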
I don’t think I have much to object to there, but I do think that doesn’t look at all like ‘stance independence’ if we’re using that as the criterion for ethical realism. What you’re saying seems, if I understand it correctly, to boil down to: ‘given a bunch of intelligent creatures with some shared psychological perceptions of the world and some tendency towards collaboration, it is pretty likely they’ll end up arriving at a certain set of shared norms that optimize towards their well-being as a group, and in most cases as individuals’. That makes the ‘set of moral norms that a lot of civilizations eventually converge on’ something useful for ends x, y, z, but not ‘true’ and ‘independent of human or alien minds’.
I understand the concern that moral facts might seem metaphysically strange, but I don’t think they are any stranger than logical or modal truths.
Not a Philosophy major, so you’ll have to put up with my lack of knowledge, but I think I’d say that logical truths are contingent on the axioms being true, which is determined by how well they seem to match the world and our perceptions of it in the first place. And there are alternatives to classical logic that are ‘as true’ and generate logical truths as valid as those of classical logic. Not sure about modal truths -it is not something I’ve read about yet-. To the extent I grasp them, they appear constructed or definitional, not absolute, i.e.:
“A square cannot be round.” → because of how you define a square
“It is possible that life exists on other planets.” → the question is about probabilities
“Necessarily, 2 + 2 = 4.” → only if the Peano axioms and ZFC are assumed (see the sketch below)
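As an illustration of what I mean by the necessity living in the definitions, here is a one-line Lean 4 sketch (my own example, using Lean’s built-in, Peano-style natural numbers):

```lean
-- Once the inductive definition of Nat and of addition are fixed,
-- "2 + 2 = 4" follows by pure computation: the proof is just reflexivity.
example : 2 + 2 = 4 := rfl
```

Change the underlying definitions and you prove different things; the theorem is necessary only relative to the system you chose.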
I’m curious how anti-realists would approach serious moral disagreements, such as those involving human rights abuses, without appealing to something deeper than social consensus or personal feeling. Can we say “this is wrong” in any meaningful way if morality is only expressive or constructed?
Can’t speak for others, but I can for myself. I’d say that, first, some preferences are widely agreed upon to begin with (at least in liberal, Western societies). When there’s a conflict, we have the framework of societal rules and norms to solve it, which we accept as the best arrangement for maximizing our individual well-being, even if it comes with some trade-offs at times. If there’s a serious disagreement between my preferences and those encoded in the rules, norms and contracts, I try to change those through the appropriate channels. If I fail and it is something non-negotiable to me, I would have to leave my society and go to another that is better attuned to me.
I’d like to hear more about this too. From a very simplified overview, what I seemed to get was that the core of the argument was just ‘everything is reducible to intuitions, so moral intuitions are as good as any other, including those behind accepting logic or realist views of the world’.
I think I concede that ‘pleasure is good for the being experiencing it’. I don’t think this leads to where you take it, though. It is good for me to eat meat, but it probably isn’t good for the animal. And in the thought experiment you propose, I prefer world A, where I’m eating bacon and the pig is dead, to world B, where the pig is feeling fine and I’m eating broccoli. You can’t jump from what’s good for one to what’s good for many. Besides, granting that something is good for the one experiencing it feels a bit broad: its being good for him doesn’t turn it into some law that must be obeyed, even by him/her. There are trade-offs against other desires, you might also want to consider (or not) long-term effects, etc… It also has no ontological status as ‘the good’, just as there is no Platonic form of ‘the good’ floating in some heaven.
I fail to follow the apple example. Why should I epistemically have eaten the apple? Either I have a true goal (and desire) to eat it or not. If I do, I will not refuse to eat it. If you assume it is a goal, I am assuming it is true, although people don’t generally have those sorts of goals, I think. They look more like… lists of preferences, each with its own degree of strength. Some are core preferences that are difficult to change, while others are very mutable.
If by epistemic normativity you mean something like ‘there are x, y, z reasons we should trust when we want to have proper beliefs about things’, what I’d say is that this doesn’t seem normative to me. I personally value truth very highly as an end in itself, but even if I didn’t, truthful information is useful for acting to satisfy your desires; I just don’t see why one has an obligation to pursue it. If someone doesn’t follow the effective means to their ends, they’re being ineffective or foolish, but not violating any norm. If you want a bridge to stand, build it this way; otherwise, it falls. But there’s no moral or rational requirement to build it that way: you just won’t get what you want.
Strong disagree. I am not closed to being persuaded on this, but I haven’t found your arguments convincing yet.
Even before going into details, though, I’d like to start with the end. I see that you find it intuitively very hard to reject the stance-independent wrongness of torture. If it boils down to intuitions, I find it as hard to accept that morality could be anything other than a human invention that is useful for some instrumental needs, and nothing more.
I am still starting to explore the philosophical grounds for my intuitions, but at the moment, I think a valid summary is something like this:
Moral Anti-Realism: moral statements do not express stance-independent truths. There is no objective moral reality analogous to mathematical or physical facts.
Contractarian Ethics: moral obligations are agreements between rational agents. Ethics emerges from social contracts (negotiated, context-sensitive rules for mutual benefit) not from metaphysical truths.
Subjective Preference: Moral norms are built from individual preferences, desires, and aversions filtered through the pragmatic need to live together peacefully and negotiate conflicts. Some preferences (e.g. for not being tortured) are near-universal, but still not “objective.”
Rationality is procedural and instrumental: it is about coherently pursuing one’s preferences and goals, given the available information, constraints and beliefs.
Skeptical of all intuitions: Moral intuitions are evolved (biologically and culturally) emotional heuristics which we’ve internalized, and been policed and indoctrinated into, since childhood.
Nitpicking some of the stuff you talk about:
But lots of moral statements just really don’t seem like any of these. The wrongness of slavery, the holocaust, baby torture, stabbing people in the eye—it seems like all these things really are wrong and this fact doesn’t depend on what people think about it.
‘Seems’ is a verb you use a lot throughout this section. Lots of things seem, but we’ve learned not to trust intuitions: the sun seems to move and rise in the East. With empirical stuff, we can at least make observations and measurements, develop theories and put them to the test. We can’t seem to have anything similar with ethics. A plausible explanation for why the things you list seem morally true to us is the same as for why, from the top of a skyscraper, the streets below all seem to radiate outward from your position. We are Westerners, part of a culture with specific values that has, through historical accidents, been tremendously successful materially, economically and politically. It is easy to imagine we are, if not at a pinnacle, at least ‘on the right road of history’, and that everyone in the present will have to converge on and deepen our project. I find it much more likely that 500 years from now our successors -probably non-WEIRD people, perhaps AI- will look on our moral fantasies with the same contempt we have for the cults of the Roman and Babylonian gods. You’re assuming, as if it were obvious, a narrative of linear moral progress that I think is really open to dispute.
If I have a reason to prevent my own suffering, it seems that suffering is bad, which gives me a moral reason to prevent it.
Suffering is bad for me. It seems plausible to assume it will also be bad for others, which means I should use this piece of information as part of the bargaining toolkit for game-theoretically negotiating with others the satisfaction of my preferences, with the minimal sacrifice I can get away with while maximizing the overall result (but only because the latter ultimately gives me more satisfaction than I would get in the absence of agreements and contracts).
But this means that moral anti-realists must think that you can never have a reason to care about something independent of what you actually do care about. This is crazy, as shown by the following cases:
I fail to see where you’re going with these contrived examples of yours. What people desire is (I’d say always, but let’s caveat it a bit) what gives them pleasure; cases where this doesn’t hold don’t strike me as plausible. But even if that weren’t so, I don’t see the irrationality in these examples. You’re assuming a very specific, value-laden view of rationality, one that says people are “irrational” if they pursue ends you see as harmful, malformed, or futile. But I imagine anti-realists view rationality as I stated above: as consistency between means and ends. If someone has strange or harmful goals, that may be sad or tragic to you, but it’s not irrational on their terms. You’re just begging the question by smuggling in your own evaluative framework as if it were universal.
But just as there are visual appearances, there are intellectual appearances. Just as it appears to me that there’s a table in front of me, it appears to me that it’s wrong to torture babies. Just as I should think there’s a table absent a good reason to doubt it, I should think it’s wrong to torture babies. In fact, I should be more confident in the wrongness of torturing babies, because that seems less likely to be the result of error. It seems more likely I’m hallucinating a table than that I’m wrong about the wrongness of baby torture.
This analogy fails because it treats moral intuition like sensory perception, but without acknowledging the critical difference: empirical perceptions are testable, correctable, and embedded in a shared external reality. I might trust that I see a table but I can measure it, predict how it behaves, let others confirm it. Moral intuitions don’t offer that. They’re not observable facts but untestable gut reactions. Saying “I just see that baby torture is wrong” is not evidence, it’s a psychological datum, not a method of discovery. You’re proposing a methodology where feeling intensely about something counts as knowing it, even in the absence of any testing, mechanism, or independent verification. That’s not realism; it’s intuitionism dressed as epistemology.
We all begin inquiry from things that “seem right”, but in empirical and mathematical domains, we don’t stop there. We test, predict, measure, or prove. That’s the key difference: perception and intuition may guide us initially, but scientific realism and mathematical Platonism justify beliefs by their explanatory power, coherence, and predictive success. In contrast, moral realism lacks any comparable mechanism. You can’t test a moral intuition the way you test a physical hypothesis or formalize a logical inference. There’s no experiment, model, or predictive structure that tells us whether “baby torture is wrong” is a metaphysical fact or just a deeply shared psychological aversion. You’re claiming parity where there’s a methodological gap.
As for the claim that critics of intuition rely on intuitions too: there’s a difference between relying on formal coherence (e.g., basic logical tautologies) and on moral gut feelings. The probability example confuses things, as Bayes’ theorem and the conjunction rule aren’t known by intuition but by mathematical derivation, and our confidence in them comes from their internal consistency and predictive accuracy, not how they “feel.”
I’d also like to go into the last two big topics you propose, i.e., evolutionary debunking arguments and physicalism, but this post is already too long, and probably not conducive to a conversation.
Hello, and thanks for engaging with it. A couple of notes about the points you mention:
I have only read Thorstad’s arguments as they appear summarized in the book (he does have a blog in which one of his series, which I haven’t read yet, goes into detail on this: https://reflectivealtruism.com/category/my-papers/existential-risk-pessimism ). I have gone back to the chapter, and his thesis, in a bit more detail, is that Ord’s argument is predicated on a lot of questionable assumptions, i.e., that the time of perils will be short, that the current moment is very dangerous, and that the future will be much less dangerous and will stay that way for a long time. He questions the evidence for all those assumptions, but particularly the last: “For humans to survive for a billion years, the annual average risk of our extinction needs to be no higher than one in a billion. That just doesn’t seem plausible—and it seems even less plausible that we could know something like that this far in advance.” He goes on to expand on this, citing the extreme uncertainty of events far in the future, the unlikelihood that treaties or world government could keep risk low, and the vagueness of ‘becoming more intelligent’, with AGI being absurdly implausible (“The claim that humanity will soon develop superhuman artificial agents is controversial enough,” he writes. “The follow-up claim that superintelligent artificial systems will be so insightful that they can foresee and prevent nearly every future risk is, to most outside observers, gag-inducingly counterintuitive.”).
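For what it’s worth, the arithmetic behind that quoted one-in-a-billion figure is easy to check with a back-of-the-envelope sketch (my own calculation, assuming a constant annual risk, which is of course a simplification):

```python
import math

# With a constant annual extinction risk r, P(survive N years) = (1 - r)**N,
# which is approximately exp(-r * N) for small r.
N = 1_000_000_000  # a billion years

for r in (1e-9, 1e-7, 1e-6):
    print(f"annual risk {r:.0e} -> P(survive a billion years) ≈ {math.exp(-r * N):.3g}")
# 1e-09 -> ≈ 0.368
# 1e-07 -> ≈ 3.72e-44
# 1e-06 -> ≈ 0 (effectively zero)
```

Even an annual risk of one in a million, sustained, makes billion-year survival vanishingly unlikely, which is why Thorstad leans so hard on questioning that assumption.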
As for the second statement, my point wasn’t that extinction always trumps everything else in expected value calculations, but that if you grant the concept of existential risk any credence, then -ceteris paribus- the sheer scale of what’s at stake (e.g., billions of future lives across time and space) makes extinction risks of overriding importance in principle. That doesn’t mean that catastrophic-but-non-extinction events are negligible, just that their moral gravity derives from how they affect long-term survival and flourishing. I think you make a very good argument that massive, non-extinction catastrophes might be nearly as bad as extinction if they severely damage humanity’s trajectory, but I feel it is highly speculative about the difficulty of making comebacks and about the likelihood of extreme climate change, and I still find the difference between existential risk and catastrophe(s) significant.
Really liked this post, and as an oldie myself (by which I mean in my 40s, which feels quite old compared to the average EA or EA-adjacent person), I resonated a lot with it. In my case, I am not an ‘old hand EA’, though: I arrived at it relatively circuitously and recently (about 3 years ago).
Some have commented, here or elsewhere, that EA’s heavy emphasis on effectiveness means it generally doesn’t care much about community building, general recruitment/retention or group satisfaction, and that when it half-heartedly tries to engage in these, it does so with a utilitarian logic that doesn’t seem congenial to the task. One could make a good case, though, that this isn’t a bug but a feature: EA as a resource-optimizer with little time to waste, given the importance of the issues it tries to solve or ameliorate, on people and needs that are less active, talented and effective. One senses an elitist streak inevitably tied to its moral seriousness and focus on results.
On the other hand, I feel communities tend to thrive when they manage to become hospitable, nice places that people are happy to be in, to different degrees. This is what most successful movements -and religions- manage to do: come for the values, stay for the group.
Passion and intellectual engagement also help a lot, but these perhaps vary in ways that aren’t tractable. Like the OP, I find many of the forum posts dull and uninteresting, but then again, the type of person I am, and my priorities, values and interests, mean I am probably badly suited to becoming anything more than mildly EA-adjacent, so I don’t think I’d be a good benchmark in this regard. I think Will’s recent post on EA in the age of AGI does hit the nail on the head in many respects, with interesting ideas for revitalizing and updating EA, its actions and its goals. EA might never match religion’s or some groups’ capacity for lifelong belonging, but recognizing that limitation, and trying to soften its edges, could make it more resilient.