There was an interesting paper a couple of years ago in one of the world's best philosophy journals arguing that incommensurability as understood in a lot of the philosophical literature simply isn't possible: https://philarchive.org/rec/DORTCF-2 Though it's not clear whether the alternative view they like is really best described as one on which incommensurability doesn't exist at all, rather than as an alternative theory of incommensurability on which it might still make sense to say that nematode welfare is in some sense not distinguishable from zero but not exactly zero either.
David Mathers 🔸
Reading the section on utilitarianism here, I find it hard to tell exactly what practices you want the community to give up or do less of, and in what circumstances, beyond stopping making strong moral demands on people. (Something which it's far from clear only utilitarianism, and not other moral theories, could be used to justify*.) Suppose an EA org does formal cost-benefit analysis comparing the amount of suffering prevented by two different animal welfare interventions. Is that already "utilitarianism" and bad? If not, what features have to be added to make it bad? If it turns out that the answer is actually "not properly taking into account uncertainty about the accuracy of your cost-benefit analysis", I don't think it's news to people that they have to do that, whether or not they are actually good at doing so in practice.
Or is what you want with regard to utilitarianism not a change in EA practice around selecting interventions, but just for people to stop saying positive things about utilitarianism as a philosophical tradition? To desist from writing academic work that defends it?
*https://research-repository.st-andrews.ac.uk/bitstream/handle/10023/1568/Ashford2003-Ethics113-Demandingness.pdf?sequence=1&isAllowed=y
I said a 50-50 chance of getting caught for trying to make a pathogen that brings about human extinction, not a 50-50 chance of successfully killing all life (far, far harder).
Feasibility obviously depends on that kind of biotech being achievable for small non-expert groups, or at least for small groups only some of whom are experts. But even if it is not feasible, and your position is in fact robust, I think the broader point remains that I don't really believe you would actually kill everyone in that situation.
I think this is fairly fragile to future developments. What if you become convinced that humans do more harm than good, and there is also a short window of time after it becomes technically possible for laypeople to make a doomsday virus in their garage and before the government regulates to prevent this? It seems like, even if you have a 50-50 chance of being caught or whatever, the value of succeeding on your credences would be so high that the expected value, on your credences, of trying to kill everyone might well still be net positive.
I think I sometimes get an unreal vibe from some of your writing not because I actually think there is a danger you would kill everyone, but because I think it's obvious you probably wouldn't, and so you don't really fully endorse ETHU in every feasible situation.
Vasco, do you consider it evidence for or against a theory that it doesn't have recommendations involving large amounts of destruction of humans? I feel like you have a tendency to dodge this issue when pressed on it, by just repeating that your view doesn't in fact recommend supervillain-type stuff given current practical constraints. But that leaves open the question of what you would do if those constraints were loosened somehow.
I think there's an underlying assumption you're making here that you can't act rationally unless you can fully respond to any objection to your action, or provide some sort of perfect rationale for it. Otherwise, it's at least possible to get things right just by actually giving the right weight to other people's views, whether or not you can also explain philosophically why that is the right weight to give them. I think if you assume the picture of rationality on which you need this kind of full justification, pretty much nothing anyone does or feasibly could do is ever rational, and so the question of whether you can do rational cause prioritization without engaging with philosophy becomes uninteresting (the answer is no, but on that picture you can't do it even after engaging with philosophy, or really ever act rationally).
On reflection, my actual view here is maybe that a binary rational/not rational classification isn't very useful; rather, things are more or less rational.
EDIT: I'd also say that something is going wrong if you think no one can ever update on testimony before deciding how much weight to give to other people's opinions. As far as I remember, the literature about conciliationism and the equal weight view is about how you should update on learning propositions that are actually about other people's opinions. But the following at least sometimes happens: someone tells you something that isn't about people's opinions at all, and then you get to add [edit: the proposition expressed by] that statement to your evidence, or do whatever it is you're meant to do with testimony. The updating here isn't on propositions about other people's opinions at all. I don't automatically see why expert testimony about philosophical issues couldn't work like this, at least sometimes, though I guess I can imagine views on which it doesn't (for example, maybe you only get to update on the proposition expressed, rather than just on the fact that the expert expressed it, if the expert's belief in the proposition amounts to knowledge, and no one has philosophical knowledge).
It's definitely good for people to engage with it deeply if they make the results of their engagement public (I don't think most people can outperform the community/the wider non-EA world of experts on their own in terms of optimizing their own decisions). But the question just asked whether it is possible to rationally set priorities without doing a lot of philosophical work yourself, not whether that was the best thing to do.
Sure, but it's an extreme view that it's never ok to outsource epistemic work to other people.
It is possible to rationally prioritise between causes without engaging deeply on philosophical issues
Deference to consensus of people more expert than you is always potentially rational, so this is clearly possible.
On the 7-15% figure: I don't actually see where the idea that it's commonsense that smaller, less intelligent animals suffer less when they are in physical pain comes from. People almost never cite a source for it being commonsense, and I don't recall having had any opinion about it before I encountered academic philosophy. I think it is almost certainly true that people don't care very much about small dumb animals, but there are a variety of reasons why that is only moderate evidence for the claim that ordinary people think they experience less intense pain:
-They might not have ever thought about it, since most people don't feel much need to give philosophical justifications for banal, normal opinions like not caring much about animals.
-Hedonistic utilitarianism is not itself part of commonsense, but without assuming it, you can't quickly and easily move from "what happens to bees isn't important" to "bees have a low capacity for pain".
-We know there are cases where people downgrade the importance of what happens to subjects whom they see as outside their community, even when they definitely don't believe those subjects have a diminished capacity for pain. Many ordinary people are nationalists who don't care that much about foreigners, but they don't think foreigners feel less pain!
-They might just assume that it is unlikely that small simple animals can feel pain at all. This doesn't necessarily mean they also think that, conditional on small simple animals being able to feel pain, they only feel it a little bit.
Independently of what the commonsense prior is here, I'd also say that I have a PhD in the philosophy of consciousness, and I don't think the claim that fewer neurons = less capacity for pain is commonly defended in the academic literature. At most, some people might defend the more general idea that how conscious a state is comes in degrees, and some theories that allow for that might predict that bee pains are not very conscious. But I've never seen any sign that this is a consensus view. In general, "more neurons = more intense pains" seems to play badly with the standard functionalist picture that what makes a particular mental state the mental state it is, is its typical causes and effects, not its intrinsic properties. Not to mention that it seems plausible there could be aliens without neurons who nonetheless felt pain.
The risk is non-zero, but you made a stronger claim that it was "the most probable extinction risk around".
EDIT: As for reasons to think they will reverse: they seem to be a product of liberal modernity, but currently we need a population way, way above the minimum viable number to keep modernity going long-term. Maybe AI could change that, I guess, but it's very hard to make predictions about the demographic trend if AI does all the work.
"Below-replacement fertility is perhaps the simplest and most probable extinction risk around"
For it to present a significant extinction risk, you'd need current demographic trends to persist way past the point where changes in population have completely transformed society, by which point there's no reason to think those trends will still hold.
This is very bad news for longtermism if correct, since it suggests that the value in the far future gained by preventing extinction now is much lower than it would otherwise be.
If you think there might well be forms of naturalism that are true but trivial, is your credence in anti-realism really well over 99%?
This forum probably isn't the place for really getting into the weeds of this, but I'm also a bit worried about accounts of triviality that conflate apriority, or even analyticity, with triviality: maths is not trivial in any sense of "trivial" on which "trivial" means "not worth bothering with". Maybe you can get out of this by saying maths isn't analytic and it's only being analytic that trivializes things, but I don't think it is particularly obvious that there is a sense-making concept of analyticity that doesn't apply to maths. Apparently Neo-Fregeans think that lots of maths is analytic, and as far as I know that is a respected option in the philosophy of math: https://plato.stanford.edu/entries/logicism/#NeoFre
I also wonder about exactly what is being claimed to be trivial: individual identifications of moral properties with naturalistic properties, if they are explicitly claimed to be analytic? Or the claim that moral naturalism is true and there are some analytic truths of this sort? Or both?
Also, do you think semantic claims in general are trivial?
Finally, do you think the naturalists whose claims you consider "trivial" mostly agree with you that their views have the features that you think make for triviality, but disagree that having those features means their views are of no interest? Or do most of them think their claims lack the features you think make for triviality? Or do you think most of them just haven't thought about it / don't have a good-faith substantive response?
So your claim is that naturalists are just stipulating a particular meaning of their own for moral terms? Can you say why you think this? Don't some naturalists just defend the idea that moral properties could be identical with complex sociological properties without even saying *which* properties? How could those naturalists be engaging in stipulative definition, even accidentally?
I'd also say that this only bears on the truth/falsity of naturalism fairly indirectly. There's no particular connection between whether naturalism is actually true and whether some group of naturalist thinkers happen to have stipulatively defined a moral term, although I guess if most defenses of naturalism did this, that would be evidence that naturalism couldn't be defended in other ways, which is evidence against its truth.
Is being trivial and of low interest evidence that naturalist forms of realism are *false*? "Red things are red" is boring and trivial, but my credence in it is way above 0.99.
Yeah, I think I recall David Thorstad complaining that Ord's estimate was far too high also.
Be careful not to conflate "existential risk", in the special Bostrom-derived definition that I think Ord, and probably Will as well, are using, with "extinction risk" though. X-risk from climate *can* be far higher than extinction risk, because regressing to a pre-industrial state and then not succeeding in reindustrialising (perhaps because easily accessible coal has been used up) counts as an existential risk, even though it doesn't involve literal extinction. (Though from memory, I think Ord is quite dismissive of the possibility that there won't be enough accessible coal to reindustrialise, but I think Will is a bit more concerned about this?)
Is there actually an official IPCC position on how likely degrowth from climate impacts is? I had a vague sense that they were projecting a higher world GDP in 2100 than now, but when I tried to find evidence of this for 15 minutes or so, I couldn't actually find any. (I'm aware that even if that is the official IPCC best-guess position, it does not necessarily mean that climate experts are less worried about X-risk from climate than AI experts are about X-risk from AI.)
What did the IPCC people say exactly?
https://www.nytimes.com/2025/07/23/health/pepfar-shutdown.html
PEPFAR maybe still being killed off after all :(