Yeah, I think this is probably right. My point isn't that there is nothing troubling or potentially dangerous about Vasco's reasoning - that's clearly not true - but just that people should be careful in how they describe it, and not claim it rests on more controversial starting premises than it actually does. (In particular, it doesn't have hedonism or consequentialism as a starting premise; obviously it does make some controversial assumptions.)
Not giving to global health charities.
In any case though, I think what I mostly object to isn't the claim that, if you endorse Vasco's reasoning because you are a utilitarian, that counts as "naive", but rather the use of the "naive utilitarian" label to imply that his reasoning:
a) is distinctively utilitarian rather than being compatible with a variety of moral views
b) commits you to being prepared to use violence/deception.
I'd distinguish here between actions and reasons for action. The action is not conventionally immoral, but the reason for action is. I think this is probably a significant distinction, though how it is significant doesn't feel very clear to me.
I think Vasco's personal strong endorsement of hedonistic utilitarianism has maybe caused confusion about the degree to which the meat eating problem can be avoided just by abandoning utilitarianism for standard reasons. And I also worry some of the criticism of Vasco is stretching the term "naive utilitarian" beyond its standard meaning.
On the first point, an overall ethical view could imply that global health donations are bad for the reasons Vasco gives, even if it was quite distant from hedonistic utilitarianism in a number of ways:
- Firstly, this doesn't really seem to be a question of preference versus hedonistic utilitarianism. Presumably there is some sense in which preference utilitarianism counts animal suffering as frustrated preference, and hence still bad. So the frustration of animal preferences caused by meat eating could still outweigh the value, in terms of satisfied preference, of saving human lives. It's unclear to me which of preference or hedonistic utilitarianism is more likely to deliver this result, but I don't see an obvious reason why hedonistic utilitarianism is more likely to.
- Secondly, and more importantly, just valuing other things apart from pleasure and suffering won't necessarily reverse Vasco's conclusion that meat consumption patterns mean saving human lives does more harm than good, though you'd need to re-do the analysis. On any sane pluralist view where things other than pleasure and suffering matter (and as it happens, I fairly strongly reject pure hedonism), suffering is still bad. So it could still be the case that the badness of the suffering caused by the average human through meat consumption outweighs the value of the hedonic and non-hedonic goods that the typical beneficiary of life-saving global health charities will experience in a lifetime. It is likely true that bringing in non-hedonic goods makes it more likely that the goods experienced in a human lifetime come out as outweighing the suffering caused by meat consumption, but more likely doesn't mean "guaranteed" or even "probability above 50%"; it's purely relative.
- Thirdly, nothing about Vasco's reasoning here implies the controversial consequentialist claim that you should murder people or otherwise violate human rights, or break standard commonsense moral rules, whenever that produces the best consequences. It's perfectly coherent to think that a human life continuing is net harmful because of meat consumption, but also that you shouldn't murder that person, or try to bring about their death through conventionally immoral means like lying, law-breaking etc., because consequentialism is false and the ends don't justify the means. It's true that Vasco is (effectively) recommending one particular action that would bring about deaths in response to the harms humans would cause, namely not donating to global health charities. But this is not a conventionally immoral or obviously rights-violating action. Common sense morality says that you are allowed not to give to global health charities for any number of reasons: you want to spend the money on your own children, you want to give to research into the rare cancer that killed your Dad, etc. So common sense morality is consistent with the recipients of global health charities not having a right to our help, and with it being morally permissible to withhold that help even if it leads to their deaths*.
The meat eater/eating problem is an issue for anyone who:
A) Donates to global health
B) Thinks that animal suffering can in principle be compared to, and sometimes outweigh, large benefits to humans
C) Thinks we shouldn't make donations that are net harmful.
That is surely a far wider group than "hedonistic utilitarians", not just in principle but in practice. I say this not to defend Vasco's personal honor (I find total commitment to hedonistic utilitarianism a bit scary, as it happens), but because I don't think other people should avoid thinking about the potential inconsistency in their views here. Even if, like Karthik, you are 100% certain that the correct reaction to any inconsistency can't possibly be deciding that it is net good when the average child dies, it is probably still good to think about which of your other commitments you want to give up to avoid inconsistency.
As for "naive utilitarianism", as I understood this term it doesn't mean "embracing any conclusion that conflicts with common sense, because you are a utilitarian and believe it is correct from a utilitarian point of view." Rather, a "naive utilitarian" was a utilitarian who:
A) Tries to make moral decisions on the basis of explicit utility calculations
and
B) Is prepared to perform conventionally highly immoral and norm-breaking actions like stealing and murder, if an explicit utility calculation implies they are optimal.
And part of the point of calling this "naive" was that such a decision procedure was not only contrary to common sense, but also unlikely to actually maximize utility.
Vasco's post isn't a clear example of naive utilitarianism in this sense, because he isn't recommending any action that is clearly highly conventionally immoral and norm-breaking. The only action he is recommending, if any, is not donating to global health charities. His reasons for thinking this are definitely extremely inconsistent with common sense, but that's not enough to make it "naive utilitarianism" as I understand the term, because naive utilitarianism is distinctively about pursuing utilitarian ends through ruthless/violent/deceptive means.
*(I'm not denying there might be some less commonsensical non-consequentialist moral views on which aid recipients do have a right to our help.)
"I think your posting about him undermines your credibility elsewhere." This seems worryingly like epistemic closure to me (though it depends a bit on what "elsewhere" refers to). A lot of Thorstad's work is philosophical criticism of longtermist arguments, and not super-technical criticism either. You can surely just assess that for yourself rather than discounting it because of what he said about an unrelated topic, unless he was outright lying. I mostly agree with Thorstad's conclusions about Scott's views on HBD, but whilst that makes me distrust Scott's political judgement, it doesn't affect my (positive) view of the good stuff Scott has written about largely unrelated topics like whether antidepressants work, or the replication crisis.
I'd also say that the significance of Scott sometimes pushing back against HBD stuff is very dependent on why he pushes back. Does he push back because he thinks people are spreading harmful ideas? Or does he push back because he thinks that if the blog becomes too associated with taboo claims it will lose influence, or bring him grief personally? The former would perhaps indicate unfairness in Thorstad's portrayal of him, but the latter certainly would not. In the leaked email (which I think is likely genuine, or he'd say it wasn't, but of course we can't be 100% sure) he does talk about strategising to maintain his influence with liberals on this topic. My guess, as a long-time reader, is that it's a bit of both. I don't think Scott is sympathetic to people genuinely wanting to hurt Black people, and I'm sure there are Reactionary claims about race that he thinks are just wrong. But he's also very PR-conscious on this topic, in my view. And it's hard to see why he's had so many HBD-associated folk on his blogroll if he doesn't want to quietly spread some of the ideas.
It's easy for both to be true at the same time, right? That is, skeptics tone it down within EA, and believers tone it down when dealing with people *outside* EA.
As the article in The Critic itself points out, it is hardly surprising that a group that is disproportionately made up of young, single men is more criminal than the general population, since young men are overwhelmingly more criminal than anyone else, and single men are plausibly worse still. It's not clear what this tells us about immigrants even from Syria or Afghanistan, let alone anywhere else, if we control for that. My guess, for what it's worth, is that they will still have higher crime rates even if you control for that, if they are Syrians (I don't know about Afghans; I suspect more positive selection there), but you'd need to actually look.
Can you be more specific about what right-coded stuff you want OP to fund that they aren't?
I feel like, on the one hand, I have no problem with GV not funding certain right-coded things where I think the ideas are genuinely bad for more or less the standard reasons socially liberal people don't like right-wing things, and that's also what GV thinks. But on the other hand, if the issue is (as I somewhat suspect) more like "Dustin doesn't want to fund stuff that looks bad to influential people in the Democratic Party because he doesn't want to lose influence, regardless of whether he personally thinks that stuff is bad", that seems a lot dodgier.
I suspect that it is either the second, bad, influence-maxing thing, or something else, since I doubt people are actually going to OP demanding funding for HBD-type stuff or "investigate whether women being allowed to have jobs is bad"*. But maybe intelligence-enhancement stuff, minus any HBD connection, is a more plausible case of genuine ideological disagreement between GV and people who might want GV funding?
*I'm not making this one up; it is a real right-Rationalist (or former Rationalist) take: I saw Roko say it on Twitter.
I think Thorstad has written very good stuff - for example, his work on arguments for small reductions in extinction risk. More politically, his reporting on the racism of Scott Alexander and some other figures connected to the community is a useful public service, and he has every right to be pissed off [EDIT: the sentence originally ended here; I meant to say he has every right to be pissed off at people ignoring or disparaging the racism stuff]. I don't even necessarily entirely disagree with the meta-level critique being offered here.
But it was still striking to me that someone responded to the complaint that people making the institutional critique tend not to have much in the way of actionable information, and tend to take a "let me explain why these people came to their obviously wrong views" tone, by posting a bunch of stuff that was mostly like that.
If my tone is sharp, it's also because, like Richard, I find the easy, unthinking combination of "the problem with these people is that they don't care about changing the system" with "why are they doing meat alternatives and not vegan outreach aimed at a particular ethnic group that makes up <20% of the population, or animal shelters" genuinely, enragingly hypocritical and unserious. That's actually somewhat separate from whether EAs are insufficiently sympathetic to anticapitalist or "social justice"-coded ideas.
Incidentally, while I agree with Jason that "Moskovitz and Tuna ought to be able to personally decide where nearly all the money in the movement is spent" is the weird claim that needs defending, my guess is that at least one practical effect of this has been to pull the movement left, not right, on several issues. Open Phil spent money on anti-mass-incarceration stuff, and vaguely left-coded macroeconomic policy stuff, at a time when the community was not particularly interested in either of those things. Indeed, I remember Thorstad singling out critiques of the criminal justice stuff as examples of the community holding left-coded stuff to a higher standard of proof. More recently, you must have seen the rationalist complaints on the forum about how Open Phil won't fund anything "right-coded". None of that's to say there are no problems in principle with unaccountable billionaires, of course. After all, our other major billionaire donor was SBF! (Though his politics wasn't really the issue.)
Yeah, I suppose that is fair.
I'm not sure any of these except maybe the second actually answer the complaints Richard is making.
The first linked post here seems to defend, or at least be sympathetic to, the position that encouraging veganism specifically among Black people in US cities is somehow more an attempt at "systemic change" with regard to animal exploitation than working towards lab-grown meat (the whole point of which is that it might end up replacing farming altogether).
The third post is mostly not about the institutional critique at all, and the main thing it does say about it is just that longtermists can't respond to it by saying they only back interventions that pass rigorous GiveWell-style cost-benefit analysis. Which is true enough, but does zero to motivate the idea that there are good interventions aimed at institutional change available. Thorstad does also say "well, haven't anti-oppression mass movements done a whole lot of good in the past; isn't it a bit suspicious to think they've suddenly stopped doing so?". Which is a good point in itself, but fairly abstract, and doesn't actually do much to help anyone identify what reforms they should be funding.
The fourth post is extraordinarily abstract: the point seems to be that a) we should pay more attention to injustice, and b) people often use abstract language about what is rational to justify injustice against oppressed groups. Again, this is not very actionable, and Thorstad's post does not really mention Crary's arguments for either of these claims.
I think this goes some way to vindicating Richard's complaint that not enough specific details are given in these sorts of critiques, rather than undermining it, actually (though only a little: these are short reviews, and may not do the stuff being reviewed justice).
In fairness, you could consistently think "billionaires are biased against interventions which are justified via premises that make 'the system'/billionaires sound bad" without believing we should abolish capitalism. The critique could also be pointing to a real problem, and maybe one that could be mitigated in various ways, even if "abolish the system" is not a good idea. (Not a comment either way on whether your criticism of the versions of the institutional critique that have actually been made is correct.)
Firstly, it's not really me you should be thanking; it's not my project, I am just helping with it a bit.
Secondly, it's just another version of this, so don't expect any info about funding beyond an update to the funding info in this: https://www.alignmentforum.org/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety
"pwning the childless cat ladies" - I know this is just a joke in passing and not the point of the paper, but it is sexist (in the sense that it comes off as hostile to women, or at least to gender-nonconforming women), and sexism should be avoided for both substantive and PR reasons.
Even if the conscious states in humans are more intense, it doesn't necessarily follow that consciousness makes them more intense. Probably some of these people would respond to you as follows: more intense states have more influence in the brain, and so in humans in particular they are more likely to attract the attention of introspective mechanisms and become conscious; but in animals without introspection, having more influence does not mean being conscious, because there is no introspective mechanism whose attention can be attracted. (I am improvising here somewhat; I've never seen this combination of views specifically.)
I think that Dennett probably said inconsistent things about this over time.
People who deny animal consciousness are often working with a background assumption that anything can in principle be perceived unconsciously, and that in practice loads of unconscious representation goes on in the human brain. It's not clear what use a conscious pain is over and above an unconscious perception of bodily damage.
I'm working on a "who has funded what in AI safety" doc. Surprisingly, when I looked up Lightspeed Grants online (https://lightspeedgrants.org/) I couldn't find any list of what they funded. Does anyone know where I could find such a list?
I haven't read the paper, but a simple objection is that you're never going to be certain your actions only have finite effects, because you should only assign credence 0 to contradictions. (I don't actually know the argument for the latter, but some philosophers believe it.) So you have to deal with the very, very small but not literally 0 chance that your actions will have an infinitely good/bad outcome because your current theories of how the universe works are wrong. However, anything with a chance of bringing about an infinitely good or bad outcome has an infinite expected value or an undefined one. So unless all expected values are undefined (which brings its own problems), you have to deal with infinite expected values, which is enough to cause trouble.
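To make the expected-value step concrete, here is a minimal sketch (the credence $\varepsilon$ and the finite value $v$ are made-up placeholders for illustration, not figures from the paper). If you assign any nonzero credence $\varepsilon$ to an infinitely good outcome, then

$E[V] = \varepsilon \cdot (+\infty) + (1-\varepsilon) \cdot v = +\infty \quad \text{for any } \varepsilon > 0,$

and if you also assign nonzero credence to an infinitely bad outcome, the sum involves $(+\infty) + (-\infty)$ and is undefined. Either way, ordinary expected-value comparisons between actions stop behaving.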