That sentence you quoted doesn’t exhaust my normativity, but marks the extent of it that motivates my interest in EA. The word ‘maximally’ is very unclear here; I mean maximally internal to my giving, not throughout every minutia of my consciousness and actions.
The issue I wanted to raise was several-fold: that very many effective altruists take it as obvious and unproblematic that utilitarianism does exhaust human value, which is reinforced by the fact that almost no one speaks to this point; that it seriously affects the evaluation of outcomes (e.g. the x-risk community, including if not especially Nick Bostrom, speaks with a straight face about totalitarianism as a condition of controlling nanotechnology and artificial intelligence); and that it shapes the tactics for satisfying those outcomes.
In regard to the last point, in response to a user suggesting that we should reshape our identity, presentation and justification when speaking to conservatives, in order to effectively bring them to altruism, I posted:
“I find this kind of rationalization—subordinating one’s ethics to what can effectively motivate people to altruism—both profoundly conservative and, to some extent, undignified and inhuman, i.e. the utility slave coming full circle to enslave their own dictate of utility maximisation.”
That kind of thinking, however, is extremely common.
In response to your second paragraph:
“It just seems obvious to me that, all other things equal, helping two people is better than helping one.”
This simply begs the question: “helping” and “people” are heavily indeterminate concepts, the imputation of content to which is heavily consequential for the action-guidance that follows.
“If various moral theories favoured by academics don’t reach that conclusion, then so much worse for them; if they do reach that conclusion, then all the better. And in the latter case, the precise formulations of the theories matter very little to me.”
I find this perhaps guilty of wishful thinking; insofar as it would be nice if the natural structure of the world contained an objective morality dovetailing with my historically specific intuitions and attitudes, that doesn’t itself vindicate it as such. More often than not, it is the imposition of the latter on the former that occurs. Something seeming obvious to oneself isn’t a premise for its truth.
If you follow the history of utilitarianism, it is a history of increasing dilution: from the moral naturalism of Bentham’s conception of a unified human good psychologically motivating all human action, to Mill’s pluralising of that good, to Sidgwick’s wholesale rejection of naturalism and value commensurability and his argument that the only register of independent human valuation is mere intuition, to Moore’s final reductio of the tradition in Principia Ethica (‘morality consists in a non-natural good, whatever I feel it to be, but by the way, aesthetics and interpersonal enjoyment are far and away superior’). Suffice it to say that nearly all utilitarians are intuitionists today, which I honestly can’t take seriously as an independent reason for action, and which is a standard by which utilitarianism sowed its own death—any and all forms of utilitarianism entail serious counter-intuition. Hence the climb of Rawls and liberal egalitarianism to predominance in the academy; it simply better satisfies the historical values and ideology of the here and now.
My philosophical background is that of the physics stereotype that utterly loathes most academic philosophy, so I’m not sure if this discussion will be all that fruitful. Still, I’ll give this a go.
“This simply begs the question: ‘helping’ and ‘people’ are heavily indeterminate concepts, the imputation of content to which is heavily consequential for the action-guidance that follows.”
At some pretty deep level, I just don’t care. I treat statements like “It is better if people get vaccinated” or “It is better if people in malaria-prone areas sleep under bednets” as almost axiomatic, and that’s my starting point for working out where to donate. If there are lots of philosophers out there who disagree, well, that’s disappointing to me, but it’s not really so bad, because there are plenty of non-philosophers out there.
“Suffice it to say that nearly all utilitarians are intuitionists today, which I honestly can’t take seriously as an independent reason for action, and which is a standard by which utilitarianism sowed its own death—any and all forms of utilitarianism entail serious counter-intuition.”
The utilitarian bits of my morality do certainly come out of intuition, whether it’s of the “It is better if people get vaccinated” form or from considering amusingly complicated trolley problems as in Peter Unger’s Living High and Letting Die. And when you carry through the logic to a counter-intuitive conclusion like “You should donate a large chunk of your money to effective charity”, then I bite that bullet and donate; and when you carry through the logic to conclude that you should cut up an innocent person for their organs, I say “Nope”. I don’t know anyone who strictly adheres to a pure form of any moral system; I don’t know of any moral system that doesn’t throw up some wildly counter-intuitive conclusions; I am completely OK with using intuition as an input to judging moral dilemmas; and I don’t consider any of this a problem.
“it seriously affects the evaluation of outcomes (e.g. the x-risk community...)”
Yeah, the presence of futurist AI stuff in the EA community (and also its increasing prominence) is a surprise to me. I think it should be a sort of strange cousin, a group of people with a similar propensity to bite bullets as the rest of the EA community, but with some different axioms that lead them far away from the rest of us.
If you want to say that this is a consequence of utilitarian-type thinking, then I agree. But I’m not going to throw out cost-effectiveness calculations and basic axioms like “helping two people is better than helping one” just because there are people considering world dictators controlling a nano-robot future or whatever.