Yes. But then, shouldn’t all arguments about what it is appropriate for EAs to do generalize to what it is appropriate for everyone to do? Isn’t that the fundamental claim of the EA philosophy?
I don’t think so. I took your argument above to be a case for effective altruism to grow, with that growth driven primarily by people who go into earning to give. That doesn’t mean everyone should earn to give. If effective altruism grew indefinitely, there would come a point of diminishing marginal returns for more earning to give relative to the other options people could pursue. Your argument makes the case this is true for the relative proportion of earning to give within effective altruism, but it also seems to me to imply the absolute amount of earning to give in the world should grow as well. It doesn’t imply, however, that 50% of the people who could earn to give should, nor that everyone should do what effective altruism prescribes now. If effective altruism became a community of, say, tens of millions of people, what it would have the marginal person do would likely look much different from what it recommends now. I believe the fundamental claim of the EA philosophy isn’t that its arguments necessarily generalize to everyone, but that they generalize to the marginal, i.e., next, person who adopts effective altruism. What that generalization looks like changes as the number of effective altruists grows. However, effective altruism is still very far from the size at which it would have to change its recommendations to the average or marginal community member.
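To make the diminishing-returns point concrete, here’s a toy sketch. The square-root value curve and every number in it are assumptions invented purely for illustration; the only point is that the same philosophy gives the marginal person different advice at different community sizes:

```python
import math

# Toy illustration of the diminishing-returns point above. Everything here
# is an invented assumption: suppose the value of the community's pooled
# donations grows like the square root of the number of earners, while one
# more person doing direct work always adds a constant 5 units of value.

DIRECT_WORK_VALUE = 5.0  # assumed constant value of one more direct worker

def value_of_donations(n_earners: int) -> float:
    """Assumed concave value of n people earning to give (toy curve)."""
    return 1000.0 * math.sqrt(n_earners)

def marginal_value_of_earning(n_earners: int) -> float:
    """Extra value added by the *next* person who earns to give."""
    return value_of_donations(n_earners + 1) - value_of_donations(n_earners)

for n in (100, 1_000, 100_000):
    mv = marginal_value_of_earning(n)
    advice = "earn to give" if mv > DIRECT_WORK_VALUE else "do direct work"
    print(f"{n:>7,} earners: next person adds {mv:5.2f} units -> {advice}")
```

Under these made-up numbers the advice flips somewhere around ten thousand earners, which is the sense in which current recommendations are marginal rather than universal.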
If I understand you correctly, I think you make two interesting points here:
1) the potential of EA as a political vehicle for financial charity
2) that the current EA advice has to be the marginal advice
When I wrote “isn’t that the fundamental claim of EA”, I suppose I am more properly referring to the claims that 1) EA is a suitable moral philosophy and 2) the consensus answers in the real existing EA community correspond to this philosophy. In other words, that EA is, broadly speaking, “right” to do.
I’m going to address both of your questions with one answer. Effective altruism is sort of a moral philosophy, but it isn’t as complete or formalized a system as most religious deontologies, utilitarianism, or other forms of consequentialism or deontology. Virtue ethics is like effective altruism in that it runs on heuristics rather than the principles of deontology or the calculations of utilitarianism. I think virtue ethics and effective altruism are similar in that both try to output recommendations in a way that is amenable to human psychology. However, for its own heuristics, virtue ethics has thousands of years of ancient and modern philosophy from every civilization to build upon and learn from. Effective altruism is new.
There are three types of ethics in formal/academic philosophy: normative ethics, the ethics of what people should do generally; practical ethics, the ethics of what people should do in specific and applied scenarios; and meta-ethics, the philosophy and analysis of ethics as a discipline in its own right. When anyone thinks of an ethical system, or “philosophy”, such as Kant’s categorical imperative, preference utilitarianism, or Protestant ethics, it’s almost always a system of normative ethics. Because effective altruism is so different, trying to mimic science in so many ways to figure out how to achieve existing goals, and accommodating whatever normative system people used to arrive at their moral goals so long as they converge on the same goals, it seems more like a system of practical than normative ethics. This makes it difficult to compare to other moral systems. That effective altruism seems to lack a way of determining which moral goals are worth pursuing is a fair criticism leveled at the philosophy in the past, and one philosophers like Will MacAskill and Peter Singer are researching how to solve without forcing effective altruism to conform to one normative framework. That seems to be the role of meta-ethics in effective altruism. As it grows, though, effective altruism is becoming less theoretical or normative in its formulation. It’s a movement started by philosophers which may, in fulfilling its goals, become less philosophical and more pragmatic.
That’s a challenge, and a unique one. Effective altruism seems a suitable moral philosophy to me, for more reasons than the fact it can be made consistent with other ethical worldviews, whether deontological or consequentialist, religious or secular. From a practical perspective, I think effective altruism is “right”, but because it’s so odd among intellectual movements, I’m not sure what to compare it to.
“That effective altruism seems to lack a way of determining which moral goals are worth pursuing … That seems to be the role of meta-ethics in effective altruism.”
Maybe the answer is not to be found in meta-ethics or in analysis generally, but in politics, that is, the raw realities of what people believe and want at any given moment, and how consensus forms or doesn’t.
In other words, I think the answer to “what goals are worth pursuing” is, broadly, to ask the people you propose to help what they want. Luckily, this happens regularly in all sorts of ways, including global-scale surveys. This is part of what the value of “democracy” means to me.
Horst Rittel, who also coined the term “wicked problem”, wrote a wonderful essay on the relationship between planning for solving social problems and politics that seems appropriate here: http://www.cc.gatech.edu/~ellendo/rittel/rittel-reasoning.pdf
tl;dr some kinds of knowledge are instrumental, but visions for the future are unavoidably subjective and political.
I’m not averse to such an approach. I think the criticism of how effective altruism determines a consensus on what defines or philosophically grounds “the good” comes from philosophers or other scholars who are wary of populist consensus on ethics when it’s in no way formalized. I’m bringing in David Moss to address this point; he’ll know more.
<Maybe the answer is not to be found in meta-ethics or in analysis generally, but in politics, that is, the raw realities of what people believe and want at any given moment, and how consensus forms or doesn’t.
In other words, I think the answer to “what goals are worth pursuing” is, broadly, to ask the people you propose to help what they want. Luckily, this happens regularly in all sorts of ways, including global-scale surveys.>
I guess it depends on what you mean by “what people believe and want at any given moment.” If you interpret this as the results of a life satisfaction survey, or maximising preferences, or something like that, then the result will look pretty much like standard consequentialist EA.
If you mean something like the output of people’s decisions based on collective deliberation, e.g. what a community collectively decides it wants as the result of a political process, then it might be (probably will be) something totally different to what you would get if you were trying to maximise preferences.
Which of these is closest to the thing meant?
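To show those two readings can genuinely come apart, here’s a toy sketch; the people, options, scores, and the use of a majority vote as a crude stand-in for deliberation are all invented for illustration:

```python
# Toy contrast between the two readings above; all numbers are invented.
# Three people choose between two uses of aid money, option A and option B.
# Each entry is one person's satisfaction score for each option.
preferences = {
    "person 1": {"A": 10, "B": 0},
    "person 2": {"A": 1, "B": 2},
    "person 3": {"A": 1, "B": 2},
}
options = ("A", "B")

# Reading 1: maximise aggregate preferences (the survey/consequentialist route).
totals = {o: sum(p[o] for p in preferences.values()) for o in options}
survey_choice = max(options, key=totals.get)

# Reading 2: a collective political process, crudely modelled as a majority
# vote over each person's favourite option (real deliberation is far richer).
votes = {o: sum(1 for p in preferences.values() if p[o] == max(p.values()))
         for o in options}
vote_choice = max(options, key=votes.get)

print(f"preference-maximising choice: {survey_choice} (totals: {totals})")
print(f"majority-vote choice: {vote_choice} (votes: {votes})")
# -> A wins on total preference satisfaction, B wins the vote.
```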