The problem with this definition is that someone who did absolutely nothing to help others could hold that belief and qualify. That seems quite strange and confusing. At the EA Summit this year I proposed the following on my slides:
Possible standards
‘Significant’ altruism. One of:
Money: 10% of income or more?
Time: 10% of hours or more?
Or a ‘significant’ change in career path?
Open-mindedness
Willing to change beliefs in response to evidence
Cause neutral: If given good reasons to believe that a different cause area will do more good, will switch to that
Must hold a ‘reasonable’ view of what is good (no Nazi clause)
Read more: https://drive.google.com/file/d/0B8_48dde-9C3WUVkTGdoUEliQ0E/view?usp=sharing
Well, zero can often be an awkward edge case, but we don’t really need a definition to tell us that someone who does nothing for others isn’t an effective altruist. However, someone who does only a small amount for others can still be a very important part of the extended community if they’re giving that small amount to highly effective causes. Take Peter Thiel, who seems to give <1%, or think of Richard Posner or Martin Rees, who have made huge contributions to the GCR-reduction space over many years using only a small fraction of their working hours.
On a related note, a lot of people think like effective altruists but don’t act on it. I’ve found that it can be dangerous to write these kinds of people off, because often you meet them again a few years later and find they’ve taken concrete action, donating their time or other resources to help others.
Lastly, I worry about the whole notion of applying ‘standards’ of effective altruism. The whole approach seems wrong-headed. It doesn’t feel useful to try to appraise whether people are sufficiently generous or “open-minded” or “willing to update” to “count” as “one of us”. It’s pretty common for people to say to me that they’re not sure whether they “count” as an effective altruist. But that’s obviously not what it’s about, and I think we should be taking a loud and clear message from these kinds of cases that we’re doing something wrong.
I think this is exactly right. Encouraging people to do more is of course great, but while in theory excluding people for not meeting a certain standard might nudge people up to that standard, I think in practice it’s likely to result in a much smaller movement. Positive feedback for taking increasingly significant actions seems like a better motivator.
If we did spread the idea of effectiveness very widely but it didn’t have a high level of altruism attached to it, I think that would already achieve a lot, and I think it would also be a world in which it was easier to persuade many people to be more altruistic.
“while in theory excluding people for not meeting a certain standard might nudge people up to that standard, I think in practice it’s likely to result in a much smaller movement.”
What makes you think that? I just have no idea which of the effects (encouraging people to do more; discouraging them from taking a greater interest) dominates.
Thanks for asking this question. I found it helpful to introspect on my reasons for thinking this.
Roughly, I picture a model where I have huge uncertainty over how far the movement will spread (~5 orders of magnitude), and merely large uncertainty over how much the average person involved will do (<2 orders of magnitude). This makes it more important right now to seek percentage improvements in movement breadth than in commitment. Additionally, the growth model likely includes an exponential component, so nudging up the growth rate has compounding effects.
To put that another way, I see a lot of the expected value of the movement coming from scenarios where it gets very big (even though these are unlikely), so it’s worth trying to maximise the chance of that happening. If we get to a point where it seems highly likely that the movement will become very big, it seems more worthwhile to start optimising value per person.
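The claim that most of the expected value sits in the unlikely very-big scenarios can be made concrete with a quick Monte Carlo sketch. Note the distributions and ranges here are purely my illustrative assumptions (log-uniform spreads of 5 and 2 orders of magnitude, matching the numbers above), not anything the comment itself specifies:

```python
import random

random.seed(0)

# Toy model: total impact = (movement size) x (average commitment per person).
# Both uncertainties are modelled as log-uniform purely for illustration.
N = 100_000

def sample_log_uniform(lo_oom, hi_oom):
    """Draw a value whose base-10 logarithm is uniform on [lo_oom, hi_oom]."""
    return 10 ** random.uniform(lo_oom, hi_oom)

sizes = [sample_log_uniform(2, 7) for _ in range(N)]        # ~5 OOM of spread
commitments = [sample_log_uniform(0, 2) for _ in range(N)]  # ~2 OOM of spread

totals = [s * c for s, c in zip(sizes, commitments)]

# Fraction of expected impact coming from the 'movement gets very big'
# scenarios: only the top order of magnitude of size, i.e. the top fifth
# of the log-range, yet it carries almost all of the expected value.
big = sum(t for s, t in zip(sizes, totals) if s > 10**6)
print(f"share of EV from very-big scenarios: {big / sum(totals):.2f}")
```

With these assumptions the top fifth of the log-range of movement size (which a size sample lands in only ~20% of the time) accounts for roughly 90% of expected impact, which is the sense in which breadth uncertainty dominates.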
Two caveats here:
(i) It might be that demanding standards will help growth rather than hinder it. My intuition says not, and that it’s important to make the drivers feel positive rather than negative, but I’m not certain.
(ii) My reasoning suggests paying more attention at the margin to the effects on growth, relative to individual altruism, than you might first think, but it doesn’t say the ratio is infinite. We should err in both directions, taking at least some actions which push people to do more even if they hinder growth. The question is where we are on the spectrum right now. My perception is that we’re already making noticeable trade-offs in this direction, so perhaps going too far, but I might be persuadable otherwise.
I have a different reason for thinking this is true, which involves fewer numbers and more personal experience and intuition.
Having a high standard—either you make major changes in your life or you’re not an effective altruist—will probably fail because people aren’t used to, or willing to, make big, sudden changes in their lives. It’s hard to imagine donating half your income from the point of view of someone currently donating nothing; it’s much easier to imagine doing that if you’re already donating 20% or 30%. When I was first exposed to EA, I found it very weird and vaguely threatening, and I could definitely not have jumped from that state to earning to give. Not that I have since gone that far, but I do donate 10%, and the idea of donating more is at least conceivable. Even if you mostly care about the number of people who end up very highly committed, having low or medium standards gives people plausible first steps on a ladder towards that state.
As an analogy, take Catholics and nuns. There are many Catholics and very few nuns, and even fewer of those nuns were people who converted to Catholicism and then immediately became nuns. If there was no way to be Catholic except being a nun, the only people who could possibly be nuns would be the people who converted and then immediately became nuns.
Giving What We Can finds that the 10% bar is tough for people but not unimaginable. Certainly we shouldn’t be unfriendly to people who don’t do that—I’m not unfriendly even to people who don’t do anything to help strangers—but we could set it as the bar people should aspire to, and a bar most people in the community are achieving most of the time.
Yeah, it’s also a useful observation that most charities, when talking to the general public, ask for a small regular commitment of funds, like $30 per month. If you’re asking people who already identify as effective altruists, it might make sense to ask for more, but if you’re approaching new people, this would seem like a sensible starting point.
Whether it’s good to have a ‘standard’ is certainly unclear. But if we do, I don’t think it can relate only to beliefs rather than actions.
Compare: “A Christian is someone who believes that in order to be a good Christian you should do X, Y and Z.” “An environmentalist is someone who thinks that if you wanted to help the environment, you would do X, Y, Z.”
Well, an effective environmentalist can be someone whose environmentalism is effective. Likewise, an evangelical Christian could be someone who is evangelical in their Christianity. You could argue that an evangelical Christian only counts as such if they spend 2% of their time on a soapbox or knock on fifty doors per week but that would be extreme. Can’t an (aspiring) effective altruist just be someone whose altruism is as effective as they can make it?
A stronger option is:
“Someone who believes that to be a good altruist, you should use evidence and reason to do the most good with your altruistic actions, and puts at least some time or money behind the things they therefore believe will do the most good”
Your examples don’t track my statement of what is required (merely having a belief about the definition of the term ‘good Christian’).
“(*) To be a good altruist, you should use evidence and reason to do the most good with your altruistic actions.”
What about someone who believes this but engages only in ineffective altruism because they don’t care much about being a ‘good altruist’? I can see there being many people like this. They realise that to be a ‘good altruist’ they should maximise their cost-effectiveness, and they find it an interesting research area, but all of their actual altruism relates to people they know, causes they’re personally invested in but that aren’t terribly helpful, etc.
Ah, I see. You were thinking about the kind of attributes involved in affiliation: e.g. self-identification, belief, general action or specific stipulated actions.
I was arguing along a different axis—whether it would be better to restrict the standard to the domain of altruism or make it unrestricted.