Hey Richard, I agree with this, and I like the framing.
I want to add, though, that these are basically the reasons why we created EA in the first place, rather than promoting ‘utilitarian charity’. The idea was that people with many ethical views can agree that the scale of effects on people’s lives matters, so it’s a point of convergence that many can get behind, while also getting at a key empirical fact that’s not widely appreciated (differences in scope are larger than people think).
So, I’d say scope sensitive ethics is a reinvention of EA. It’s a regret of mine that we’ve not done a great job of communicating that so far. It’s possible we need to try introducing the core idea in lots of ways to get it across, and this seems like a good one.
I’d say scope sensitive ethics is a reinvention of EA.
This doesn’t seem quite right, because ethical theories and movements/ideologies are two different types of things. If you mean to say that scope sensitive ethics is a reinvention of the ethical intuitions which inspired EA, then I’m happy to agree; but the whole point of coining the term is to separate the ethical position from other empirical/methodological/community connotations that EA currently possesses, and which to me also seem like “core ideas” of EA.
That makes sense—it could be useful to define an ethical position that’s separate from effective altruism (which I’ve been pushing to be defined as a practical and intellectual project rather than ethical theory).
I’d be excited to see someone try to develop it, and would be happy to try to help if you do more in this area.
In the early days of EA, we actually toyed with a similar idea, called Positive Ethics—an analogy with positive psychology—which aimed to be the ethics of how to best benefit others, rather than more discussion of prohibitions.
I think my main concern is that I’m not sure that, in public awareness, there’s enough space in between EA, global priorities research, and consequentialism for another field. (E.g. I also think it would be better if EA were framed more in terms of ‘let’s be scope sensitive’ rather than the other connotations you mention.) But it could be interesting to write more about the idea to see where you end up.
PS: If you push ahead with this, you might want to frame it as a core ethical intuition within non-utilitarian moral theories too, rather than presenting it mainly as a more acceptable, watered-down utilitarianism. I think one of the exciting things about scope sensitivity is that it’s a moral principle that everyone should agree with, but one that also has potentially radical consequences for how we should act.