I’d say scope sensitive ethics is a reinvention of EA.
This doesn’t seem quite right, because ethical theories and movements/ideologies are two different types of things. If you mean to say that scope sensitive ethics is a reinvention of the ethical intuitions which inspired EA, then I’m happy to agree; but the whole point of coining the term is to separate the ethical position from other empirical/methodological/community connotations that EA currently possesses, and which to me also seem like “core ideas” of EA.
Hi Richard,
That makes sense—it could be useful to define an ethical position that’s separate from effective altruism (which I’ve been pushing to be defined as a practical and intellectual project rather than an ethical theory).
I’d be excited to see someone try to develop it, and would be happy to try to help if you do more in this area.
In the early days of EA, we actually toyed with a similar idea, called Positive Ethics—an analogy with positive psychology—which aimed to be the ethics of how to best benefit others, rather than more discussion of prohibitions.
I think my main concern is that I’m not sure there’s enough space in public awareness between EA, global priorities research, and consequentialism for another field. (E.g. I also think it would be better if EA were framed more in terms of ‘let’s be scope sensitive’ rather than the other connotations you mention.) But it could be interesting to write more about the idea and see where you end up.
PS If you push ahead more, you might want to frame it as also a core ethical intuition in non-utilitarian moral theories, rather than presenting it mainly as a more acceptable, watered-down utilitarianism. I think one of the exciting things about scope sensitivity is that it’s a moral principle that everyone should agree with, but also has potentially radical consequences for how we should act.