I feel like there might be two things going on here:
an abstract argument that you need some altruism before you can make it effective. This would have a threshold, but probably not a very high one.
a feeling like there’s some important ingredient in the beliefs held by the cluster of people who associate with the label EA, which speaks to what their moral circles look like (at least moderately broad, but also probably somewhat narrowed in the sense of https://gwern.net/narrowing-circle).
I in fact would advocate some version of EA according-to-their-own values to pretty much everyone, regardless of the breadth of their moral circle. And it seems maybe helpful to be able to talk about that? But it’s also helpful to be able to talk about the range of moral circles that people around EA tend to feel good about. It could be nice if someone named these things apart.
“EA-according-to-their-own values”, i.e. E, is just instrumental rationality, right?
ETA: or maybe you’re instead thinking of something like actually internalizing/adopting their explicit values as ends, which does seem like an important separate step?
I meant “instrumental rationality applied to whatever part of their values is other-affecting”.
I think this is especially important to pull out explicitly relative to regular instrumental rationality, because the feedback loops are less automatic (so a lot of the instrumental rationality people learn by default is in service of their prudential goals).