Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also, I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as an EA, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and often by me too). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there’s a lot more to unpack about what “value alignment” means in practice versus what we say it means.
Also, to tackle your response (and maybe I’m reading between the lines too hard, or being too harsh on you here): I feel there’s some goalpost-shifting between your original post about EA value alignment and your statement now that people who understand the broader principles are also “value aligned”.
Another reflection: the more we speak about “value alignment” being important, the more we incentivise people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give people permission and an incentive to bring them.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Yes, but the same issue plays out one level up.
For instance, most people aren’t very scope sensitive, both in their intuitions and, especially, when it comes to acting on them.
I think scope sensitivity is a key part of effective altruism, so appointing people who are less scope sensitive to boards of EA orgs is similar to XR appointing people who are less concerned about climate change.
Also, I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as an EA, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and often by me too).
I agree, and I think this is bad. Another common problem is interpreting agreement on which causes and interventions to prioritise as ‘value alignment’, whereas the underlying principles are what actually matter.
It’s tricky, because I think these things do at least correlate with the real thing. I don’t feel like I know what to do about it. Besides trying to encourage people to think more deeply, perhaps trying a step or two harder to work with people one or two layers out from the current community is a good way to correct for this bias.
Also, to tackle your response (and maybe I’m reading between the lines too hard, or being too harsh on you here): I feel there’s some goalpost-shifting between your original post about EA value alignment and your statement now that people who understand the broader principles are also “value aligned”.
That’s not my intention. I think a strong degree of wanting to act on the values is important for the majority of the board. That’s not the same as self-identifying as an EA, but merely understanding the broad principles is also not sufficient.
(Though I’m happy if a minority of the board are less dedicated to acting on the values.)
(Another clarification from earlier is that it also depends on the org. If you’re doing an evidence-based global health charity, then it’s fine to fill your board with people who are really into global health. I also think it’s good to have advisors from clearly outside of the community – they just don’t have to be board members.)
Another reflection: the more we speak about “value alignment” being important, the more we incentivise people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give people permission and an incentive to bring them.
I agree and this is unfortunate.
To be clear, I think we should try to value other perspectives on the question of how to do the most good, and we should aim to cooperate with those who have different values to our own. We should also try much harder to draw on operational skills from outside the community. But the question of board choice is firstly a question of who should be given legal control of EA organisations.
Having now read your reply, I think our views are likely closer together than apart. But...
But the question of board choice is firstly a question of who should be given legal control of EA organisations.
I don’t think this is how I see the question of board choice in practice. In theory, yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may trade off against putting legal control in the ‘safest pair of hands’.
That said, I feel back-and-forth responses on the EA Forum may be exhausting their value here. I’d have more to say in a brainstorm about potential trade-offs between legal control and the ability to check and challenge, and I’m open to discussing further if that would be helpful for some concrete issue at hand :)
Two quick points:
1. Yes, legal control is the first consideration, but governance requires skill, not just value alignment.
2. I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project), and (b) people need to be willing to appoint outside their clique.