Non-profit boards have 100% legal control of the organisation – they can do anything they want with it.
If you give people who aren’t very dedicated to EA values legal control over EA organisations, they won’t be EA organisations for very long.
There are under 5,000 EA community members in the world – most of them have no management experience.
Sure, you could give up 1⁄3 of the control to people outside of the community, but this doesn’t solve the problem (it only reduces the number of board members who need to come from the community by 1⁄3).
The assumption that this 1⁄3 would come from outside the community seems to rely on the premise that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1⁄3 would come from outside what Jack called “high status core EAs.”
Sorry, that’s what I meant. I was saying there are 5,000 community members. If you want the board to be controlled by people who are actually into EA, then you need 2⁄3 to come from something like that pool. Another 1⁄3 could come from outside (though not without risk). I wasn’t talking about what fraction of the board should have specific expertise.
Another clarification: what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community or self-identify as EA. Those things are at best rough heuristics for the first thing. I was using the number of community members as a rough indication of how many people exist who actually apply the principles – I don’t mind whether they actively participate in the community or not, and think it’s good if some people don’t.
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not with fully aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give after following 80k’s advice 10 years ago.
Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who have: 1) enough overlap with EA thinking (because EA isn’t 100% original, after all) to give them a reasonable starting point; 2) more relevant leadership experience and demonstrably good judgement; and, linked to the previous two, 3) enough maturity in their opinions and/or achievements to be less susceptible to herding.
If you think that EA orgs won’t remain EA orgs unless you appoint “value aligned” people, it implies our arguments aren’t strong enough for the people we think should be convinced by them. If that’s the case, that’s a good indicator the argument might not be that good and is worth reconsidering.
To be concrete, I expect a board of 50% card-carrying EAs and 50% experienced, high-achieving non-EAs with a good understanding of similar topics (e.g. x-risk, evidence-based interventions) to appraise arguments about which higher- or lower-risk options to fund much better than a board of 100% EAs with the same epistemic and discourse background and limited prior career/board experience.
I agree that there’s a broader circle of people who get the ideas but aren’t “card carrying” community members, and having some of those on the board is good. A board definitely doesn’t need to be 100% self-identified EAs.
Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing.
This said, I think there are surprisingly few people out there like that. And due to the huge scope differences in the impact of different actions, there can be big differences between what someone who is, say, 60% into applying EA principles would do and what someone who is 90% into it would do (using a made-up scale).
I think a thing that wouldn’t make sense is for, say, Extinction Rebellion to appoint people to their board who “aren’t so sold on climate change being the world’s biggest problem”. Due to the point above, you can end up in a situation that feels like this more quickly than it first seems.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and often by me as well). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there’s a lot more to unpack with “value alignment” and what it means in reality vs. what we ostensibly say it means.
Also, to tackle your response (and maybe I’m reading between the lines too hard / being too harsh on you here), I feel there’s goalpost-shifting between your original post about EA value alignment and your statement now that people who understand the broader principles are also “value aligned”.
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give people permission and an incentive to bring them.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Yes—but the issue plays itself out one level up.
For instance, most people aren’t very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.
I think scope sensitivity is a key part of effective altruism, so appointing people who are less scope sensitive to boards of EA orgs is similar to XR appointing people who are less concerned about climate change.
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and often by me as well).
I agree and think this is bad. Another common problem is interpreting agreement on what causes & interventions to prioritise as ‘value alignment’, whereas what actually matters are the underlying principles.
It’s tricky because I think these things do at least correlate with the real thing. I don’t feel like I know what to do about it. Besides trying to encourage people to think more deeply, perhaps trying one or two steps harder to work with people one or two layers out from the current community is a good way to correct for this bias.
Also, to tackle your response (and maybe I’m reading between the lines too hard / being too harsh on you here), I feel there’s goalpost-shifting between your original post about EA value alignment and your statement now that people who understand the broader principles are also “value aligned”.
That’s not my intention. I think a strong degree of wanting to act on the values is important for the majority of the board. That’s not the same as self-identifying as an EA, but merely understanding the broad principles is also not sufficient.
(Though I’m happy if a minority of the board are less dedicated to acting on the values.)
(Another clarification from earlier is that it also depends on the org. If you’re doing an evidence-based global health charity, then it’s fine to fill your board with people who are really into global health. I also think it’s good to have advisors from clearly outside of the community – they just don’t have to be board members.)
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give people permission and an incentive to bring them.
I agree and this is unfortunate.
To be clear I think we should try to value other perspectives about the question of how to do the most good, and we should aim to cooperate with those who have different values to our own. We should also try much harder to draw on operational skills from outside the community. But the question of board choice is firstly a question of who should be given legal control of EA organisations.
Now having read your reply, I think we’re likely closer together than apart on views. But...
But the question of board choice is firstly a question of who should be given legal control of EA organisations.
I don’t think this is how I see the question of board choice in practice. In theory, yes, for the specific hard legal mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may trade off against putting legal control in the ‘safest pair of hands’.
That said, I feel back-and-forth responses on the EA Forum may be exhausting their value here; I’d have more to say in a brainstorm about the potential trade-offs between legal control and the ability to check and challenge, and I’m open to discussing further if that would be helpful for some concrete issue at hand :)
Two quick points:
1) Yes, legal control is the first consideration, but governance requires skill, not just value alignment.
2) I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique.