Thanks for making the case. I’m not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I’m highly confident is false.
Namely—it isn’t hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I’ve recruited ~20 very good Board members in my career and have never run an open process that didn’t find at least some qualified, diligent people who did a good job.
EA makes it hard because it’s weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising ‘alignment’ (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements than the Boards of other well-run charities. Ironically, this typically produces very low quality EA Boards, mainly made up of inexperienced people without relevant professional skills, but who are thought of as ‘smart’ and ‘aligned’.
Of course, it will be hard to find new Board members right now, because CEA’s reputation is in tatters and few people will want to join an organisation that is under serious legal threat. But it seems at best a toss-up whether it’s worth keeping tainted Board member(s) because they might be tricky to replace, especially when they have recused themselves from literally the single biggest issue facing the charity.
And even if one really values “alignment,” I suspect that a board’s alignment is mostly that of its median member. That may have been less true at EVF where there were no CEOs, but boards are supposed to exercise their power collectively.
On the other hand, a board’s level of legal, accounting, etc. knowledge is not based on the mean or median; it is mainly a function of the most knowledgeable one or two members.
So if one really values alignment on say a 9-member board, select six members with an alignment emphasis and three with a business skills emphasis. (The +1 over a bare majority is to keep an alignment majority if someone has to leave.)
You seem to imply that it’s fine if some board members are not value-aligned as long as the median board member is. I strongly disagree: This seems a brittle setup because the median board member could easily become non-value-aligned if some of the more aligned board members become busy and step down, or have to recuse due to a COI (which happens frequently), or similar.
I’m very surprised that you think a 3-person Board is less brittle than a bigger Board with varying levels of value alignment. How do 3-person Boards deal with all the things you list that can affect Board makeup? They can’t, because the Board instantly becomes non-quorate.
I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average.
(I avoid the term “value-alignment” because different people interpret it very differently.)
On my 6/3 model, you’d need four recusals among the heavily aligned six (and zero among the other three) for the median member to be ‘other’, and three recusals for the median to sit between heavily aligned and other. If four of six members need to recuse on COI grounds, there are likely other problems with board composition at play.
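The recusal arithmetic above can be sketched in a few lines (an illustrative toy model only; the “aligned”/“other” labels and the 6/3 split are just the composition described in this thread, not a claim about any real board):

```python
# Toy model of the 6/3 board: who is the median member
# after some number of "aligned" members recuse?

def median_after_recusals(aligned=6, other=3, recused=0):
    """Return the median remaining member's label (or the middle
    pair when an even number of members remain)."""
    remaining = ["aligned"] * (aligned - recused) + ["other"] * other
    remaining.sort()  # "aligned" sorts before "other"
    mid = len(remaining) // 2
    if len(remaining) % 2 == 1:
        return remaining[mid]
    return (remaining[mid - 1], remaining[mid])

# Full board of 9: the median member is aligned.
# Three aligned recusals (3 vs 3): the median straddles the two groups.
# Four aligned recusals (2 vs 3): the median flips to "other".
```

This also illustrates the asymmetry noted above: the median flips only after several departures from the larger group, whereas specialist knowledge (a max, not a median) disappears as soon as the one or two knowledgeable members leave.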
Also, suggesting that alignment is not the “emphasis” for each and every board seat doesn’t mean that you should put misaligned or truly random people in any seat. One still should expect a degree of alignment, especially in seat seven of the nine-seat model. Just like one should expect a certain level of general board-member competence in the six seats with alignment emphasis.
I think 9-member boards are often a bad idea because they tend to have lots of people who are shallowly engaged, rather than a smaller number of people who are deeply engaged, tend to have more diffusion of responsibility, and tend to have much less productive meetings than smaller groups of people. While this can be mitigated somewhat with subcommittees and specialization, I think the optimal number of board members for most EA orgs is 3–6.
Non-profit boards have 100% legal control of the organisation – they can do anything they want with it.
If you give people who aren’t very dedicated to EA values legal control over EA organisations, they won’t be EA organisations for very long.
There are under 5,000 EA community members in the world – most of them have no management experience.
Sure, you could give up 1/3 of the control to people outside of the community, but this doesn’t solve the problem (it only reduces the need for board members by 1/3).
The assumption that this 1/3 would come from outside the community seems to rely on an assumption that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1/3 would come from outside what Jack called “high status core EAs.”
Sorry, that’s what I meant. I was saying there are 5,000 community members. If you want the board to be controlled by people who are actually into EA, then you need 2/3 to come from something like that pool. Another 1/3 could come from outside (though not without risk). I wasn’t talking about what fraction of the board should have specific expertise.
Another clarification, what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing. I was using the number of community members as a rough indication for how many people exist who actually apply the principles – I don’t mind if they actively participate in the community or not, and think it’s good if some people don’t.
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, rather than with fully aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give, having followed 80k’s advice 10 years ago.
Ultimately, if you think there is enough value in EA arguments about how to do good, you should be able to find smart people from other walks of life who: 1) have enough overlap with EA thinking (because EA isn’t 100% original, after all) to have a reasonable starting point; 2) have more relevant leadership experience and demonstrably good judgement; and, linked to the previous two, 3) are mature enough in their opinions and/or achievements to be less susceptible to herding.
If you think that EA orgs won’t remain EA orgs unless you appoint “value aligned” people, it implies our arguments aren’t strong enough for the people we think should be convinced by them. If that’s the case, that’s a good indicator that the arguments might not be that strong, and a reason to reconsider them.
To be concrete, I expect a board of 50% card-carrying EAs and 50% experienced, high-achieving non-EAs with a good understanding of similar topics (e.g. x-risk, evidence-based interventions) to appraise arguments about which higher- or lower-risk options to fund much better than a board of 100% EAs who share the same epistemic and discourse background and have limited prior career/board experience.
I agree that there’s a broader circle of people who get the ideas but aren’t “card carrying” community members, and having some of those on the board is good. A board definitely doesn’t need to be 100% self-identified EAs.
Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing.
This said, I think there are surprisingly few people out there like that. And due to the huge scope differences in the impact of different actions, there can be huge differences between what someone who is e.g. 60% into applying EA principles would do compared to someone who is 90% into it (using a made up scale).
I think a thing that wouldn’t make sense would be for, say, Extinction Rebellion to appoint people to their board who “aren’t so sold on climate change being the world’s biggest problem”. Due to the point above, you can end up in something that feels like this more quickly than it first seems.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there’s a lot more to unpack with “value alignment” and what it means in reality vs. what we say it ostensibly means.
Also, to tackle your response (and maybe I’m reading between the lines too hard / being too harsh on you here): I feel there’s goalpost shifting between your original post about EA value alignment and your now stating that people who understand the broader principles are also “value aligned”.
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Yes—but the issue plays itself out one level up.
For instance, most people aren’t very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.
I think scope sensitivity is a key part of effective altruism, so appointing people who are less scope sensitive to boards of EA orgs is similar to XR appointing people who are less concerned about climate change.
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often).
I agree and think this is bad. Another common problem is interpreting agreement on what causes & interventions to prioritise as ‘value alignment’, whereas what actually matters are the underlying principles.
It’s tricky because I think these things do at least correlate with the real thing. I don’t feel like I know what to do about it. Besides trying to encourage people to think more deeply, perhaps trying one or two steps harder to work with people one or two layers out from the current community is a good way to correct for this bias.
Also, to tackle your response (and maybe I’m reading between the lines too hard / being too harsh on you here): I feel there’s goalpost shifting between your original post about EA value alignment and your now stating that people who understand the broader principles are also “value aligned”.
That’s not my intention. I think a strong degree of wanting to act on the values is important for the majority of the board. That’s not the same as self-identifying as an EA, but merely understanding the broad principles is also not sufficient.
(Though I’m happy if a minority of the board are less dedicated to acting on the values.)
(Another clarification from earlier is that it also depends on the org. If you’re doing an evidence-based global health charity, then it’s fine to fill your board with people who are really into global health. I also think it’s good to have advisors from clearly outside of the community – they just don’t have to be board members.)
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.
I agree and this is unfortunate.
To be clear I think we should try to value other perspectives about the question of how to do the most good, and we should aim to cooperate with those who have different values to our own. We should also try much harder to draw on operational skills from outside the community. But the question of board choice is firstly a question of who should be given legal control of EA organisations.
Now having read your reply, I think we’re likely closer together than apart on views. But...
But the question of board choice is firstly a question of who should be given legal control of EA organisations.
I don’t think this is how I see the question of board choice in practice. In theory, yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may trade off against putting legal control in the ‘safest pair of hands’.
That said, I feel back-and-forth responses on the EA Forum may be exhausting their value here; I’d have more to say in a brainstorm about potential trade-offs between legal control and the ability to check and challenge, and I’m open to discussing further if helpful to some concrete issue at hand :)
Two quick points:
1) Yes, legal control is the first consideration, but governance requires skill, not just value alignment.
2) I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique.
Alignment is super-important for EA organisations (I would put it as priority number 1), because if you’re aligned with EA values then you’re at least trying to do the most good for the world, whereas if you’re not, you may not even be trying to do that.
Hi Robin—thanks for this and I see your point. I think Jason put it perfectly above—alignment is often about the median Board member, where expertise is about the best Board member in a given context. So you can have both.
I have also seen a lot of trustees learn about the mission of the charity as part of the recruitment process, and we shouldn’t assume the only aligned people are those who already identify as EAs.
The downsides of prioritising alignment almost to the exclusion of all else are pretty clear, I think, and harder to mitigate than the downsides of lacking technical expertise, which takes years to develop.
The nature of most EA funding also provides a check on misalignment: an EA organization that became significantly misaligned from its major funders would quickly find itself unfunded. This is unlike Wikimedia, which, as I understand it, had/has a different funding structure.
TL;DR: You’re incorrectly assuming I’m into Nick mainly because of value alignment, and while that’s a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.
I might write a longer response. For now, I’ll say the following:
I think a lot of EA work is pretty high-context, and most people don’t understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled professionals consistently failed to identify many important considerations for deciding on a grant. But, after engaging with EA content at an unusual level of depth for 1-2 years, they can improve a lot (i.e., there were some examples of people improving their grantmaking skills a lot). Most such people never end up attaining this level of engagement, so they never reach the level of competence I think would be required.
I agree with you that too much of a focus on high status core EAs seems problematic.
I think value-alignment in a broader sense (not tracking status, but actual altruistic commitment) matters a great deal. E.g., given the choice between personal prestige and impact, would the person reliably choose the latter? I think some high-status core EAs who were on EA boards were not value-aligned in this sense, and this seems bad.
EDIT: Relevant quote—I think this is where Nick shines as a board member:
For example, if a nonprofit’s mission is “Help animals everywhere,” does this mean “Help as many animals as possible” (which might indicate a move toward focusing on farm animals) or “Help animals in the same way the nonprofit traditionally has” or something else? How does it imply the nonprofit should make tradeoffs between helping e.g. dogs, cats, elephants, chickens, fish or even insects? How a board member answers questions like this seems central to how their presence on the board is going to affect the nonprofit.
@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. To govern effectively, you need competence and no collective blind spots, not just value alignment.
I have a fair amount of accounting/legal/governance knowledge and, from my board commitments, I think it’s a lot less relevant than deeply understanding the mission and strategy of the relevant organization (along with other, more relevant generalist skills like management, HR, etc.). Edit: Though I do think that if you’re tied up in the decade’s biggest bankruptcy, legal knowledge is actually really useful; but this seems more like a one-off weird situation.
It seems intuitive that your chances of ending up in a one off weird situation are reduced if you have people who understand the risks properly in advance. I think a lot of what people with technical expertise do on Boards is reduce blind spots.
I think that’s false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.
I think Jack’s point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically.
If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or a similar role—and then listening to that person. Of course, that costs money.
It’s clear to me that the pre-FTX-collapse EVF board, at least, needed more lawyer/accountant/governance expertise. If someone had been there to insist on good governance norms, I don’t believe the statutory inquiry would have been opened, or at a minimum it would have been narrower. Given the very low base rate of statutory inquiries, I conclude that the external evidence suggests the EVF UK board was very weak in legal/accounting/governance capabilities.
Thanks for making the case. I’m not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I’m highly confident is false.
Namely—it isn’t hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I’ve recruited ~20 very good Board members in my career and have never run an open process that didn’t find at least some qualified, diligent people, who did a good job.
EA makes it hard because it’s weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising ‘alignment’ (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements than the Boards of other well-run charities. Ironically, this typically produces very low quality EA Boards, mainly made up of inexperienced people without relevant professional skills, but who are thought of as ‘smart’ and ‘aligned’.
Of course, it will be hard to find new Board members right now, because CEA’s reputation is in tatters and few people will want to join an organisation that is under serious legal threat. But it seems at best a toss up whether it’s worth keeping tainted Board member(s) because they might be tricky to replace, especially when they have recused themselves from literally the single biggest issue facing the charity.
And even if one really values “alignment,” I suspect that a board’s alignment is mostly that of its median member. That may have been less true at EVF where there were no CEOs, but boards are supposed to exercise their power collectively.
On the other hand, a board’s level of legal, accounting, etc. knowledge is not based on the mean or median; it is mainly a function of the most knowledgeable one or two members.
So if one really values alignment on say a 9-member board, select six members with an alignment emphasis and three with a business skills emphasis. (The +1 over a bare majority is to keep an alignment majority if someone has to leave.)
You seem to imply that it’s fine if some board members are not value-aligned as long as the median board member is. I strongly disagree: This seems a brittle setup because the median board member could easily become non-value-aligned if some of the more aligned board members become busy and step down, or have to recuse due to a COI (which happens frequently), or similar.
I’m very surprised that you think a 3 person Board is less brittle than a bigger Board with varying levels of value alignment. How do 3 person Boards deal with all the things you list that can affect Board make up? They can’t, because the Board becomes instantly non-quorate.
I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average.
(I avoid the term “value-alignment” because different people interpret it very differently.)
I don’t agree with that characterization.
On my 6⁄3 model, you’d need four recusals among the heavily aligned six and zero among the other three for the median member to be other; three for the median to be between heavily aligned and other. If you’re having four of six need to recuse on COI grounds, there are likely other problems with board composition at play.
Also, suggesting that alignment is not the “emphasis” for each and every board seat doesn’t mean that you should put misaligned or truly random people in any seat. One still should expect a degree of alignment, especially in seat seven of the nine-seat model. Just like one should expect a certain level of general board-member competence in the six seats with alignment emphasis.
I think 9-member boards are often a bad idea because they tend to have lots of people who are shallowly engaged, rather than a smaller number of people who are deeply engaged, tend to have more diffusion of responsibility, and tend to have much less productive meetings than smaller groups of people. While this can be mitigated somewhat with subcommittees and specialization, I think the optimal number of board members for most EA orgs is 3–6.
This is a really good comment!
Non-profit boards have 100% legal control of the organisation– they can do anything they want with it.
If you give people who aren’t very dedicated to EA values legal control over EA organisations, they won’t be EA organisations for very long.
There are under 5,000 EA community members in the world – most of them have no management experience.
Sure, you could give up 1⁄3 of the control to people outside of the community, but this doesn’t solve the problem (it only reduces the need for board members by 1⁄3).
The assumption that this 1⁄3 would come from outside the community seems to rely on an assumption that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1⁄3 would come from outside what Jack called “high status core EAs.”
Sorry that’s what I meant. I was saying there are 5,000 community members. If you want the board to be controlled by people who are actually into EA, then you need 2⁄3 to come from something like that pool. Another 1⁄3 could come from outside (though not without risk). I wasn’t talking about what fraction of the board should have specific expertise.
Another clarification, what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing. I was using the number of community members as a rough indication for how many people exist who actually apply the principles – I don’t mind if they actively participate in the community or not, and think it’s good if some people don’t.
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not by fully-aligned talent individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give having followed 80k’s advice 10 years ago
Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who have: 1) enough overlap with EA thinking (because EA isn’t 100% original after all) to have a reasonable starting point along with 2) more relevant leadership experience and demonstrably good judgement, and linked to the two previous 3) mature enough in their opinions and / or achievements to be less susceptible to herding.
If you think that EA orgs won’t remain EA orgs if you don’t appoint “value aligned” people, it implies out arguments aren’t strong enough for people who we think should be convinced by them. If that’s the case, it’s a real good indicator your argument might not be that good and to reconsider.
To be concrete, I expect a board of 50% card-carrying EAs and 50% experienced high achievement non-EAs with good understanding of similar topics (e.g. x-risk, evidence based interventions) to appraise arguments of what high-/lower-risk options to fund much better than a board of 100% EAs with the same epistemic and discourse background and limited prior career / board experience.
Edit- clarity and typos
I agree that there’s a broader circle of people who get the ideas but aren’t “card carrying” community members, and having some of those on the board is good. A board definitely doesn’t need to be 100% self-identified EAs.
Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they’re part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing.
This said, I think there are surprisingly few people out there like that. And due to the huge scope differences in the impact of different actions, there can be huge differences between what someone who is e.g. 60% into applying EA principles would do compared to someone who is 90% into it (using a made up scale).
I think a thing that wouldn’t make sense if for, say, Extinction Rebellion, to appoint people to their board who “aren’t so sold on climate change being the world’s biggest problem”. Due to the point above, you can end up in something that feels like this more quickly than it first seems or is intuitive.
Isn’t the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our “belief system” is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also I think a lot of the time when people say “value alignment”, they are in fact looking for signals like self-identification as EAs, or who they’re friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there’s a lot more to unpack with “value alignment” and what it means in reality vs. what we say it ostensibly means.
Also to tackle your response, and maybe I’m reading between the lines too hard here / being too harsh on you here, but I feel there’s goalpost shifting in your original post about EA value alignment and you now stating that people who understand broader principles are also “value aligned”.
Another reflection: the more we speak about “value alignment” being important, the more it incentivises people to signal “value alignment” even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.
Yes—but the issue plays itself out one level up.
For instance, most people aren’t very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.
I think scope sensitivity is a key part of effective altruism, so appointing people who are less scope sensitive to boards of EA orgs is similar to XR appointing people who are less concerned about climate change.
I agree and think this is bad. Another common problem is interpreting agreement on what causes & interventions to prioritise as ‘value alignment’, whereas what actually matters are the underlying principles.
It’s tricky because I think these things do at least correlate with with the real thing. I don’t feel like I know what to do about it. Besides trying to encourage people to think more deeply, perhaps trying one or two steps harder to work with people one or two layers out from the current community is a good way to correct for this bias.
That’s not my intention. I think a strong degree of wanting to act on the values is important for the majority of the board. That’s not the same as self-identifying as an EA, but merely understanding the broad principles is also not sufficient.
(Though I’m happy if a minority of the board are less dedicated to acting on the values.)
(Another clarification from earlier is that it also depends on the org. If you’re doing an evidence-based global health charity, then it’s fine to fill your board with people who are really into global health. I also think it’s good to have advisors from clearly outside of the community – they just don’t have to be board members.)
I agree and this is unfortunate.
To be clear I think we should try to value other perspectives about the question of how to do the most good, and we should aim to cooperate with those who have different values to our own. We should also try much harder to draw on operational skills from outside the community. But the question of board choice is firstly a question of who should be given legal control of EA organisations.
Now having read your reply, I think we’re likely closer together than apart on views. But...
I don’t think this is how I see the question of board choice in practice. In theory, yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions—which may trade off against legal control being put in the ‘safest pair of hands’.
That said, I feel back-and-forth responses on the EA Forum may be exhausting their value here. I’d have more to say in a brainstorm about potential trade-offs between legal control and the ability to check and challenge, and I’m open to discussing further if it would be helpful on some concrete issue at hand :)
Two quick points:
Yes, legal control is the first consideration, but governance requires skill not just value-alignment
I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique
Alignment is super-important for EA organisations; I would put it as priority number one. If you’re aligned with EA values, then you’re at least trying to do the most good for the world, whereas if you’re not, you may not even be trying to do that.
For an example of a not-for-profit non-EA organisation that has suffered from a lack of alignment in recent times, I would point to the Wikimedia Foundation, which has regranted excess funds to extremely dubious organisations: https://twitter.com/echetus/status/1579776106034757633 (see also: https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2022-10-31/News_and_notes ). This is quite apart from the encyclopedia project itself arguably deviating from its stated goals of maintaining a neutral point of view, which is a whole other level of misalignment, but I won’t get into that here.
Hi Robin—thanks for this, and I see your point. I think Jason put it perfectly above: alignment is often about the median Board member, whereas expertise is about the best Board member in a given context. So you can have both.
I have also seen a lot of trustees learn about the mission of the charity as part of the recruitment process and we shouldn’t assume the only aligned people are people who already identify as EAs.
The downsides of prioritising alignment almost to the exclusion of all else are pretty clear, I think, and harder to mitigate than the downsides of lacking technical expertise, which takes years to develop.
The nature of most EA funding also provides a check on misalignment: an EA organisation that became significantly misaligned from its major funders would quickly find itself unfunded. Wikimedia, by contrast, had/has a different funding structure as I understand it.
TL;DR: You’re incorrectly assuming I’m into Nick mainly because of value alignment, and while that’s a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.
I might write a longer response. For now, I’ll say the following:
I think a lot of EA work is pretty high-context, and most people don’t understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled professionals consistently failed to identify many important considerations for deciding on a grant. But, after engaging with EA content at an unusual level of depth for 1-2 years, they can improve a lot (i.e., there were some examples of people improving their grantmaking skills a lot). Most such people never end up attaining this level of engagement, so they never reach the level of competence I think would be required.
I agree with you that too much of a focus on high status core EAs seems problematic.
I think value-alignment in a broader sense (not tracking status, but actual altruistic commitment) matters a great deal. E.g., given the choice between personal prestige and impact, would the person reliably choose the latter? I think some high-status core EAs who were on EA boards were not value-aligned in this sense, and this seems bad.
EDIT: Relevant quote—I think this is where Nick shines as a board member:
@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals in the EA Good Governance Project’s trustee directory. To govern effectively you need competence and no collective blind spots, not just value alignment.
I’m definitely not saying value alignment is the only thing to consider.
I have a fair amount of accounting / legal / governance knowledge, and in my board commitments I find it a lot less relevant than deeply understanding the mission and strategy of the relevant organisation (along with other, more relevant generalist skills like management, HR, etc.). Edit: Though I do think if you’re tied up in the decade’s biggest bankruptcy, legal knowledge is actually really useful—but that seems more like a one-off weird situation.
It seems intuitive that your chances of ending up in a one-off weird situation are reduced if you have people who understand the risks properly in advance. A lot of what people with technical expertise do on Boards is reduce blind spots.
I think that’s false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.
I think Jack’s point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically.
If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or in a similar role—and then listening to that person. Of course, that costs money.
I read his comment differently, but I’ll stop engaging now as I don’t really have time for this many follow-ups, sorry!
It’s clear to me that the pre-FTX-collapse EVF board, at least, needed more lawyer/accountant/governance expertise. If someone had been there to insist on good governance norms, I don’t believe the statutory inquiry would likely have been opened—or at a minimum it would have been narrower. Given the very low base rate of statutory inquiries, I conclude that the external evidence suggests the EVF UK board was very weak in legal/accounting/governance capabilities.