I think this is an important distinction.
People can inadvertently do bad things with very good intentions due to poor judgement; there’s even the proverb ‘the road to hell is paved with good intentions’.
EA emphasises doing good with evidence, and reasoning transparency is considered highly important. People are fallible, and in the case of EA often young and of similar backgrounds. Particularly given the potential consequences of working on the world’s biggest issues (including x-risks), big decisions should be open to scrutiny. I also think it’s a good idea to look at what other organisations are doing and take the best bits from the expertise of others.
For example, the Wytham Abbey purchase may make sense from a cost-effectiveness perspective (I haven’t seen any numbers myself), but it really should have been expected that people would ask questions, given how grand the venue seems. I think the communication (and at least a basic public cost-effectiveness analysis) should have been handled more proactively.
I agree. To take the distinction around trust one step further: there’s a difference between trust in the intentions and judgement of people, and trust in the systems they operate in.
Like, I think you could trust the intentions and judgement of EA leadership, but still recognise that people are human, humans make mistakes, and transparency and more open governance lead to more voices being heard in decision-making, which leads to better decisions. It’s the ‘Wisdom of Crowds’ kind of argument.
Perhaps I’m just a die-hard technocrat, but I’m very unconvinced that this is actually true. Do we have any good examples either way?
Agreed, particularly as bad bureaucracy could have bad results even if everyone has good intentions and good judgement. For example, someone might make the best decision possible given the information available to them, yet it has unintended negative consequences because the way the organisation/system was set up meant they were missing key information that would have led to a different conclusion.
I think this is a key point of disagreement. Most of the proposed governance changes seem to me like they would have some protective effect against bad actors, but very little effect on promoting good decision-making. I’d be much more on board if people had proposals that I actually thought would help leadership make better decisions, but I don’t think most of the transparency-oriented proposals would actually do that.
(I think actually justifying this statement would be a whole other post so I’ll just state it as an opinion.)
The way I think EA orgs can improve decision-making is by introducing some kind of meaningful competition. At the moment, the umbrella structure, multi-project orgs, and lack of transparency make that all but impossible.
Split that into a cluster of orgs, including one that, say, just runs events and isn’t officially supported by EVF, and you have a level playing field and a useful comparison with another org that also runs events, while keeping a big enough space of event types that both could semi-cooperate.
If both orgs are also transparent enough that substantial discrepancies between how well they operate are visible, then you have a real possibility of funders reacting to such discrepancies in a way that incentivises the orgs and their staff to perform well. At the moment I just don’t feel like these incentives exist.