I think this is a really important area, and it is great that someone is thinking about whether EAIF could expand further into it.
To offer some thoughts of my own, based on exploring work with a few EA entities through our consultancy (https://www.daymark-di.com/), which helps organisations improve their decision-making processes, and on discussions I’ve had with others pursuing similar endeavours:
Funding is a key bottleneck, which isn’t surprising. There is a natural aversion to consultancy-type support in EA organisations, driven mostly by a lack of funds to pay for it, and partly, I think, by concern about how spending money on consultants will look in progress reports.
EAIF funding could make this easier, as it would remove all, or a large part, of that cost.
There appears to be a fairly common assumption that EA organisations suffer less from poor epistemics and decision-making practices; my experience suggests this is somewhat true, but unfortunately not entirely. To repeat what Jona from cFactual commented below: EA organisations take many very positive actions, such as producing BOTECs, identifying likely failure modes, and using decision matrices. This deserves praise, since many organisations don’t even do these. However, the mere existence of these practices is too often assumed to mean that good analysis and judgment will naturally follow, while the systems and processes needed to make them useful are often lacking. To be concrete: few BOTECs incorporate second-order probability/confidence correctly (or it is conflated with first-order probability), so they fail to properly account for the uncertainty of the calculation, which limits the accuracy of the comparisons that can be made between options.
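To illustrate the BOTEC point, here is a minimal Monte Carlo sketch (all input numbers hypothetical): instead of comparing point estimates, each input is treated as a distribution, and the question becomes how often one option actually beats the other once that uncertainty is propagated.

```python
import math
import random

random.seed(42)
N = 100_000

def sample_option_a():
    # Hypothetical inputs: people reached and effect per person,
    # each modelled as a lognormal distribution rather than a point estimate.
    reach = random.lognormvariate(math.log(10_000), 0.5)
    effect = random.lognormvariate(math.log(0.02), 0.8)   # wide uncertainty
    return reach * effect

def sample_option_b():
    reach = random.lognormvariate(math.log(30_000), 0.3)
    effect = random.lognormvariate(math.log(0.008), 0.4)  # narrower uncertainty
    return reach * effect

a = [sample_option_a() for _ in range(N)]
b = [sample_option_b() for _ in range(N)]

# Point estimates (the medians) favour B: 10,000 * 0.02 = 200 vs
# 30,000 * 0.008 = 240. But once input uncertainty is propagated,
# A still wins a large minority of the time, so the comparison is
# much less settled than the point estimates suggest.
p_a_better = sum(x > y for x, y in zip(a, b)) / N
print(f"P(option A beats option B) = {p_a_better:.2f}")
```

The point is not the specific numbers but the shape of the analysis: reporting P(A beats B) alongside the central estimates is exactly the kind of second-order information that most BOTECs drop.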
It has been surprising to observe the difference between some EA organisations and non-EA institutions in their interest in improving their epistemics and decision making: large institutions (including governmental ones) have been more receptive and proactive in trying to improve, and are often constrained more by their slow procurement processes than by appetite.
When it comes to future projects, those I would recommend as having the highest value-add are:
Projects that help organisations improve their accountability, incentive, and prioritisation mechanisms. In particular, helping to identify and implement internal processes that properly link workstreams, actions, decisions, and judgments to the organisation’s goals and to each role’s objectives. This is most useful to larger organisations (especially those that make funding/granting decisions or recommendations), but smaller organisations would also benefit.
Projects that help organisations reason under uncertainty more accurately and efficiently: assisting them with defining the prediction or diagnosis they are trying to make a judgment on, identifying the causal chain and the predictors (and their relative importance) that underpin that judgment, and providing a framework for communicating their uncertainty and reasoning transparently and consistently.
Projects that provide external scrutiny and advanced decision-modelling capabilities. At a basic level, there are some relatively easy wins to be gained from an entity/service that red-teams and provides external assessment of the big decisions organisations are considering (e.g. requests for new funding). At a more advanced level, there should be more sophisticated modelling (including tools such as Bayesian models) that can provide transparent, updatable analysis and expose conditional relationships we can’t properly account for in our heads.
I think funding entities like EA Funds could utilise such an entity or entities to inform their grant-making decisions (e.g. if such analysis was instrumental in the decision EA Funds made on a grant request, they would pay half the cost of the analysis).
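As a small sketch of the kind of conditional relationship such Bayesian modelling can expose, here is a toy discrete model by exhaustive enumeration (variable names and all probabilities are hypothetical): a grant’s success depends on both the team’s strength and the problem’s tractability, and conditioning on extra evidence changes the team posterior in a direction that is easy to miss when reasoning informally.

```python
from itertools import product

# Hypothetical priors and conditional probability table.
# "team" = the team is strong; "tract" = the problem is tractable;
# "success" depends on both.
P_TEAM = 0.5
P_TRACT = 0.5
P_SUCCESS = {(True, True): 0.8, (True, False): 0.4,
             (False, True): 0.5, (False, False): 0.1}

def posterior_team(**evidence):
    """P(team strong | evidence), by enumerating every joint assignment."""
    num = den = 0.0
    for team, tract, success in product([True, False], repeat=3):
        world = {"team": team, "tract": tract, "success": success}
        if any(world[k] != v for k, v in evidence.items()):
            continue  # inconsistent with the observed evidence
        p = (P_TEAM if team else 1 - P_TEAM)
        p *= (P_TRACT if tract else 1 - P_TRACT)
        p *= P_SUCCESS[(team, tract)] if success else 1 - P_SUCCESS[(team, tract)]
        den += p
        if team:
            num += p
    return num / den

# Observing success raises our belief in the team; additionally learning the
# problem was tractable then *lowers* it again ("explaining away") -- the sort
# of conditional relationship an explicit model makes visible and updatable.
print(posterior_team(success=True))              # 2/3
print(posterior_team(success=True, tract=True))  # ~0.615
```

With realistic numbers of variables, a proper Bayesian network library would replace the brute-force enumeration, but the transparency benefit is the same: every assumption sits in an inspectable table rather than in someone’s head.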