A comment on your calculation: Is it
[WELLBY_gained × P(WELLBY_gained) − WELLBY_lost × P(WELLBY_lost)] / Cost,
where P() is the probability of the total (across all times) WELLBY gains and losses?
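As a plain sketch of that reading of the formula (all numbers below are hypothetical, chosen only for illustration):

```python
# Sketch of the proposed cost-effectiveness reading (illustrative only):
# [WELLBY_gained * P(gained) - WELLBY_lost * P(lost)] / Cost
def wellbys_per_dollar(wellby_gained, p_gain, wellby_lost, p_loss, cost):
    """Expected net WELLBYs per unit of cost."""
    return (wellby_gained * p_gain - wellby_lost * p_loss) / cost

# Hypothetical inputs: a 5% chance of gaining 10,000 WELLBYs, a 3% chance
# of losing 1,000 WELLBYs, at a cost of $100,000.
print(wellbys_per_dollar(10_000, 0.05, 1_000, 0.03, 100_000))  # 0.0047
```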
Is there a probability threshold value that can inform whether a strategy is recommended? For example, if there is a 0.1% chance of success (assuming no expected WELLBY loss), would you refrain from endorsing the strategy, regardless of the size of the WELLBY gain? Or, if there is a 3% chance of a significant WELLBY loss, even if that is outweighed by the magnitude of the expected WELLBY gain, would you suggest involvement in that strategy?
Which counterfactuals are you considering? The alternative use of resources compared to involvement in a strategy (or are you mapping all involvement combinations and simply selecting the best one?), and the WELLBYs lost and gained due to inaction or limited involvement in any strategy (this can be especially relevant to institutionalized dystopia, or partial dystopia for some groups)?
Are you converting all non-financial constraints into cost? For example, the cost of paying people to develop networks (say, by reducing their workload), the cost of developing convincing narratives in a low-risk way, the cost of developing solutions, and the amount needed to flexibly gain influence momentum in relevant circles as opportunities arise (if this is needed; perhaps it falls under network development, but as internal rather than external advocacy). What else is needed to influence decisions, in a way that generalizes across different political systems?
How are timeframes considered in your model? For example, if developing different networks takes various amounts of time (assuming equivalent cost and expected WELLBY gains and losses), which one do you choose?
To what extent do you aim for impartiality across individuals in the wellbeing achieved? How are you weighting (different amounts of) suffering relative to wellbeing?
Is continuous optimization assumed? For example, if the predicted WELLBY loss probability increases or decreases after some steps, are you re-running your calculations?
In addition to your calculation, I wanted to ask about the institutions that you would suggest EA prioritizes in its influence. I would suggest, for example: the UN, because the institution already seeks to benefit others and the risk of reputational loss from offering innovative solutions can be limited (these solutions may simply not be perpetuated to more influential ranks); here, the advocacy should be for the One Health approach (which includes non-human animals) and for developing preventive suffering frameworks, in addition to convening decisionmakers when issues escalate. Also, the governments of developing countries, because they can be highly under-resourced and could use skilled volunteer work for various tasks, while volunteers could prioritize EA-related tasks. Finally, the governments of major global economies (assuming that economic and military power is convertible) and large MNCs (because they influence large proportions of global production).
In summary, what specific strategies are you recommending?
Thanks for the detailed questions! I’ll do my best to answer them in turn:
The best way to understand the calculation is to look at the case study we constructed for the City of Zurich, which has the full Guesstimate model linked.
I don’t anticipate that we would refrain from endorsing a strategy with a low probability of success unless we thought there was a possibility of accidental harm (which the model should account for) or there were other strategies with a higher expected value that we would want to recommend instead. The question of how to weigh downside risks against expected gains is harder, and I think it would involve careful exploration with advisors and potential users of our recommendations (e.g., funders) about the appropriate level of risk tolerance for that specific situation.
Some would likely be converted to cost and others to the success metric of interest (e.g., WELLBYs) -- for example, the challenge of developing convincing narratives in a low-risk way might be better expressed as a component of the overall risk that the project won’t meet its objectives given a particular plan of action and level of resources committed to it. The framework in the article can certainly be broken down into more parts and components if helpful for generating the variables needed to make an estimate, and you can see some instances of that in the case study.
In the specific-strategy model, the user can choose the timeframe they think is most appropriate and add a discount rate if they like. If there is a delay in realizing the benefits of a successful strategy, the cost of that delay is then reflected through the discount rate. Again, see the Guesstimate model for an example.
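A minimal sketch of how a discount rate can encode the cost of delay (the 4% rate and the WELLBY stream below are assumptions for illustration, not values from the model):

```python
def present_value(annual_wellbys, delay_years, duration_years, rate=0.04):
    """Discounted sum of an annual WELLBY stream that starts after a delay."""
    return sum(annual_wellbys / (1 + rate) ** t
               for t in range(delay_years, delay_years + duration_years))

# The same benefits are worth less the longer the network takes to build:
fast = present_value(100, delay_years=1, duration_years=10)
slow = present_value(100, delay_years=5, duration_years=10)
print(fast > slow)  # True
```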
The way we (as in Effective Institutions Project) are currently using the model treats all changes in wellbeing as equivalent and doesn’t distinguish between populations. But another user could certainly add weights to prioritize certain kinds of changes more than others if they wished.
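A sketch of what such optional weights might look like (the population names and weight values here are hypothetical; the model as currently used applies none, i.e., all weights are 1.0):

```python
# Hypothetical weights; 1.0 everywhere reproduces the unweighted model.
WEIGHTS = {"general_population": 1.0, "worst_off_group": 1.5}

def weighted_change(wellby_changes, weights):
    """Sum WELLBY changes, optionally weighting some populations more."""
    return sum(weights.get(pop, 1.0) * delta
               for pop, delta in wellby_changes.items())

print(weighted_change({"general_population": 100, "worst_off_group": -20},
                      WEIGHTS))  # 100 - 30 = 70.0
```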
Great question! My current intuition is that we would only re-run the calculations if doing so would be decision-relevant—e.g., if we’re deciding whether to continue recommending the option or not. But there could be some other reason to do it that I haven’t considered; happy to hear other perspectives on this.
Yep, this is exactly where we’re headed next. We’ve spent the last several months conducting a landscape analysis of “opportunity spaces” associated with particular institutions around the world and assessing which ones are most promising to investigate further. We’re almost done with our preliminary analysis and should be ready to release it later this month.
Stay tuned! :)
Ok, thank you!
So, you are basically assuming that institutional change impact is the expected gain of a shift to more effective causes (measured by, e.g., economic or health improvements, simplified as GiveDirectly and AMF WELLBY impact estimates, or using other calculations or metrics, which can but do not have to be impartial), adjusted by a discount rate (selected by the user), minus risk (which can in some cases be weighted more heavily than gain, and which includes suboptimal or unintended investment outcomes), plus leverage and substitutability effects. I would also add complementarity (the impact of influencing other actors’ efficiency in pursuing their objectives), unless that is already covered by leverage.
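Under that reading, the decomposition could be sketched as follows (every parameter name and default here is my assumption, not part of the published model):

```python
def institutional_impact(expected_gain, expected_loss,
                         leverage=0.0, substitutability=0.0,
                         complementarity=0.0, risk_weight=1.0):
    """Discounted expected gain, minus (possibly up-weighted) expected
    loss, plus leverage, substitutability, and complementarity effects."""
    return (expected_gain - risk_weight * expected_loss
            + leverage + substitutability + complementarity)

# Hypothetical numbers, with losses weighted twice as heavily as gains:
print(institutional_impact(500, 30, leverage=50, risk_weight=2.0))  # 490.0
```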
You would re-run your calculations if it is decision-relevant. I think that re-running the calculation is relevant whenever the people involved in the strategy, or external informants, identify the possibility of a substantially changed variable (benefit, risk, leverage, or substitutability), for example due to a substantial change in government, a relevant press release, a shift in general discourse, a scientific finding, or changes in partnerships.
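One way such a trigger could be operationalized as a simple check (the 20% relative-change threshold is an arbitrary assumption for illustration):

```python
def needs_rerun(old_inputs, new_inputs, threshold=0.2):
    """Flag a recalculation when any model input shifts by more than
    `threshold` in relative terms (zero-valued inputs are skipped)."""
    return any(abs(new_inputs[k] - v) / abs(v) > threshold
               for k, v in old_inputs.items() if v != 0)

old = {"benefit": 10_000, "risk": 0.03, "leverage": 2.0}
print(needs_rerun(old, {**old, "risk": 0.06}))   # True: risk doubled
print(needs_rerun(old, {**old, "risk": 0.031}))  # False: only a ~3% shift
```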
Yaaay opportunity spaces.