I appreciate your exploration of the strategic complexity inherent in prioritizing effectiveness. A crucial aspect involves recognizing that impact often occurs in significant "chunks." Identifying key thresholds and accurately assessing their likelihood of being pivotal is essential for effective resource allocation. For instance, in farmed animal advocacy, securing cage-free commitments from major corporations can lead to disproportionate industry-wide improvements, making precise strategic targeting crucial. In these contexts, there may appear to be little impact until the critical moment. However, openly communicating these threshold calculations might inadvertently strengthen adversaries' resistance. Drawing from game theory's "madman" approach, an actor sometimes gains strategic advantage if adversaries believe it may irrationally commit excessive resources or accept high risks to achieve its goals, thus deterring aggressive opposition.
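To make the "chunks" point concrete, here is a toy expected-value sketch. All numbers are made up for illustration (the 5% pivotality figure and the chunk size are assumptions, not estimates from the post), but they show how a threshold bet can look like zero impact right up until the pivotal moment and still beat a smooth intervention in expectation:

```python
# Toy sketch with made-up numbers: a "chunky" threshold intervention can look
# like zero impact until the pivotal moment, yet win in expectation.

def expected_impact_threshold(p_pivotal: float, chunk_size: float) -> float:
    """All impact arrives in one chunk, realized only if our push is pivotal."""
    return p_pivotal * chunk_size

def expected_impact_linear(impact_per_dollar: float, budget: float) -> float:
    """Impact accrues smoothly with spending."""
    return impact_per_dollar * budget

budget = 1_000_000  # hypothetical campaign budget in dollars
chunky = expected_impact_threshold(p_pivotal=0.05, chunk_size=50_000_000)
smooth = expected_impact_linear(impact_per_dollar=2.0, budget=budget)
print(f"threshold bet: {chunky:,.0f}  linear bet: {smooth:,.0f}")
# threshold bet: 2,500,000  linear bet: 2,000,000
```

Of course, the hard part in practice is estimating p_pivotal, which is exactly the kind of calculation one might not want to publish for the reasons above.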
On a related semantic note, describing strategic resilience or the integration of adversarial responses as "less effective" could oversimplify this nuanced issue. I would think that when people say "effective" they mean what best achieves one's goals, and integrating adversarial responses would help in doing exactly that.
Interesting ideas! A hypothesis I found relevant to this phenomenon, similar to yours:
The problem "maximize impact per resources spent" is not well-defined a priori.
For instance, it depends on the time frame and scale: there could be very cost-effective smallish interventions that can't scale that much, versus very large-scale interventions that require massive coordination, investment, "stubbornness", etc.
[Of course, you should try to see if such things actually exist in the real world; FWIW, I suspect they do]
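A minimal sketch of how the ranking can flip with scale, assuming made-up functional forms (small_intervention with logarithmic diminishing returns and large_intervention with a hypothetical fixed coordination cost are illustrative, not modeled on any real program):

```python
# Toy sketch (hypothetical numbers): "impact per dollar" is not a single
# number; the ranking of interventions can flip with the budget considered.
import math

def small_intervention(budget: float) -> float:
    """Very cost-effective at first, but diminishing returns kick in quickly."""
    return 100.0 * math.log1p(budget / 10_000)  # saturates past ~$10k scale

def large_intervention(budget: float) -> float:
    """Needs a big fixed investment (coordination, infrastructure) before any payoff."""
    fixed_cost = 1_000_000
    return 0.0 if budget < fixed_cost else 3.0 * (budget - fixed_cost)

for budget in (10_000, 100_000, 10_000_000):
    s, l = small_intervention(budget), large_intervention(budget)
    print(f"${budget:>10,}: small={s:10.0f}  large={l:14.0f}")
# At small budgets the small intervention dominates; at $10M the ranking flips.
```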
It also depends on the entity you consider: is it you as an individual? The small group of people who are willing to listen and do a project with you? The whole EA community? Humanity? You might be able to build a coherent system that takes into account these various levels, though.
Another remark, that has more to do with execution than general principles, which you also touch upon: sharing all the information you have is not always a good idea. Unfortunately, the possible fixes (restricting information access to trusted people/groups) seem to go against the [EA/rationalist/...] culture of truth-seeking, open communication, etc.