Strategic Risks and Unlikely Benefits

[[epistemic status: I am confident in my assessment of the distinctions I present, while the relative value of each class of activity remains a huge open question.]]

TL;DR—I’m not trying to pull anyone from their fervor. I present a perspective I’ve nurtured for a while, one which has informed my own efforts and explains the underlying reasoning for the topics I present here on the EA Forum. Briefly: while EA focuses on existential risks of low probability, I focus on strategic risks of high probability; and though EA focuses on benefits of great certainty, I focus on neglected and unlikely benefits. Finally, moving “from impossible to possible” is critical; yet strategy reveals the importance of moving “from possible to VIABLE.” I explain why below.

Paths or Photographs?

We can imagine the world after a catastrophe, using that ‘photograph’ of the future to assess the expected cost of that outcome, multiply by its likelihood, and—BOOM! Value, measured. Yet there is a different framing of the decision-problem: “If we take path A or B, then path C or D, then...” That formulation is sensitive to strategic considerations along the path from here to there. And it can identify “Zugzwang” positions in our future. (In chess, a Zugzwang is when it’s your turn to move, and all your moves are bad. The same bind arises when you are ‘snookered’ playing Snooker!)

What would that look like, at large? Suppose we developed such a partisan and faithless government that we became incapable of responding to those Existential Risks folks care about. Then, we’d be snookered.

The inversion of a Zugzwang is equally important: if we look at a path where we direct all our energy toward solving an Existential Risk using current practice only, we might fail because no better option was available. (Zugzwang!) Yet, given enough time before that risk occurs, it is strategically superior to focus first on increasing our potency and options. Then, we are more likely to be capable of overcoming the problem.

This motivates exploration of design and policy spaces, for the sake of discovering improvements. These are distinct from ‘causes’: I’m not expecting any money to change hands philanthropically. Like Buckminster Fuller, I look at the historical shifts—things change of themselves once an improvement brings something above the threshold of VIABILITY. Targeting those problem-sets which yield strategic advantage can put us ahead of a wave of problems, instead of treading water on each issue simultaneously.

Allocating funds to basic research into plastics recycling, as opposed to constructing thousands of ‘recycling’ facilities which incinerate most of their material, is a prime example of this kind of strategic focus. The same process occurred a generation ago: poor nations were swamped with rusted metal from industrialized countries, piled high. Then a cheaper, smaller furnace allowed locals to melt that metal back into the good stuff. They became the metal-recycling companies still there today, employing thousands and developing their skills. Thanks to recent research improving plastics recycling, we can expect poor people in Vietnam to jump into the river to GRAB plastic, because that’s MONEY floating away.

[[Note: I am NOT an arch-capitalist who thinks only profits matter; I will be posting work on methods for internalizing externalities, next! If externalities are priced-in with reasonable accuracy, I see no issue with saying “focus on making solutions VIABLE for the public.”]]

The Strategic Value of Viable

I do not claim that this next critique applies to EA—I only mention it as a point of contrast, to illuminate my view. Often, when we move from impossible to possible, the True Fans of that goal say “ah, now that it is POSSIBLE, we should focus on FORCING everyone to do this.” Decades of protests, litigation, legislation, backlash, and subsidy result. It is a disservice to the goal to be unwilling to make it work for others. If we act that way, we are commanding their sacrifice.

Instead, those efforts would be better spent moving possible to viable. If protesters tallied their costs and time, and earned instead to fund targeted research directly, those researchers wouldn’t be bottlenecked or hamstrung. Targeting funding to the research necessary for reaching viability makes sense to me. Once you reach viability, it pulls latent resources toward it which would never have been granted philanthropically, because the people themselves see direct value. They switch, without being convinced or commanded. They pay for that switch themselves, because they earn it back and more. Not a charity or subsidy.

The Long Tail of Improvements

I’ve already noticed a discussion of how “finding a better way to help has a long tail”—that MOST ‘solutions’ do little to no actual good, and the distribution tapers far out, with a sprinkling of solutions both mediocre and monumental. This thinking was used on the EA Forum for a rough assessment of “how long should you spend researching ways to help?” Yet an important consideration was missed:

While charitable causes have their own distribution, tapering slowly, the set of all improvements tapers SLOWER. That is, if you looked at thousands of charities, the portion generating a decent return would be lower than the corresponding portion among the same number of patents, discoveries, proofs, and innovations. A charity, by its nature, can only achieve to the extent that it is funded, which limits its maximum. Meanwhile, when an innovation makes something viable, the upper bound is the size of the whole market, at little added cost. That means there are more “ultra-high-value” targets among innovations than among donated dollars. (For example, the low cost of supporting Paul Erdős, bumming around at his friends’ houses, was enough to generate the graph theory which supports electrical grids, shipping, airlines, the internet and mobile telecommunications, recommendation algorithms, ad revenue,… generating trillions of dollars annually, as well as spreading the technologies which made so many other works possible.) Strategically, if we have time to solve a problem, we are likely to have greater capacity to do so if we first look to non-Existential problems which have strategic value.

[[Additionally, we are most likely to be able to make strategic innovations real, as well as tilt the outcome of their roll-out, when we focus on the newer options. Things which have just become possible have many paths before them; we can bend that path best when the idea has just arrived.]]

I don’t expect anyone to follow me down this road. You are assessing the best option from your side, and I assume the real answer is somewhere between us. I only hope this post explains why I’ll keep talking about non-Existential risks and neglected innovations, instead of trying to pick an existing charity.
