Existential risk reduction and handling the technological transformation are therefore not just questions of the ‘far future’ or the ‘long term’; they are also ‘near-term’ concerns.
Some risks are quite low for the next two generations, so they are neglected in favor of more immediate concerns (welfare, inequality, financial risks, etc.).
I was wondering: perhaps one could model human cooperation across time as analogous to pension schemes (or Ponzi schemes, if they fail) – as a network of successive games of partial conflict, so that foreseeing the end of the chain of cooperation would, by backward induction, entail its collapse. So if we predict that an x-risk will increase, we can anticipate that the probability of future generations refusing to cooperate with previous ones will increase, too: my great-grandchildren won’t take care of my grandchildren (or of whatever they value), who will then see no point in cooperating with my children, who in turn won’t cooperate with me. Does that make sense?
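To make the backward-induction step concrete, here is a minimal sketch of the unraveling argument. Everything in it (the function name, the decision rule, the parameters) is my own illustrative assumption, not an established model: each generation cooperates with its predecessor only if it expects its own successor to reciprocate, so a foreseen end of the chain propagates defection all the way back to the present, while an open-ended horizon sustains cooperation.

```python
def cooperation_chain(n_generations, foreseen_end=None):
    """Return each generation's choice (True = cooperate).

    `foreseen_end` is the generation at which the chain is expected to
    stop (e.g. because an existential catastrophe is anticipated);
    None means no foreseen end (an open-ended horizon).
    """
    choices = [False] * n_generations
    # The last generation's outlook: with an open-ended horizon it can
    # expect a cooperating successor; with a foreseen end it cannot.
    successor_cooperates = foreseen_end is None
    # Backward induction: reason from the last generation to the first.
    for g in reversed(range(n_generations)):
        if foreseen_end is not None and g >= foreseen_end:
            # Generations at or beyond the foreseen end never cooperate.
            choices[g] = False
        else:
            # Cooperate only if the successor is expected to reciprocate.
            choices[g] = successor_cooperates
        successor_cooperates = choices[g]
    return choices

# Open-ended horizon: cooperation is sustained in every generation.
print(cooperation_chain(8))                 # [True, True, ..., True]
# A foreseen end at generation 5 unravels cooperation back to generation 0.
print(cooperation_chain(8, foreseen_end=5)) # [False, False, ..., False]
```

The toy model mirrors the finitely-repeated-game result: once any generation foresees a defecting successor, its own best response is to defect, and that expectation cascades backward through the whole chain.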