Nice. I’ve already written a sequence on it (first post here) – curious to hear your thoughts on it!
Also, I think Richard Ngo’s working on a piece on the topic, building off my sequence & the academic work that Hilary Greaves has done.
I wrote some comments on your sequence:
Most near-term interventions likely won’t be pivotal for the far future, so we can ignore their long-term effects in order to cooperate with near-term-focused value systems.
Fight ambiguity aversion.
Fight status quo bias.
Balance steering capacity with object-level action.
Unexpected outcomes will largely fall into two categories: those we think we should have anticipated, and those we don’t think we reasonably could have anticipated. For the first category, I think we could do better at brainstorming unusual reasons why our plans might fail. I have a draft post on how to do this. For the second category, I don’t think there is much to do. Perhaps a midsummer blizzard will hit all of California this year; if it does, I will hold Californian authorities blameless for failing to prepare for it.
I stumbled across this today; I haven’t had a chance to read it yet, but it looks relevant.