I’d love to collaborate with folks on the cluelessness aspect of this.
I believe the Global Priorities Institute (GPI) is doing work on further specifying what we mean by cluelessness & developing a taxonomy of it.
I’m personally interested in better understanding on-the-ground implications of cluelessness, e.g. what does it imply about which areas to focus on presently? Some preliminary work in that direction here.
I’ve thought a lot about cluelessness, and I could give you feedback on something you’re thinking of writing.
Nice. I’ve already written a sequence on it (first post here); curious to hear your thoughts!
Also, I think Richard Ngo’s working on a piece on the topic, building on my sequence & the academic work that Hilary Greaves has done.
I wrote some comments on your sequence:
Most near-term interventions likely won’t be pivotal for the far future, so we can set aside their long-term effects in order to cooperate with near-term-focused value systems.
Fight ambiguity aversion (see the sketch below).
Fight status quo bias.
Balance steering capacity with object-level action.
Unexpected outcomes will largely fall into two categories: those we think we should have anticipated, and those we don’t think we reasonably could have anticipated. For the first category, I think we could do better at brainstorming unusual reasons why our plans might fail; I have a draft post on how to do this. For the second category, I don’t think there is much to do. Maybe there will be a blizzard across California in midsummer this year, and I will hold Californian authorities blameless for failing to prepare for it.
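To make the ambiguity-aversion point concrete, here is a minimal Ellsberg-style sketch, assuming a uniform prior over the ambiguous urn’s composition (the urns, payout, and prior are illustrative assumptions, not anything from the sequence itself): under that prior, a bet on a known 50/50 urn and a bet on an urn with an unknown split have identical expected value, so an agent who reflexively avoids ambiguity gives something up whenever the ambiguous option happens to be the better one.

```python
# Ellsberg-style toy model: a "risky" urn with a known 50/50 red/black
# split vs. an "ambiguous" urn whose red/black split is unknown.
# Assumption: a uniform prior over the ambiguous urn's possible
# compositions. You win PAYOUT if you draw red.

PAYOUT = 100

def ev_risky() -> float:
    """Expected value of betting on the known 50/50 urn."""
    return 0.5 * PAYOUT

def ev_ambiguous(n_balls: int = 100) -> float:
    """Expected value of betting on the unknown urn, averaging over
    all compositions 0..n_balls red under a uniform prior."""
    n_compositions = n_balls + 1
    p_red = sum(k / n_balls for k in range(n_compositions)) / n_compositions
    return p_red * PAYOUT

if __name__ == "__main__":
    print(ev_risky())      # 50.0
    print(ev_ambiguous())  # 50.0 -- identical expected value, so pure
                           # ambiguity aversion forgoes nothing here,
                           # yet it would reject the ambiguous bet.
```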
I stumbled across this today; haven’t had a chance to read it but it looks relevant.

+1, thank you for highlighting this.