As someone who’s spent a significant amount of time thinking about possible rearrangements of civilization, I found reading On Saving The World both tantalizing and frustrating (it also cemented your position as one of the most impressive people I am aware of). I understand that building up from the ground, covering all the prerequisites and inferential distance, would be a huge effort and currently not worth your time. But even a terse summary of the suggestions all those years of thought have produced, without any detailed justifications, would be highly interesting, and a pointer toward areas worth exploring.
Would you be willing to at least summarize some of your high-level conclusions, with the understanding that you’re not going to attempt to defend, justify, or develop them in any depth since you have higher priorities?
Or, failing that, could you lay out the inferential steps you see as most lacking in the EA groups you meet? Or among LessWrongers?