On Dichotomy:
Because you’ve picked a particularly strong form of Maxipok to argue against, you’re pushed into choosing a particularly strong form of Dichotomy that would be necessary to support it
But I think that this strong form of Dichotomy is relatively implausible to start with
And I would guess that Bostrom at the time of writing the article would not have supported it; certainly the passages you quote feel to me like they’re supporting something weaker
Here’s a weaker form of Dichotomy that I feel much more intuitive sympathy for:
Most things that could be “locked in” such that they have predictable long-term effects on the total value of our future civilization, and move us away from the best outcomes, actually constrain us to worlds which are <10% as good as the worlds without any such lock-in (and would therefore count as existential catastrophes in their own right)
The word “most” is doing work there, and I definitely don’t think it’s absolute (e.g. there are counterexamples like the one you point out, of dividing the universe up 50⁄50 between a civilization that will do good things with it and one that won’t); but it could plausibly still be enough to guide a lot of our actions
Yep, I guess I’m into people trying to figure out what they think and which arguments seem convincing, and I think it’s good to highlight sources of perspectives that people might find helpful-according-to-their-own-judgement for that. I do think I have found Drexler’s writing on AI singularly helpful for my inside-view judgements.
That said: it absolutely seems good for you to offer counterarguments! I’m not trying to dismiss that (but I did want to explain why the counterargument wasn’t landing for me).