Hey again, and sorry to spam you: I just commented on another piece of yours, but I’m really vibing with your content!
I’m really hoping we can get something like this; I’ve been calling it “existential compromise.”
I worry that it may be difficult to get humanity to agree that we should use even a small fraction of future resources optimally (see my research on this here), since I agree that whatever use is optimal is likely to be very weird[1].
I think a compromise like this, splitting things between optimal use and a more (trans-)human-friendly world, makes a lot of sense, and it could perhaps be achieved if we can get society onto a good viatopian path; I’ve described what I hope that could look like here.
I would also say that Will MacAskill’s “grand bargain”, and his suggestion that we should aim for compromise in order to achieve “the good de dicto”, reads to me like he is in fact arguing that we need to aim for what is actually best with a significant fraction of future resources.
Nick Bostrom has also argued (page 14) that it should be possible for humans to get almost all of what they want while most resources are optimized for “super-beneficiaries”.
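To see why that isn’t contradictory, here’s a toy calculation (my own illustration with made-up numbers, not from Bostrom’s paper): if human welfare has strongly diminishing returns in resources, then reserving even a tiny human share preserves almost all attainable human welfare.

```python
import math

# Toy model (illustrative assumptions, not Bostrom's): human welfare is
# logarithmic in resources, i.e. strongly diminishing returns.
total_resources = 1e30  # hypothetical stand-in for astronomical future resources
human_share = 0.01      # under the compromise, humans keep 1%

welfare_if_humans_got_everything = math.log(total_resources)
welfare_under_compromise = math.log(human_share * total_resources)

retained = welfare_under_compromise / welfare_if_humans_got_everything
print(f"Human welfare retained with a 1% share: {retained:.1%}")
# ~93.3%: humans keep almost all attainable welfare, while 99% of resources
# can be optimized for "super-beneficiaries".
```

Obviously this hinges entirely on the diminishing-returns assumption, but it makes the shape of the bargain concrete.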
Essentially, I think we need whatever process[2] is used to decide what we do with future resources to be extremely careful and thorough. I think we should target that process specifically, but it might also be good to more broadly try to differentially accelerate society’s wisdom (perhaps with AI).
Additionally, it could be really good to delay or slow things down so that we have more time to appreciate the existential stakes, giving us the time it takes to collectively mature and grow wiser.
[1] Recently, Will MacAskill’s research debated whether the best possible use of resources is something very extreme, calling this thesis “extremity of the best” (EOTB). I think this is very likely to be the case.
[2] I think this includes things like AGI/superintelligence governance, global constitutional conventions, and obviously any kind of long reflection or coherent extrapolated volition that we collectively decide to undergo.