I want to argue with the Litany of Gendlin here, but what work on s-risks really looks like in the end is writing open-source game-theory simulations and papers. All dry academic stuff that makes it easy to block out thoughts of the suffering itself. Just give it a try! (E.g., at a CLR fellowship.)
I don’t know if that’s the case, but s-risks can be reframed:
We want to unlock positive-sum trades for the flourishing of our descendants (biological or not).
We want to distribute the progress and welfare gains from AI equitably (i.e., not have sizable fractions of future beings suffer extremely).
Our economy only works thanks to trust in institutions and jurisprudence. The flourishing of the AI economy will require that new frameworks be developed that live up to the challenges of the new era!
These reframings should of course be followed up with a detailed explanation so as not to be dishonest. Their purpose is just to show that one can pivot one’s thinking about s-risks such that the suffering is not so front and center. This would, if anything, reduce my motivation to work on them, but that’s just me.
but what work on s-risks really looks like in the end is writing open-source game-theory simulations and papers
Research that involves game-theory simulations can be net-positive, but it also seems very dangerous and should not be done unilaterally, especially when it involves publishing papers and source code.
Heh, I actually agree. I’m currently wondering whether it’s net positive or negative if all this research, though unpublished, still ends up in the training data of at least one AI. It could help that AI avoid coordination failures. But there will be other AIs that haven’t read it, and if they’re too many, maybe it’s unhelpful or worse? It also probably depends a lot on what exactly the research says. Wdyt?
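To make the coordination-failure worry above a bit more concrete, here is a minimal, hypothetical sketch (in Python) of the kind of game-theory simulation the thread is talking about: a population of agents plays a pairwise coordination game, and only a fraction of them share a common convention, a stand-in for "having read the research". The payoff numbers, the informed fraction, and the strategy names are made up for illustration and don't come from any particular paper.

```python
# Hypothetical sketch: a population plays a pairwise coordination game.
# "Informed" agents all follow a published convention; the rest pick at random.

import random
import itertools

# Payoffs as (row player, column player). Matching conventions pays off;
# mismatches are costly for both sides.
PAYOFFS = {
    ("A", "A"): (3, 3),
    ("B", "B"): (3, 3),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
}

def make_population(n_agents: int, informed_fraction: float) -> list[str]:
    """Informed agents all play convention 'A'; the rest choose randomly."""
    population = []
    for i in range(n_agents):
        if i < int(n_agents * informed_fraction):
            population.append("A")                        # has "read the research"
        else:
            population.append(random.choice(["A", "B"]))  # never saw it
    return population

def average_payoff(population: list[str]) -> float:
    """Average payoff per agent over all pairwise matches."""
    total, count = 0, 0
    for s1, s2 in itertools.combinations(population, 2):
        p1, p2 = PAYOFFS[(s1, s2)]
        total += p1 + p2
        count += 2
    return total / count

if __name__ == "__main__":
    random.seed(0)
    for frac in (0.0, 0.5, 1.0):
        pop = make_population(50, frac)
        print(f"informed fraction {frac:.1f}: avg payoff {average_payoff(pop):.2f}")
```

Running it with informed fractions of 0.0, 0.5, and 1.0 just illustrates the qualitative point: the more agents share the convention, the fewer costly mismatches, while a partially informed population still leaves a lot of value on the table, which is roughly the "some AIs have read it, many haven't" situation described above.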
I am very unclear on why research that involves game theory simulations seems dangerous to you. I think I’m ignorant of something leading you to this conclusion. Would you be willing to explain your reasoning or send me a link to something so I can better understand where you’re coming from?