Too sad. Some people think that maybe working on s-risks is unpopular because suffering is too emotionally draining to think about, so people prefer to ignore it.
Another version of this concern is that sad topics are not in vogue with the rich tech founders who bankroll our think tanks; that they’re selected to be the sort of people who are excited about incredible moonshots rather than prudent risk management. If these people hear about averting suffering, reducing risks, etc. too often from EA circles, they’ll become uninterested in EA-aligned thinking and think tanks.
I want to argue with the Litany of Gendlin here, but what work on s-risks really looks like in the end is writing open source game theory simulations and writing papers. All dry academic stuff that makes it easy to block out thoughts of suffering itself. Just give it a try! (E.g., at a CLR fellowship.)
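(To make that concrete, here is a minimal sketch of the kind of simulation such work might start from: an iterated prisoner's dilemma between two textbook strategies. The payoff matrix and the strategies are illustrative assumptions on my part, not taken from any actual CLR project.)

```python
# Toy iterated prisoner's dilemma -- the kind of small, self-contained
# simulation that game-theory-flavored s-risk work often starts from.
# Payoffs and strategies are illustrative, not from any particular project.

PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return cumulative payoffs for both players."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, always_defect))  # (9, 14): defection exploits round one
    print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
```

Nothing in a file like this is emotionally heavy, which is the point.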
I don’t know if that’s the case, but s-risks can be reframed:
We want to unlock positive-sum trades for the flourishing of our descendants (biological or not).
We want to distribute the progress and welfare gains from AI equitably (i.e., not have sizable fractions of future beings suffer extremely).
Our economy only works thanks to trust in institutions and jurisprudence. A flourishing AI economy will require new frameworks that live up to the challenges of the new era!
These reframings should of course be followed up with a detailed explanation so as not to be dishonest. Their purpose is just to show that one can pivot one’s thinking about s-risks such that the suffering is not so front and center. This would, if anything, reduce my motivation to work on them, but that’s just me.
but what work on s-risks really looks like in the end is writing open source game theory simulations and writing papers
Research that involves game theory simulations can be net-positive, but it also seems very dangerous and should not be done unilaterally, especially when it involves publishing papers and source code.
Heh, I actually agree. I’m currently wondering whether it’s net positive or negative if all this research, though unpublished, still ends up in the training data of at least one AI. It could help that AI avoid coordination failures. But there will be other AIs that haven’t read it, and if those are too many, maybe it’s unhelpful or even worse? Also it probably depends a lot on what exactly the research says. Wdyt?
I am very unclear on why research that involves game theory simulations seems dangerous to you. I think I’m ignorant of something leading you to this conclusion. Would you be willing to explain your reasoning or send me a link to something so I can better understand where you’re coming from?