But what work on s-risks really looks like in the end is writing open-source game-theory simulations and writing papers.
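(For readers unfamiliar with what such a simulation looks like, here is a minimal sketch of the genre: an iterated prisoner's dilemma between two fixed strategies. The strategy names and payoff matrix are standard textbook choices, not anything from this thread.)

```python
# Minimal iterated prisoner's dilemma -- an illustrative example of the
# kind of open-source game-theory simulation mentioned above.
# Payoffs and strategies are standard textbook assumptions.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run repeated rounds; return each side's total payoff."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # -> (9, 14)
```

Research in this area typically scales this pattern up: tournaments of many strategies, noise, and agents that can read each other's source code, which is where the coordination-failure questions discussed below come in.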
Research that involves game theory simulations can be net-positive, but it also seems very dangerous, and should not be done unilaterally. Especially when it involves publishing papers and source code.
Heh, I actually agree. I’m currently wondering whether it’s net positive or negative if all this research, though unpublished, still ends up in the training data of at least one AI. It could help that AI avoid coordination failures. But there will be other AIs that haven’t read it, and there may be too many of them, so maybe it’s unhelpful or worse? It also probably depends a lot on what exactly the research says. Wdyt?
I am very unclear on why research that involves game theory simulations seems dangerous to you. I think I’m ignorant of something leading you to this conclusion. Would you be willing to explain your reasoning or send me a link to something so I can better understand where you’re coming from?