One thing I’d want to do is to create an organisation that builds networks with as many AI research communities as possible, monitors AI research as comprehensively as possible, and assesses the risk posed by different lines of research.
Some major challenges:
a lot of labs want to keep substantial parts of their work secret, even more so for e.g. military research
encouraging sharing of more knowledge might inadvertently spread knowledge of how to do risky stuff
even if we know someone is doing something risky, it might be hard to get them to change
it might be hard to see in advance which lines of research are risky
I think networking + monitoring + risk assessing together can help with some of these challenges. Risk assessing + monitoring means we have a better idea of what we do and don’t need to know, which helps with the first and second issues. Also, if we have good relationships with labs, we are probably better placed to come up with proposals that reduce risk without hindering lab goals too much.
Networking might also help us learn where relatively unmonitored research is taking place, even if we can’t find out much more about it.
It would still be quite hard to have a big effect, but I think even partial knowledge of who is taking risks is pretty valuable in your scenario.