I think it’s a really bad idea to try to slow down AI research. Besides antagonizing almost all of the AI community and making them take AI safety research less seriously, consider what would happen on the off chance that you actually succeeded.
There are a lot of AI firms, so if you’re only able to convince some of them to slow down, the ones that don’t slow down will be the ones that care less about AI safety. It’s a much better idea to get the firms that care about AI safety to focus on AI safety than to have them cede their cutting-edge research position to others who care less.
I think creating more Stuart Russells is just about the best thing that can be done for AI safety. What sets him apart from others who care about AI safety is that he’s a prestigious CS professor, while many who focus on AI safety, even if they have good ideas, aren’t affiliated with a well-known and well-respected institution. Even when Nick Bostrom or Stephen Hawking talk about AI, they’re often dismissed by people who say “well sure, they’re smart, but they’re not computer scientists, so what do they know?”
I’m actually a little surprised that they seemed so resistant to your idea. It seems to me that there is so much noise on this topic that the marginal harm from creating a bit more is basically zero, and if there’s a chance you could cut through the noise and give a platform to people who know what they’re talking about, that would be good.