If your claim is that ‘applying AI models to economically valuable tasks seems dangerous, i.e. the AIs themselves could be dangerous’, then I agree. A scrappy applications company might be more likely to end the world than OpenAI/DeepMind… it seems like it would be good, then, if more of these companies were run by safety-conscious people.
A separate claim is the one about capabilities externalities. I basically agree that AI startups will have capabilities externalities, even if I don’t expect them to be very large. The question, then, is how much expected money we would be trading for expected time, and what the relative value between these two currencies is.