I think it’s quite unlikely that the current distribution will be set in place. And actually, looking purely at the current distribution, I’m not sure the MIRI approach is underrepresented. On the other hand, I think it’s likely that the current distribution will influence the future distribution, which is what’s relevant; I’m trying to push back a little against an expected trend towards ML-based approaches representing a very large share of the work.
Yes, you’re right that it’s not ‘set in place’. I meant more that, while funding and interest have grown significantly (OpenAI and DeepMind have in principle billions of dollars of spending power each and are now significantly interested in this topic), MIRI failed to reach its $800k minimal fundraising target this year, so I expect that the main approaches to AI being followed elsewhere will get the most attention in the future.
“OpenAI and DeepMind have in principle billions of dollars of spending power each and are now significantly interested in this topic.”
While I think there is a true point in this vicinity (it will be a lot easier to fund ML-based approaches, both at these organizations and at others), this seems to overstate the relevant resources and the effort going into safety topics. OpenAI has been funded with a billion dollars (though it might receive more funding later), and its annual spending must of course be lower. And both of these organizations have advancing AI as their primary aim, with limited effort on safety issues thus far.