It depends on what you mean. If you mean trying to help developing countries achieve SDG goals, then this won't work for a variety of reasons. The most straightforward is that using data-driven approaches to build statistical models is different enough from cutting-edge machine learning or alignment research that it will very likely be useless to that task; the vast majority of the benefit from such work comes from the standard benefits to people living in developing countries.
If you mean advocating for policies which subsidize good safety research, or advocating for interpretability research in ML models, then I think a better term would be "AI governance" or some other term which specifies that it's non-technical alignment work, focused on building institutions that are more likely to use solutions rather than on finding those solutions.
OK, that makes sense. Since this mostly benefits individuals, it's closer to "AI for impact" than to interpretability, though some areas can relate to interpretability, such as wellbeing optimization on social media. Yes, the level of thinking here is probably at the "governance" level rather than technical alignment (e.g., we're not quite at a point where a poorly coded drone could decide to advance selfish objectives instead of SDGs).