My main objection is to number 5: Wisdom and intelligence interventions are promising enough to justify significant work in prioritization.
The objection is a combination of:
--Changing societal values/culture/habits is hard. Society is big, and many powerful groups are already trying to change it in various ways.
--When you try, people will often interpret that as a threatening political move and push back.
--We don’t have much time left.
Overall I still think this is promising; I just thought I'd say what the main crux is for me.
Some of the interventions don't have to do with changing societal values/culture/habits, though; e.g. those falling under hardware/medical.
But maybe you think they’ll take time, and that we don’t have enough time to work on them either.
+1 for Stefan’s point.
On "don't have much time left": this hinges on a specific, quantitative question. If you think that AGI will happen in 5 years, I'd agree that advancing wisdom and intelligence probably isn't particularly useful. However, if AGI turns out to be 30-100+ years away, then it really could be. Even if there's a <30% chance that AGI is 30+ years away, that's considerable.
On very short time frames, "education about AI safety" seems urgent, though it fits more tenuously under "wisdom and intelligence".