We’ve had several researchers working on technical AI alignment for multiple years, and there’s still no consensus on a solution, although some might think certain systems are less risky than others, and we’ve made progress on those. Say 20 researchers working 20 hours a week, 50 weeks a year, for 5 years: that’s 20 * 20 * 50 * 5 = 100,000 hours of work. I think the number of researchers is much larger now. This also excludes a lot of the background studying, which would be largely duplicated across researchers.
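As a minimal sketch of this Fermi estimate (every parameter value is an illustrative assumption from the text above, not a measured figure):

```python
# Rough Fermi estimate of cumulative effort on technical AI alignment.
# All parameters are assumptions stated in the text, not measured data.
researchers = 20       # assumed number of researchers
hours_per_week = 20    # assumed research hours per week each
weeks_per_year = 50    # assumed working weeks per year
years = 5              # assumed duration of sustained work

total_hours = researchers * hours_per_week * weeks_per_year * years
print(f"{total_hours:,} person-hours")  # 100,000 person-hours
```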
Maybe AI alignment is not “one problem”, and it hasn’t been rigorously posed yet (it’s pre-paradigmatic), but those are also reasons to think it’s especially hard. Technical AI alignment has required building a new field of research, not just applying existing tools.