I’d say alignment research is not going very well! There have been successes in areas that help products get to market (e.g. RLHF) and on problems of academic interest that leave key problems unsolved (e.g. adversarial robustness), but there are several “core problems” that have not seen much progress over the years.
A good overview of this topic: https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/