I think it’s still good for some people to work on alignment research. The future is hard to predict, we can’t totally rule out a string of technical breakthroughs, and the overall option space looks gloomy enough (at least from my perspective) that we should be pursuing multiple options in parallel rather than putting all our eggs in one basket.
That said, I think “alignment research pans out to the level of letting us safely wield vastly superhuman AGI in the near future” is sufficiently unlikely that we definitely shouldn’t be predicating our plans on it working out. AFAICT Leopold’s proposal is that we just lie down and die in the worlds where we can’t align vastly superhuman AI, in exchange for doing better in the worlds where we can align it; that seems extremely reckless and backwards to me, throwing away higher-probability success worlds in exchange for more niche and unlikely success worlds.
I also think alignment researchers, as a group, have so far mainly had the effect of shortening timelines. I want alignment research to happen, but not at the cost of reducing our hope in the worlds where alignment doesn’t pan out; thus far, a lot of work labeled “alignment” has either seemed to accelerate the field toward AGI, or seemed to provide justification/cover for increasing the heat and competitiveness of the field, which seems pretty counterproductive to me.
Yep. 100% agree!