This does not bode well, in my view. One of my personal concerns about the usefulness of AI safety technical research is the extent to which the fruits of such research would actually be put to use by the frontier labs in practice. Even if some hypothetical researcher or lab figures out a solution to the alignment problem, that doesn't mean the eventual creators of AGI will care enough to actually use it if it, for instance, comes with an alignment tax that slows down their capabilities work and cuts into profits, or, worse, costs them first-mover advantage to a less scrupulous competitor.
OpenAI seems like the front-runner right now, and the fact that they had a substantial Alignment Team with significant compute resources devoted to it at least made it seem like they might care enough to adopt whatever effective alignment techniques do get developed and to ensure that things go well. The gutting of the Alignment Team does not look good in this regard.