I think your heart is in the right place. But a lot of these concerns, and OpenAI’s efforts as well, are very premature. Good to be cautious, sure. Yet extrapolating too far ahead usually doesn’t produce useful results. Safety at every step of the way, and a response in proportion to the threat, will likely work better.
4 years doesn’t seem like a whole lot of time though. And no extrapolation is required to see that OpenAI’s intention is not to treat alignment as an “intersection between moral philosophy, moral psychology, and other behavioral sciences...”. From the perspective of someone who finds any of this ethically problematic, now would be a great time to talk about it.
In what sense are these efforts ‘premature’? AGI capabilities research is already far surpassing AI alignment research.