What makes you think that the average person rates saving humanity highly enough to make alignment research worth doing rather than capabilities work? That seems like a pretty strong assumption, in my experience. Most people I know would definitely take a small-to-moderate amount of extra money over doing more valuable work for humanity. Building something could also feel more rewarding than safety work.
Maybe I’m missing something, what do you think are the assumptions that that statement makes?
Maybe my comment is off, since your article is specifically about AI alignment vs. capabilities research and I was taking the single sentence I quoted out of context. Will remove.