80,000 Hours has a bunch of ideas on their AI problem profile.
(I’m not trying to be facetious. The main purpose of this post seems to me to be motivational: “I’m just trying to puncture the complacency I feel like many people I encounter have.” Plus nudging existing alignment researchers toward more empirical work. [Edit: This post could also serve as concrete career advice if you’re someone like Sanjay, who read 80,000 Hours’ post on the number of alignment researchers and was left wondering “...so...is that basically enough, or...?” After reading this post, I’m assuming that leopold’s answer, at least, is “HELL NO.”])