I appreciate that you are putting out numbers and explaining the current research landscape, but I am missing clear actions.
The closest you come to proposing them is here:
We need a concerted effort that matches the gravity of the challenge. The best ML researchers in the world should be working on this! There should be billion-dollar, large-scale efforts with the scale and ambition of Operation Warp Speed or the moon landing or even OpenAI’s GPT-4 team itself working on this problem.[17] Right now, there’s too much fretting, too much idle talk, and way too little “let’s roll up our sleeves and actually solve this problem.”
But that still isn’t an action plan. Say you convince me, most of the EA Forum, and half of all university-educated professionals in your city that this is a big deal. What, concretely, should we do now?
I think the suggestion of ELK work along the lines of Collin Burns et al. counts as a concrete step that alignment researchers could take.
There may be other avenues of influence available to those who are not alignment researchers, which Leopold wasn’t precise about. E.g. those working in the financial system may be able to use their influence to encourage more alignment work.
80,000 Hours has a bunch of ideas on their AI problem profile.
(I’m not trying to be facetious. The main purpose of this post, to me, seems to be motivational: “I’m just trying to puncture the complacency I feel like many people I encounter have.” Plus nudging existing alignment researchers towards more empirical work. [Edit: This post could also be concrete career advice if you’re someone like Sanjay who read 80,000 Hours’ post on the number of alignment researchers and was left wondering “...so...is that basically enough, or...?” After reading this post, I’m assuming that Leopold’s answer at least is “HELL NO.”])