Holden Karnofsky has described his current thoughts on these topics (“How to help with longtermism/AI”) on the 80,000 Hours podcast – and there are some important changes:

And then it’s like, what do you do to help? When I was writing my blog post series, “The most important century,” I freely admitted the lamest part was, so what do I do? I had this blog post called “Call to vigilance” instead of “Call to action” — because I was like, I don’t have any actions for you. You can follow the news and wait for something to happen, wait for something to do.
I think people got used to that. People in the AI safety community got used to the idea that the thing you do in AI safety is you either work on AI alignment — which at that time means you theorise, you try to be very conceptual; you don’t actually have AIs that are capable enough to be interesting in any way, so you’re solving a lot of theoretical problems, you’re coming up with research agendas someone could pursue, you’re torturously creating experiments that might sort of tell you something, but it’s just almost all conceptual work — or you’re raising awareness, or you’re community building, or you’re message spreading.
These are kind of the things you can do. In order to do them, you have to have a high tolerance for just going around doing stuff, and you don’t know if it’s working. You have to be kind of self-driven.
He goes on to clarify that today, he sees many ways to contribute that are much more straightforward:
[...]. So that’s the state we’ve been in for a long time, and I think a lot of people are really used to that, and they’re still assuming it’s that way. But it’s not that way. I think now if you work in AI, you can do a lot of work that looks much more like: you have a thing you’re trying to do, you have a boss, you’re at an organisation, the organisation is supporting the thing you’re trying to do, you’re going to try and do it. If it works, you’ll know it worked. If it doesn’t work, you’ll know it didn’t work. And you’re not just measuring success in whether you convinced other people to agree with you; you’re measuring success in whether you got some technical measure to work or something like that.
Then, from ~1:20:00, the conversation continues with “Great things to do in technical AI safety”.