Thanks for this comment, it’s very inspiring!
One thought I had is that do-ocracy (as opposed to “someone will have got this covered, right?”) applies to other areas as well as EA. On a recent 80k podcast episode, Lennart Heim describes a similar dynamic within AI governance:
“at some point, I would discover that compute seems really important as an input to these AI systems — so maybe just understanding this seems useful for understanding the development of AI. And I really saw nobody working on this. So I was like, “I guess I must be wrong if nobody’s working on this. All these smart people, they’re on the ball, they got it,” right? But no, they’re not. If you don’t see something covered, my cold take is like, cool, maybe it’s actually not that impactful, maybe it’s not a good idea. But whatever: try to push it, get feedback, put it out there, talk to people and see if this is a useful thing to do.
You should, in general, expect there are more unsolved problems than solved problems, particularly in such a young field, and where we just need so many people to work on this. So yeah, if you have some ideas of how your niche can contribute, or certain things where you don’t think it’s impactful just because we haven’t covered it yet, that does not mean it’s not a good thing to go for. I encourage you to try it and put it out there.”
(The conversation continues in a helpful way beyond that point.)
Leopold Aschenbrenner points to a somewhat similar dynamic on the technical side in Nobody’s on the ball on AGI alignment:
“Observing from afar, it’s easy to think there’s an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you’ve even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you.
That’s what I used to think too, a couple of years ago. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this one!”