He more recently mentioned that he noticed “people continuously vanishing higher into the tower,” that is, focusing on more abstract, harder-to-evaluate issues, and that very few people have done the opposite. One commenter, Ben Weinstein-Raun, suggested several reasons, among them that longer-loop work is more visible and higher status.
I disagree that longer-loop work is more visible and higher status; I think the opposite is true. In AI, agent foundations researchers are less visible and lower status than prosaic AI alignment researchers, who are less visible and lower status than capabilities researchers. In my own life, I got a huge boost of status & visibility when I did less agent foundationsy stuff and more forecasting stuff (timelines, takeoff speeds, predicting ML benchmarks, etc.).
If you are working with fast feedback loops, you can make things and then show people the things. If you’re working with slow feedback loops, you have nothing to show and people don’t really know what you’re doing. The former intuitively seems much better if your goal is status-seeking (which is somewhat my goal in practice, even if ideally it shouldn’t be).
Yes, in AI safety that seems correct—it’s still probably more interesting to do more theoretical work, but it is less prestigious or visible.
I don’t even know that it’s more interesting. What’s interesting differs from person to person, but if I’m honest with myself, I probably find timelines forecasting more interesting than decision theory, even though I find decision theory pretty damn interesting.
If it’s any indication, I’m plenty skeptical of the value of marginal timelines work: it’s a very busy area, one that probably needs less work than it gets, and I suspect that’s partly because it has lots of visibility.
Fair enough!