I suspect that things like the Alignment Newsletter are causing AI safety researchers to understand and engage with each other’s work more; this seems good.
This is the goal, but it’s unclear that it’s having much of an effect. I feel like I relatively often have conversations with AI safety researchers where I mention something I highlighted in the newsletter, and the other person hasn’t heard of it, or has a very superficial / wrong understanding of it (one that I think would be corrected by reading just the summary in the newsletter).
This is very anecdotal, and subject to a selection effect: even when I talk to people who do know the paper I'm talking about because of the newsletter, I probably wouldn't notice / learn that fact.
(In contrast, junior researchers are often more informed than I would expect, at least about the landscape, even if not the underlying reasons / arguments.)