Agreed. I’m sure many people on this Forum are better placed to answer this question than I am, but in general, your best bet is probably to figure out whether the program(s) and advisor(s) you’re applying to work with do research in technical alignment. If they do, mention your interest in alignment; if they don’t, don’t.
For example, at Berkeley, CHAI and Jacob Steinhardt’s group work on technical alignment; at Cambridge, David Krueger’s lab does. I believe there are a handful of others.
Or is it to be expected that it has become such a basic thing that it is just uselessly (or even negatively) impacting my application?
(Low confidence) I wouldn’t guess that being “basic” is the main issue with mentioning alignment. Bigger problems may include:
- Many ML academics are probably skeptical that useful work can be done in alignment, and/or find x-risk arguments kooky.
- I expect a negative correlation between interest in technical alignment and ML ability, conditional on applying to ML grad school.