Thanks, Geoffrey, I appreciate the response.

It was definitely not my goal to describe how experienced people might “unlearn what they have learned”, but I’m not sure that much of the advice changes for them.
“Unlearning” seems instrumentally useful if it makes it easier for you to contribute and think well, but drawing on your previous experience might also be valuable. For example, REFINE thinks that conceptual research is not varied enough and is looking for people with diverse backgrounds.
For example, beyond young adults often starting with the same few bad ideas about AI alignment, established researchers from particular fields might start with their own distinctive bad ideas, and those could be quite field-dependent: psych professors like me might have different failure modes in learning about AI safety than economics professors or moral philosophy professors.
This is a good example, and I think I generally haven’t addressed that failure mode in this article. I’m not aware of any resources for mid- or late-career professionals transitioning into alignment, but I will comment here if I hear of such a resource, or someone else might suggest a link.