From my perspective, new and useful innovations in the past, especially in new fields, have come from people with a wide and deep education and skill set that takes years to build, and from fragmented research where no one is necessarily thinking about a very high-level terminal goal.
How sure are you that advice like “don’t pursue proxy goals” or “don’t spend years getting a degree” is useful for generating a productive field of AI alignment research, rather than just for producing people who are vaguely similar to existing researchers regarded as successful? Or who can engage with existing research but will struggle to step outside its box?
After all:
Many existing researchers who have made interesting and important contributions do have PhDs,
And it doesn’t seem like we’re anywhere close to “solving alignment”, so we don’t actually know that being able to engage with their research, absent a much broader understanding, is really that useful.