Many people believe that AI will be transformative, but choose not to work on it due to factors such as a (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.
There may be various other reasons why people choose to work on other areas, despite believing transformative AI is very likely, e.g. decision-theoretic or normative/meta-normative uncertainty.
Thanks for adding this! I definitely didn’t want to suggest the list of reasons was exhaustive or that the division between the two ‘camps’ is clear-cut.