As a side note, I think there's an analogous selection bias within longtermism, where many of our best and brightest people end up doing technical alignment, making it harder to have clear thinking about other longtermist issues (including issues directly related to making the development of transformative AI go well, like understanding the AI strategic landscape and AI safety recruitment strategy).