Most AI safety outreach should be done without presenting EA ideas or assuming EA frameworks
Agree in principle, clueless (and concerned) about the consequences.

From my superficial understanding of the current psychological research on EA (by Caviola and Althaus), many core EA ideas are unlikely to really resonate with the majority of individuals, while the case for building safer AI seems to have broader appeal. Nonetheless, I do worry that AI safety outreach that omits EA ideas is more likely to favor an ethics of survival over a welfarist ethic, and is unlikely to take s-risks or digital sentience into account. So it also seems possible that scaling outreach in that way could have very negative outcomes.