People interested in AI risk and this post might be interested in applying to the researcher or software engineer roles at the Alignment Research Center, a non-profit organization focused on theoretical research to align future machine learning systems with human interests.
This is a test by the EA Forum Team to gauge interest in job ads relevant to posts - give us feedback here.