Here are some big and common questions I’ve received from early-stage AI Safety focused people, with at least some knowledge of EA.
They probably don’t spend most of their time thinking about AIS, but it is their cause area of focus. I’m unsure if that meets the criteria you’re looking for, exactly.
What evidence would be needed for EA to deprioritise AI Safety as a cause area, at least relative to other x-risks?
What is the most impactful direction of research within AIS? (Common amongst people looking for their first project/opportunity. I usually point them at this LessWrong series as a starting point.)