You’ve given lots of reasons here, and cited posts which also give several reasons. However, I feel like this hasn’t stated the real & genuine crux—which is that you are sceptical that AI safety is an important area to work on.
Would you agree this is a fair summary of your perspective?
As shown in this table, 0% of CE staff (including me) identify AI as their top cause area. I think across the team people's reasons are varied but cluster around something close to epistemic scepticism. My personal perspective is also in line with that.
I really want to get to the bottom of this, because it seems like the dominant consideration here (i.e. the crux).
Not a top cause area ≠ Not important
At the risk of being too direct: do you, as an individual, believe AI safety is an important cause area for EAs to be working on?