Focus on AI risks/x-risks/longtermism: Mainly a subset of the cause prioritization category, consisting of specific references to an overemphasis on AI risk and existential risks as a cause area, as well as longtermist thinking in the EA community.
Does “mainly a subset” mean that a significant majority of responses coded this way were also coded as cause prio?
I’m trying to understand whether the cause prio category not being much bigger than this one implies that “general concerns about cause prioritization and specific concerns such as an overemphasis on certain causes . . . and ideas” other than AI/x-risk/longtermism were fairly infrequent.
That’s right, as we note here:
The Cause Prioritization and Focus on AI categories were largely, but not entirely, overlapping. The responses within the Cause Prioritization category that did not explicitly refer to too much focus on AI were focused on insufficient attention being paid to other causes, primarily animals and GHD.
Specifically, of those who mentioned Cause Prioritization, around 68% were also coded as part of the AI/x-risk/longtermism category. That said, a large portion of the remainder mentioned “insufficient attention being paid to other causes, primarily animals and GHD” (which one may or may not think is just the other side of the same coin). Conversely, around 8% of comments in the AI/x-risk/longtermism category were not also classified as Cause Prioritization (for example, merely expressing annoyance about supporters of certain causes wouldn’t count as being about Cause Prioritization per se).
So over two-thirds of the Cause Prioritization category was explicitly about too much AI/x-risk/longtermism. A large part of the remainder is probably connected, as part of a broader ‘too much x-risk / too little non-x-risk’ concern. The overlap between the categories is probably larger than the raw numbers imply, but we had to rely on what people actually wrote in their comments, without making too many suppositions.
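To make the overlap arithmetic concrete, here is a minimal sketch of how figures like the 68% and 8% above can be computed when each response carries a set of category codes. The sample data, category labels, and the `overlap_share` helper are hypothetical illustrations, not the survey’s actual dataset or coding pipeline.

```python
# Hypothetical sketch: each coded response is a set of category labels
# applied by the coders. These four example responses are made up.
responses = [
    {"Cause Prioritization", "AI/x-risk/longtermism"},
    {"Cause Prioritization"},        # e.g. "too little attention to animals/GHD"
    {"AI/x-risk/longtermism"},       # e.g. annoyance at supporters of a cause
    {"Cause Prioritization", "AI/x-risk/longtermism"},
]

def overlap_share(codes: list[set[str]], base: str, other: str) -> float:
    """Share of responses coded `base` that were also coded `other`."""
    base_coded = [c for c in codes if base in c]
    if not base_coded:
        return 0.0
    return sum(other in c for c in base_coded) / len(base_coded)

# ~68% in the real data: Cause Prioritization responses also coded AI/x-risk
print(overlap_share(responses, "Cause Prioritization", "AI/x-risk/longtermism"))

# ~8% in the real data: AI/x-risk responses NOT also coded Cause Prioritization
print(1 - overlap_share(responses, "AI/x-risk/longtermism", "Cause Prioritization"))
```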