Does “mainly a subset” mean that a significant majority of responses coded this way were also coded as cause prio?
That’s right, as we note here:
The Cause Prioritization and Focus on AI categories were largely, but not entirely, overlapping. The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI were instead focused on insufficient attention being paid to other causes, primarily animals and GHD.
Specifically, of those who mentioned Cause Prioritization, around 68% were also coded as part of the AI/x-risk/longtermism category. That said, a large portion of the remainder mentioned “insufficient attention being paid to other causes, primarily animals and GHD” (which one may or may not think is just another side of the same coin). Conversely, around 8% of comments in the AI/x-risk/longtermism category were not also classified as Cause Prioritization (for example, a comment merely expressing annoyance about supporters of certain causes wouldn’t count as being about Cause Prioritization per se).
So over two-thirds of Cause Prioritization responses were explicitly about too much AI/x-risk/longtermism. A large part of the remainder is probably connected, as part of a broader ‘too much x-risk / too little non-x-risk’ concern. The overlap between the categories is probably larger than the raw numbers imply, but we had to rely on what people actually wrote in their comments, without making too many suppositions.
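For anyone wanting to see how these overlap figures are derived, here is a minimal sketch in Python. The response IDs and counts below are entirely hypothetical (the real coded data are not reproduced here); it just shows the set arithmetic behind “X% of one category was also coded as the other”.

```python
# Hypothetical example only: made-up response IDs, not the actual survey data.
# Each response can be coded into multiple (overlapping) categories.

cause_prioritization = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}      # coded "Cause Prioritization"
ai_xrisk_longtermism = {4, 5, 6, 7, 8, 9, 10, 11, 12, 13}   # coded "AI/x-risk/longtermism"

# Share of Cause Prioritization responses that were also coded AI/x-risk/longtermism
both = cause_prioritization & ai_xrisk_longtermism
pct_cp_also_ai = 100 * len(both) / len(cause_prioritization)

# Share of AI/x-risk/longtermism responses that were NOT also coded Cause Prioritization
pct_ai_not_cp = 100 * len(ai_xrisk_longtermism - cause_prioritization) / len(ai_xrisk_longtermism)

print(f"{pct_cp_also_ai:.0f}% of Cause Prioritization also coded AI/x-risk/longtermism")
print(f"{pct_ai_not_cp:.0f}% of AI/x-risk/longtermism not coded Cause Prioritization")
```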