Thanks for releasing this. I’m curious which sample is more interesting here: somewhat established alignment researchers (using publication of a paper as a proxy), or the general population of everyone who filled out the survey (including those with briefer prior engagement)?
I filled out this survey because it was signal-boosted in the AI Safety Camp Slack. There were questions at the time about AI Safety Camp’s funding viability, so I was strongly motivated to fill it out for the $40 donation. That said, I’m not sure I have engaged deeply enough with alignment research to be called an “alignment researcher.” Given that AISC was the most common donation destination, this may have skewed the sample overall.
Skimming through the visualization tool (very cool, thank you!), the personality questions didn’t seem to be affected, but the political questions do vary a bit. For instance, among respondents who have published a paper, around 55% support or strongly support pausing; among those who haven’t, around 75% do. Which population should this analysis rely on?
Good points, and thanks for the question. One thing to consider is that AISC publicly noted that they need more funding, which may be a significant part of the reason they were the most common donation recipient in the alignment survey. We also found that a small subset of the sample explicitly indicated they were involved with AISC (7 out of 124 participants). This is just to provide some additional context and a potential explanation for what you note in your comment.
As we note in the post, we were generally cautious about excluding data from the analysis, and opted instead to prioritize releasing the visualization/analysis tool so that people can sort and filter the data however they please. That way, we don’t have to choose between findings like the ones you report about pause support × publication status; both statistics you cite are interesting in their own right and should be considered by the community. We generally find, though, that the key reported results are robust to these sorts of filtering perturbations (let me know if you discover anything different!). Overall, ~80% of the alignment sample is currently receiving funding of some form to pursue this work, and ~75% have been doing it for more than a year, which is the general population we intended to sample.
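For anyone who downloads the raw data rather than using the tool's interface, a subgroup comparison like the one you ran is only a few lines of pandas. This is just an illustrative sketch: the file name and the `published_paper` / `pause_support` columns are assumptions for the example, not the dataset's actual schema.

```python
# Sketch of the robustness check discussed above: compare pause support
# between respondents who have and haven't published a paper.
# NOTE: "alignment_survey.csv", "published_paper", and "pause_support"
# are hypothetical names, not the real export schema.
import pandas as pd

df = pd.read_csv("alignment_survey.csv")

SUPPORT = {"Support", "Strongly support"}

def pause_support_rate(subset: pd.DataFrame) -> float:
    """Fraction of respondents who support or strongly support a pause."""
    return subset["pause_support"].isin(SUPPORT).mean()

published = df[df["published_paper"]]
unpublished = df[~df["published_paper"]]

print(f"Published:   {pause_support_rate(published):.0%}")
print(f"Unpublished: {pause_support_rate(unpublished):.0%}")
```

Swapping in other filters (funding status, years in the field, etc.) is the same pattern, which is the sense in which we mean the key results are robust to these perturbations.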