Nice, yeah! I wouldn’t have expected a statistically significant difference between a mean of 5.7 and 5.4 with those standard errors, but it’s nice to see it here.
I considered doing a statistical test, and then spent some time googling how to do something like a “3-paired” ANOVA on data that looks like (“s” is subject, “r” is reading):
[s1 r1 “like”] [s1 r1 “agreement”] [s1 r1 “informative”]
[s2 r1 “like”] [s2 r1 “agreement”] [s2 r1 “informative”]
…
[s28 r1 “like”] [s28 r1 “agreement”] [s28 r1 “informative”]
[s1 r2 “like”] [s1 r2 “agreement”] [s1 r2 “informative”]
[s2 r2 “like”] [s2 r2 “agreement”] [s2 r2 “informative”]
...
because I’d like to do an ANOVA on the raw scores, rather than the means. I did not resolve my confusion about what to do with the 3-paired data (I guess you could lump each subject’s data into one column, or do it separately by “like”, “agreement”, and “informative”, but I’m interested in how good each of the readings is, summed across the three metrics). I then gave up and just presented the summary statistics. (You can extract the raw scores from the Appendix if you put some work into it though, or I could pass along the raw scores, or you could tell me how to do this sort of analysis in Python if you wanted me to do it!)
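In case it helps, I think what I was fumbling toward is roughly the sketch below: collapse the three metrics into one score per subject per reading, then run a repeated-measures ANOVA (statsmodels has AnovaRM for this). The long-format layout, the column names, and the file name are placeholders on my part, not how the data is actually stored:

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per (subject, reading, metric) with the raw score;
# the file and column names are hypothetical.
df = pd.read_csv("raw_scores.csv")

# Collapse the like/agreement/informative scores into a single
# score per subject per reading.
collapsed = df.groupby(["subject", "reading"], as_index=False)["score"].mean()

# One-way repeated-measures ANOVA with "reading" as the within-subject factor.
result = AnovaRM(collapsed, depvar="score", subject="subject", within=["reading"]).fit()
print(result.anova_table)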
When I look at these tables, I’m also usually squinting at the median rather than the mean, though I look at both. You can see the distributions in the Appendix, which I like even better. But point taken about how it’d be nice to have stats.
Ah, thanks for the suggestion! To be honest, I only have basic knowledge of stats, so I do not know how to do the more complex analysis you described. My (quite possibly flawed) intuition for analysing all questions would be (a rough code sketch follows after these steps):
Determine, for each subject, “overall score” = (“score of question 1” + “score of question 2” + “score of question 3”)/3.
If some subjects did not answer all 3 questions, “overall score” = “sum of the scores of the answered questions”/“number of answered questions”.
Calculate the mean and standard error for each of the AI safety materials.
Repeat the calculation of the p-value as I illustrated above for the pairs of AI safety materials (best, 2nd best), (2nd best, 3rd best), …, and (2nd worst, worst), or just analyse all possible pairs.
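In code, I imagine it would look roughly like the sketch below (pandas/SciPy). The long-format layout, the column names, and the paired t-test standing in for the p-value calculation I illustrated above are all assumptions on my part:

import pandas as pd
from itertools import combinations
from scipy import stats

# One row per (subject, material, question) with the raw score;
# the file and column names are hypothetical.
df = pd.read_csv("raw_scores.csv")

# Steps 1-2: per-subject overall score = mean over the questions that
# subject actually answered (the mean simply skips missing questions).
overall = df.groupby(["subject", "material"], as_index=False)["score"].mean()

# Step 3: mean and standard error for each AI safety material.
summary = overall.groupby("material")["score"].agg(["mean", "sem"])
print(summary.sort_values("mean", ascending=False))

# Step 4: p-value for each pair of materials (a paired t-test here, since
# the same subjects rated every material; substitute whichever test fits).
wide = overall.pivot(index="subject", columns="material", values="score")
for a, b in combinations(wide.columns, 2):
    pair = wide[[a, b]].dropna()
    t, p = stats.ttest_rel(pair[a], pair[b])
    print(f"{a} vs {b}: p = {p:.3f}")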