Why isn’t the survey at least useful as count data? It allows me to considerably sharpen my lower bounds on things like total donations and the number of Less Wrong EAs.
I think count data is the much more useful kind to take away, even ignoring sampling-bias issues, because the data in the survey is over a year old: even if it were a representative snapshot of EA in early 2014, that snapshot would be of limited use, whereas most counts can safely be assumed to have only gone up since then.
I agree the survey can provide useful count data in the form of lower bounds. With a couple of exceptions, though, I didn’t find the sort of lower bounds the survey gives hugely surprising or informative; if others found them much more so, great!