How would one tell the difference between extra members who came from “capitalising on the media attention around Effective Altruism over the summer”, and 10% donors who were simply rustled up by this attention?
Joining members are asked what they likely would have given had they not joined. This is quite a noisy process, since people’s estimates of what they would have given will often be inaccurate, but I think it works as a first-order correction (and it’s hard to see how to do better). This is factored into the impact assessment.
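The first-order correction described above amounts to crediting only the giving above a member’s self-reported baseline. A minimal sketch of that arithmetic, assuming hypothetical function and variable names (the figures are illustrative, not GWWC data):

```python
def counterfactual_adjusted_impact(pledged_amount, estimated_baseline_giving):
    """Credit only donations above what the member reports they would
    have given anyway (a first-order counterfactual correction).
    Clamped at zero so a high baseline never produces negative impact."""
    return max(pledged_amount - estimated_baseline_giving, 0.0)

# A member pledging $3,000/year who says they would have given $500 anyway
# is credited with $2,500 of counterfactual impact.
extra = counterfactual_adjusted_impact(3000.0, 500.0)
```

The clamp at zero is a judgment call: it discards cases where someone expects to give less after pledging, which the noisy self-reports could produce spuriously.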
Oh, I meant: how do you distinguish between people who signed the pledge after seeing GWWC mentioned in the media coverage or the book (or elsewhere), and people who pledged as a result of the EA-donor-funded efforts to capitalise on that coverage? For the question you answered, I agree: I can’t think of any better (or other) data to collect about individual pledgers, and the only thing to compare it against is an overall estimate of the extra donations a pledge could lead to.
Sorry for the misunderstanding! Yeah, it looks kind of hard to distinguish. Maybe someone has a clever method. The only thing I can immediately think of is an RCT on follow-up to different bits of media coverage, but I expect this would be super-messy to run and might not produce great data.