What you want to assess is the marginal cost-effectiveness of pamphlets, so I think the right approach is to include the best hook that you can which scales up at zero marginal cost. This probably excludes an online giving game. It should allow an online video, but ideally one that can be reused if distributing these elsewhere rather than too personalised to the target audience.
Really good point here; I was a fan myself of the online Giving Game, but that would be hard to scale with the program without securing a donor willing to finance it at a pretty large level.
If the hook is worth it, how expensive would it be to scale, and how hard would that be to finance?
I suppose if the initial pamphlet run is worth it, you could then A-B test it with a Giving Games pamphlet.
Hey Jonathon, this is a really great initiative! Giving What We Can is currently in the process of designing an experiment to test the effectiveness of our pamphlets. We were hoping to run it in London sometime in late January or early February. We should coordinate on our experiment design [I will post more details on the forum once we have firmed up details about the experiment design].
Cool, will the results be public?
Hey Tom, not sure about TLYCS’s study, but we plan to make ours public (and I imagine they will too!)
After trialling and fiddling to see what works, how much would 20 million copies of a pamphlet aimed at a general audience cost? The post office gives a lot to charity, and I can imagine that it wouldn't be impossible to persuade them to send this out as a one-off, free of charge, at least to the houses they're already posting mail to. Perhaps different language for different postcodes? Chelsea does not equal Bradford in terms of how appeals might work (religious backgrounds, education levels, size of household, disposable income, etc.)
That would be great! I’ll connect with you on Facebook and we can open up a line of communication there.
Also, have you got in touch with the good people at Charity Science?
Just took a look at their website, very cool stuff. You suggesting I email them and get their feedback on our plan?
Definitely. Some of the team at least are EA insiders and lurking on this very forum, and they’ll already know about TLYCS for sure.
We lurk amongggggg youuuuu.
Hi Jonathon, will the results be public?
That is definitely the intention. We are really hoping that the data we gather will be useful to other orgs considering a similar program, which was part of the motivation for posting up here ahead of time to get feedback.
There was an “Effective Altruism Brochure” thread on Facebook’s Effective Altruists group. Might be a good starting template to use for a handout: see here
Pretty sharp! If I had seen this before, I definitely would have passed it along to our designer as something to work from.
I noticed what might be a significant confounder in getting this estimate: you are likely to be particularly enthusiastic/eloquent about the whole thing, which is an extra input which will help the effectiveness of the pamphlets, but is very hard to budget properly on the ‘costs’ side.
To account for this, you would probably be better hiring someone else to distribute the pamphlets; probably someone without a deep existing commitment to TLYCS (but some contact should be fine—the question is whether you could get similarly good people when trying to scale it up).
But if you’re going to scale this, you’ll probably get TLYCS members or EAs to hand out the pamphlets anyway, right? I mean, we do kind of need more concrete volunteer tasks that we can give to student groups anyway.
So it’s best to get random EAs or TLYCS members to do it in the study, right?
I agree that if the people running the study are also distributing the pamphlets then you end up with bias.
I wasn’t sure what the scaling model was, but if there are enough plausible volunteers then this sounds right.
The general point is that you want to try and produce a typical case, not a special case.
This is a really good point. Yeah, the scaling model is to have local TLYCS chapters organizing volunteers to do this as a regular, rolling semester activity. I hadn’t really considered myself a confounding variable in this sense, because I’m definitely not a master pamphleteer. I’m an engineer by trade, and if this program takes off, I’ll eventually just be another volunteer in the LA area that helps hand out leaflets occasionally. We’re also thinking about splitting crews on Friday distribution days—so I would have a crew that hits up two universities, and there would be another volunteer crew hitting up two different campuses. Any thoughts on this?
Great idea!
Does the pamphleting have to be done on Fridays, or can it be done on pseudo-random days? (I’m thinking about distinguishing the signal from the pamphlets from, e.g., people spending more time on the Internet during weekends. Pseudo-random spikes might require fancier math to pick out, though, and of course you need to remember which days you handed out pamphlets!)
Can you ask people, when they take the pledge, how they found out about TLYCS? (This will provide an under-estimate, but it can be used to sanity-check other estimates). (Also it’s a bit ambiguous if someone had e.g. vaguely heard of TLYCS or Singer before, but pamphleting prompted them to actually take the pledge)
There’s a typo in your text (“require’s”) - make sure you get the pamphlets proof-read :)
Do you know in advance what you expect, in terms of:
How many pamphlets you will distribute
What the effect will be?
(Last I heard, EA was using predictionbazaar.com and predictionbook.com as its prediction markets)
Statistically, the situation you don’t want to get into is leafleting every Friday so there’s no Fridays left to provide your control condition.
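The pseudo-random scheduling discussed above can be sketched in a few lines. This is only an illustration: the twelve-week semester, the Mon/Wed/Fri candidate days, and the 50/50 treatment split are made-up parameters, not anything from the actual study plan.

```python
import random

# Hypothetical schedule: randomly assign distribution days across a
# semester, holding out the remaining days as the control condition.
random.seed(42)  # fix the seed so the schedule is reproducible

semester_days = [f"week{w}-{d}" for w in range(1, 13)
                 for d in ("Mon", "Wed", "Fri")]

# Leaflet on half of the candidate days; the other half are controls.
treatment_days = set(random.sample(semester_days, len(semester_days) // 2))
control_days = [d for d in semester_days if d not in treatment_days]

print(len(treatment_days), len(control_days))  # prints "18 18"
```

Logging which days were treatment days (as noted above) is essential, and the held-out control days are what let you separate the pamphlet signal from ordinary weekday effects.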
Oh yeah, good point.
Some really good points here. I never considered that handing out the leaflets only on Fridays might skew the results (I just happen to have every other Friday off, thanks California); I’ll have to think that through. And it would definitely be a good idea to have a “Where did you hear about the pledge?” question on the pledge site; I’ll check into that as well.
I’m not sure what our initial run on the pamphlets will be, but I’m thinking in the 5K-15K range. I haven’t done any analysis to figure out how many we’d need to hand out to get good statistics; not even really sure how to go about doing that, to be honest. And absolutely no idea what to expect in terms of a response rate. Any thoughts on how to estimate that?
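For a rough feel for the numbers, here is a back-of-the-envelope two-proportion power calculation using the usual normal approximation. The 0.1% baseline pledge rate and 0.2% treatment rate are purely illustrative assumptions, not TLYCS data.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm to detect a
    difference between proportions p1 and p2 (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative only: 0.1% pledge rate among controls vs. 0.2% among
# people handed a pamphlet.
n = sample_size_per_group(0.001, 0.002)
print(n)  # tens of thousands of people per arm for a rare outcome
```

With rates this low, the required sample can dwarf a 5K-15K print run, which is exactly the kind of thing a statistician will catch before you start handing anything out.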
Please talk to a real statistician if you’re designing an experiment! Random Internet people picking your design apart is actually pretty good as far as review goes (if they’re the right Internet people), but actual statisticians are orders of magnitude better. Experiment design is very tricky, and good statisticians are aware of both lots of tools to make your job easier and lots of pitfalls for you to avoid. To quote Ronald Fisher: “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.”
Statistics Without Borders may be a good place to start.
I figure most people don’t know a statistician (I don’t) but there are plenty of people in LessWrong discussion who know how to do a power calculation so it might be good to start there (or just to dig a bit deeper here).
It really won’t help address the problem I’m talking about at all, which is unknown design flaws/statistical techniques/study design tools. Once you’ve figured out that you have a problem like “how should I power my study?”, smart people plus Internet is fine; I’m worried about the other 10 issues we haven’t noticed yet. That’s the kind of thing that statisticians are useful for.
Fortunately, it turns out you can still talk to statisticians even if you don’t know them personally. If you’re spending money on your study, you could even go so far as to hire a consultant. I also know statisticians and would be happy to refer Jonathon.
Makes sense. Also, if the EA survey is redone, that might be an even more important place to have a statistician.
As someone who did a lot of study design as an undergraduate, currently works as a “data scientist”, and considers himself smart, I can confirm that I still make approximately 10 huge mistakes every time I run a study.
Yeah, if you give me the contact info of a statistician that you recommend that would be great. I don’t know if we have the budget for it, but I would definitely reach out.
I’m checking for people who would be interested in doing it pro bono. If that doesn’t work, I’m 99% sure you can find some people to fund a couple consultant-hours.
Not to put too fine a point on it, but if the alternative is TLYCS designing the experiment themselves, this is pretty much like running a charity that spends nothing on overhead. It looks good on paper, but in reality, that last bit of money is a huge effectiveness multiplier.
I’d consider funding this if it’s “worth it” and not too much money. I’m sure others would as well.
I’m fairly surprised the EA movement doesn’t have official statisticians. The EAA movement has a lot of people claiming to be official statisticians.