I don’t think it’s about mismatched expectations so much as that my assessment of how much this piece is likely to promote effective giving differs from yours.
If your intention was to promote consideration of impact, or recipient-focussed donation behaviour, then I think this article misses the mark. Sure, the information might be there 15 paragraphs deep in one of a dozen links, but it’s not conveyed to me, even as an interested reader versed in effective altruism ideas.
If instead your article was intended to promote Charity Navigator-style research in the hope of nudging people towards the idea of impactful giving (which is what I take you to mean by saying that the flattening out of the message is “a bug not a feature”), then I respectfully disagree that such an approach will, in expectation, increase effective giving.
I think there’s a miscommunication somewhere. In the sixth paragraph of the article, I stated that people should “take the perspective of a savvy investor and research donation options to make sure you do the most good per dollar donated.” To me, that’s the essence of EA. Would you disagree?
If so, I guess we will have to agree to disagree then.
Fortunately, there is an easy way of figuring out whose opinion is closer to the mark. One of the metrics Intentional Insights tracks is whether people clicked from our article to the websites of the direct-action charities described in the piece. If your opinion is correct, then we will not see clicks, as people will not be persuaded that EA-style effective giving is a worthwhile area. If my take is correct, then there will be some clicks, since people will be persuaded of the value of AMF and GiveDirectly. I’ll check with AMF and GiveDirectly in a couple of weeks to see what the click-through numbers were, and we’ll find out. Stay tuned!
Another piece of evidence that EA is a key take-away from the piece is how The Chronicle of Philanthropy described it: https://philanthropy.com/article/Opinion-Wounded-Warrior-Flap/235715
I agree that maximising the good done with every effort is the essence of EA; I disagree that the wording and structure of your piece communicated that, even with those words included.
There’s a tendency for people who do a lot of academic writing to assume that every sub-clause and every word will be carefully read and weighed by their readers. We agonise for months over a manuscript, carefully selecting modifiers to convey the correct level of certainty in our conclusions or the strength of a hypothesis. In reality, even the average academic reader will look at the title, scan the abstract, and possibly look at a figure and the concluding sentences.
Communicating complex ideas in a short piece is really hard to do, and the less concrete the link between the message you want to convey and the topic you are shoehorning it into, the harder it is to avoid distorting that message. You could seek feedback from people who aren’t already aware of what you’re trying to communicate, but that’s likely to be very hard to do in the time frame needed for a current news story.
If you want a measure of success, I think you need a much better endpoint than website views, which are a) subject to a wide range of confounders and b) only a proxy for the thing you are trying to achieve.
I think we might have different perspectives about academic readers.

As for needing a much better endpoint than website views: this seems a bit contradictory to your previous statement about the average reader. I propose that if someone actually takes the time to click through to GiveWell etc., it indicates a measure of interest and a willingness to pay the resources of attention and time.
In fact, InIn measures its effectiveness in marketing EA-themed ideas about effective giving to a broad audience by its success in drawing the awareness of non-EA members to: EA ideas, such as researching charities, comparing their impact before donating, and expanding their circles of compassion; EA meta-charities that provide evaluations of effective charities; and, finally, effective direct-action charities themselves. In doing so, InIn works on a relatively neglected area of the EA nonprofit sales funnel: the key first stage of making potential donors aware of the benefits of EA ideas and charities. We then hand donors off to EA meta-charities and direct-action charities for the later stages of the funnel, which they have more capacity and expertise to handle.

The metrics we use here are the number of people exposed to our content; the number of those who then click from our content to the websites of EA meta-charities and direct-action charities; the number who then engage actively with the nonprofit by signing up to its newsletter; and, finally, the number who donate. Naturally, each step is progressively harder to track, and the EA charities themselves are responsible for the last two steps.
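To make the funnel arithmetic concrete, here is a minimal sketch of how the stage-to-stage conversion rates in such a funnel could be computed. Every count in it is made up purely for illustration; these are not our actual figures.

```python
# Awareness-to-donation funnel stages. All counts below are hypothetical,
# chosen only to illustrate the metric structure described above.
funnel = [
    ("exposed to content", 100_000),
    ("clicked through to a charity site", 2_000),
    ("signed up to a newsletter", 200),
    ("donated", 50),
]

# Conversion rate of each stage relative to the stage before it.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.1%}")
```

Each successive rate is computed on a smaller base, which is part of why the later steps are progressively harder to track.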
The EA charities are grateful for the hard work we do, and applaud our efforts. Hopefully, that gives you some more context. My apologies for not sharing this context earlier :-)
We may have different perspectives on academic readers: I’m a relatively junior medical researcher. Three of my papers have over 100 citations. The view I expressed here is the one shared by my Principal Investigator (a professor at Oxford University who leads a multi-million pound international research consortium, and has an extensive history of publishing in Nature and Science). Humanities and medical research are likely to have some differences, but when fewer than 20% of humanities papers are thought to be cited at all, I’m not sure that supports humanities papers being read more extensively.
I don’t see any contradiction between saying:

a) I believe that, at the level a general reader will engage with it, this piece distorts the ideas of effective giving towards the damaging ‘good charities have low overhead’ meme, and will not in expectation increase donations to EA charities;

and b) in order to show the contrary, you need a more concrete endpoint than website clicks.
No matter how many steps there are between an action and an endpoint, the only robust way to show an association between them is to measure the endpoint you care about: surrogate markers are likely to lead you astray. For instance, I don’t give much weight to a study showing drug Y lowers serum protein X, even though high levels of serum protein X are associated with disease Z. To prove the drug worthwhile, the company needs to actually show that people on drug Y have lower rates of disease Z, or better yet, fewer deaths from disease Z. Drug companies complain about and manipulate these principles all the time, because solid endpoints take more time, effort and money to measure, and their manoeuvring around them has cost lives. (See the diabetic medication glipizide: short-term studies showed it decreased blood sugar in diabetics, an outcome thought to improve their mortality, but longer-term data showed that it makes people taking it more likely to die.)
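To see this failure mode in miniature, here is a toy simulation; every parameter is invented purely for illustration and has nothing to do with real pharmacology. The simulated drug reliably improves the surrogate marker while worsening the true endpoint, so judging it by the surrogate alone gets the answer exactly backwards.

```python
import random

random.seed(0)

# Toy model with invented parameters: the drug lowers the surrogate
# marker, but independently raises the true endpoint (the death rate).
def trial(drug: bool, n: int = 10_000):
    deaths, marker_total = 0, 0.0
    for _ in range(n):
        marker = random.gauss(10, 2) - (3 if drug else 0)  # improves surrogate
        p_death = 0.05 + (0.02 if drug else 0)             # worsens endpoint
        deaths += random.random() < p_death
        marker_total += marker
    return marker_total / n, deaths / n

for drug in (False, True):
    marker, mortality = trial(drug)
    print(f"drug={drug}: mean marker {marker:.1f}, mortality {mortality:.1%}")
```

Measured by the marker, the drug looks like a success; measured by mortality, it is a harm. That is the glipizide pattern in two dozen lines.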
Of course you’re free to measure your work however you choose: I would personally be unconvinced by website traffic, and if you are aiming to convince evidence-minded people of your success, I think you’d do well to consider firm endpoints, or at least a methodology that can deal with confounding (though that is definitely inferior to not being confounded in the first place).
At any rate, that’s enough on this from me.
Thanks for sharing your thoughts! Maybe there’s a difference between our academic backgrounds. I come from the perspective of a historian of science at the intersection of psychology, neuroscience, behavioral economics, philosophy, and other disciplines. I have a couple of monographs out, and over 20 peer-reviewed articles (over 60 editor-reviewed pieces). Since my field intersects both social sciences and humanities, I speak from that background.
Regarding website visitors, it’s important to measure what is under our organization’s control. We can control what we do, namely getting visitors to the websites of effective charities. We know that getting such visitors there is crucial to those visitors then converting into donors, and we have statistics showing that. For instance, 12% of the visitors who reach The Life You Can Save website from InIn articles then become donors to effective charities through the TLYCS website.
However, we can’t control that conversion, and it would not be helpful to assess it on a systematic basis beyond that base rate. The point of constant measurement is to show us what we can do better, and the only thing we can control is whether we get people to the TLYCS website or to other charities. Does that make sense?
Not really, I’m afraid. That reasoning seems analogous to the makers of glipizide saying: we know lowering blood sugar in diabetics decreases deaths (we do indeed have data showing that), and our drug lowers blood sugar, so we don’t need to monitor its effect on deaths. Your model can be faulty, your base statistics can be wrong, and you can have unintended consequences. Glipizide does lower blood sugar, but if you take it as a diabetic, you are more likely to die than if you don’t.
It would also be like the Against Malaria Foundation neglecting to measure malaria rates in the areas where they work. AMF only distributes nets, but the number of people sleeping under them is not what they ultimately care about, nor do they restrict their monitoring to it: bed-net distribution and use only matter if they translate into decreased morbidity and mortality from malaria.
If you are sharing information because you want to increase the flow of money to effective charities, and you don’t measure that flow, then I think you are hobbling your ability to ever demonstrate an impact.
Bernadette, I’m confused. I did say we measured the rate of conversion of the people we draw to the websites of charity evaluators like TLYCS. What I am describing is what we take credit for, which is what we can control.
I want to be honest in saying that we can’t take full credit for what people do once they hit the TLYCS website. Taking credit for that would be somewhat disingenuous, as TLYCS has its own marketing materials on the website, and we cannot control that.
So what we focus on measuring and taking credit for is what we can control :-)
Your comment above indicated that you had measured it at one time but did not plan to do so on an ongoing basis: “However, we can’t control that conversion, and it would not be helpful to assess it on a systematic basis beyond that base rate.” That approach would not be sensitive to the changing effect size of different methods.
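Here is a minimal sketch of the kind of ongoing measurement I mean, with entirely invented figures: a base rate captured once (12% in the first period below) can erode without anyone noticing unless conversion is re-measured period by period.

```python
# Why a one-off base rate is not enough: track conversion per period
# rather than assuming a fixed 12%. All figures below are invented.
periods = [
    ("2016-Q1", 1_000, 120),  # (period, referred visitors, resulting donors)
    ("2016-Q2", 1_200, 90),
    ("2016-Q3", 1_500, 60),
]
for period, visitors, donors in periods:
    print(f"{period}: {donors / visitors:.1%} conversion")
```

An organisation that stopped measuring after the first quarter would still be quoting 12% while the true rate had fallen to 4%.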
That’s a good point; I am updating toward measuring it more continuously based on your comments. Thanks!