Owen, I think these are important caveats.

One further risk is that the message you are trying to convey has to be stretched or even distorted to be made relevant to the original story. This is a result of the “hijacking” approach, and unfortunately I think it’s evident in this piece.
The problem with Wounded Warriors, as I understand it, is not that their proposed projects were unlikely to be helpful (I haven’t seen evidence that would help me answer that), but that people in the organisation mis-spent funds, and did not use them according to the charity’s own stated aims. So the problem here is not whether Wounded Warriors are engaged in effective interventions, but that people within the organisation diverted money from interventions and spent it on luxury flights and accommodation for its staff.
It seemed to me that the characterisation of effective altruism groups in the Time piece as organisations “pushing the nonprofit sector to become more transparent and accountable” makes them indistinguishable from Charity Navigator and others who are concerned with overhead as a metric of effectiveness. If we dilute the notion of an effective charity to one that has been vetted for financial transparency and accountability, we really lose the key message of how much different interventions vary in their impact.
For an example of how this can lead to conclusions opposite to EA reasoning: most EAs would agree it would be better for the world if programs like Scared Straight or PlayPumps were bad at delivering their programs, since those programs have a negative impact. I expect it would be overall negative to deliver the message that finding an organisation with low overheads is both necessary and sufficient to ensure your donation has a positive impact. I imagine that wasn’t your aim here, Gleb, but it’s very much how it reads to me, probably as a result of the need to stay relevant to the news story you were tailing.
Bernadette, these are excellent points, and the risk of distortion is real. However, I think what you saw in this column is not a bug, but a feature :-)
First, the Wounded Warrior Project was indeed not focused on creating effective interventions, but instead on creating Potemkin-like programming oriented more toward getting good numbers for reports that assisted fundraising efforts than toward helping veterans, as shown in this piece. For instance, here’s a quote from the piece:
The same push for numbers hit a program that brings wounded veterans together for social events. Former staff members said they had less time to develop therapeutic programs and so relied on giving veterans tickets to concerts and sporting events. To fill seats, they often invited the same veterans. “If the same warrior attends six different events, you could record that as six warriors served,” said Renee Humphrey, who oversaw alumni outreach in Southern California for about four years. “You had the same few guys who loved going to free events.”
I think it’s a bit unfair to read my comments about effective altruism groups as describing simply organizations “pushing the nonprofit sector to become more transparent and accountable.” That was a shorthand description driven by the limited number of words allowed in any op-ed. It should be read in light of my earlier comments in the article about what it means to be transparent and accountable, namely to “take the perspective of a savvy investor and research donation options to make sure you do the most good per dollar donated.” This is the essence of EA, and makes it quite distinguishable from Charity Navigator and others. I hope this clarifies the situation, and I see how that misunderstanding could arise without awareness of the word limitations on the piece :-)
I also think there might be a mismatch of expectations. The piece itself aims to bridge the inferential gap for people who right now might not even bother to do research on their donations, and persuade them to consider effective giving. It’s really important to remember that what I’m doing here, and what Intentional Insights does as a whole, is less about explicitly promoting EA than about promoting EA-themed effective giving, to prevent the danger of flooding the EA movement with non-value-aligned newcomers.
As you can see, there’s only a paragraph there about the EA movement, and it’s not pushed heavily as the solution to all nonprofit problems, but as one way of addressing them. Those who are intrigued by our data-driven, utilitarian approach and check out the movement are already likely to be value-aligned. Others who are not so interested in the movement itself can go to the individual charities and charity evaluators cited in the piece.
Hope that clarifies the issues you raised, and thanks again for sharing your thoughts!
I don’t think it’s about mismatched expectations so much as I have a different assessment than you do of how much this piece is likely to promote effective giving.
If your intention was to promote consideration of impact, or recipient-focussed donation behaviour, then I think this article misses that mark. Sure, the information might be there 15 paragraphs deep in one of a dozen links, but it’s not conveyed to me—even as an interested reader versed in effective altruism ideas.
If your article was indeed intended to promote Charity Navigator-style research in the hope it will nudge people towards the idea of impactful giving (which is what I take you to mean by saying the flattening out of the message is “a feature, not a bug”), then I respectfully disagree that such an approach will in expectation increase effective giving.
If your intention was to promote consideration of impact, or recipient-focussed donation behaviour, then I think this article misses that mark. Sure, the information might be there 15 paragraphs deep in one of a dozen links, but it’s not conveyed to me—even as an interested reader versed in effective altruism ideas.
I think there’s a miscommunication somewhere. In the sixth paragraph of the article, I stated that people should “take the perspective of a savvy investor and research donation options to make sure you do the most good per dollar donated.” To me, that’s the essence of EA. Would you disagree?
I respectfully disagree that such an approach will in expectation increase effective giving.
If so, I guess we will have to agree to disagree then.
Fortunately, there is an easy way of figuring out whose opinion is closer to the mark. One of the metrics Intentional Insights tracks is whether people clicked from our article to the websites of the direct-action charities described in the piece. If your opinion is correct, then we will not see clicks, as people will not be persuaded that EA-style effective giving is a worthwhile area. If my take is correct, then there will be some clicks, since people will be persuaded of the value of AMF and GiveDirectly. I’ll check with AMF and GiveDirectly in a couple of weeks to see what the click-through numbers were, and we’ll find out. Stay tuned!

Another piece of evidence supporting the fact that EA is a key take-away from the piece is how The Chronicle of Philanthropy described my piece: https://philanthropy.com/article/Opinion-Wounded-Warrior-Flap/235715
I agree that maximising the good done with every effort is the essence of EA; I disagree that the wording and structure of your piece communicated that, even with those words included.
There’s a tendency for people who do a lot of academic writing to assume that every sub-clause and every word will be carefully read and weighed by their readers. We agonise for months over a manuscript, carefully selecting modifiers to convey the correct levels of certainty in our conclusions or the strength of a hypothesis. In reality even the average academic reader will look at the title, scan the abstract, and possibly look at a figure and the concluding sentences.
Communicating complex ideas in a short piece is really hard to do, and the less concrete the link between the message you want to convey and the topic you are trying to shoehorn that message into, the harder it is to avoid distorting your message. You could seek feedback from people who aren’t already aware of what you’re trying to communicate, but that’s likely to be very hard to do in the time frame needed for a current news story.
If you want a measure of success, I think you need a much better endpoint than website views, which are a) subject to a wide range of confounders and b) only a proxy for the thing you are trying to achieve.
In reality even the average academic reader will look at the title, scan the abstract, and possibly look at a figure and the concluding sentences.
I think we might have different perspectives about academic readers.
I think you need a much better endpoint than website views
This seems a bit contradictory to your previous statement about the average reader. I propose that if someone actually takes the time to click to GiveWell etc., this indicates a measure of interest and a willingness to spend the resources of attention and time.
In fact, InIn measures its effectiveness in marketing EA-themed ideas about effective giving to a broad audience through its success in drawing the awareness of non-EA members to: EA ideas, such as researching charities, comparing their impact before donating, and expanding their circles of compassion; EA meta-charities that provide evaluations of effective charities; and finally, effective direct-action charities themselves.

In doing so, InIn works on a relatively neglected area of the EA nonprofit sales funnel: the key first stage of potential donor awareness of the benefits of EA ideas and charities. We then hand off the donors to EA meta-charities and direct-action charities for the latter stages of the sales funnel, which they have more capacity and expertise to handle.

The metrics we use here are the exposure of people to our content; the number of those exposed who then click from our content to the websites of EA meta-charities and direct-action charities; the number of those people who then engage actively with the nonprofit by signing up to their newsletter; and finally the number who donate. Naturally, each step is progressively harder to track, and the EA charities themselves are responsible for the last two steps.
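(Editor's note: to make the funnel arithmetic described above concrete, here is a minimal sketch of how per-stage conversion rates might be computed. The stage names mirror the funnel in the comment; all counts are hypothetical placeholders, not Intentional Insights' actual figures.)

```python
# Minimal sketch of funnel-stage conversion tracking.
# All counts below are hypothetical, for illustration only.

funnel = [
    ("exposed to content", 10_000),
    ("clicked through to charity site", 400),
    ("signed up to newsletter", 60),
    ("donated", 15),
]

def stage_conversions(stages):
    """Yield (stage name, count, conversion rate from the previous stage)."""
    prev = None
    for name, count in stages:
        rate = count / prev if prev else 1.0
        yield name, count, rate
        prev = count

for name, count, rate in stage_conversions(funnel):
    print(f"{name:35s} {count:6d}  ({rate:.1%} of previous stage)")
```

As the comment notes, each successive stage is harder to observe, so in practice only the first two rates would be measured by the referring organisation.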
The EA charities are grateful for the hard work we do, and applaud our efforts. Hopefully, that gives you some more context. My apologies for not sharing this context earlier :-)
We may have different perspectives on academic readers: I’m a relatively junior medical researcher. Three of my papers have over 100 citations. The view I expressed here is the one shared by my Principal Investigator (a professor at Oxford University who leads a multi-million pound international research consortium, and has an extensive history of publishing in Nature and Science). Humanities and medical research are likely to have some differences, but when fewer than 20% of humanities papers are thought to be cited at all, I’m not sure that supports humanities papers being read more extensively.
I don’t see any contradiction between saying:
I believe that, at the level a general reader will engage with it, this piece distorts the ideas of effective giving towards the damaging ‘good charities have low overhead’ meme, and will not in expectation increase donations to EA charities
In order to show the contrary, you need a more concrete endpoint than website clicks.
No matter how many steps there are between an action and an endpoint, the only robust way to show an association between them is to include measurements of the endpoint you care about: surrogate markers are likely to lead you astray. For instance, I don’t give much weight to a study showing drug Y lowers serum protein X, even though high levels of serum protein X are associated with disease Z. To prove the drug worthwhile, the drug companies need to actually show that people on drug Y have lower rates of disease Z, or better yet, fewer deaths from disease Z. Drug companies complain about and manipulate these principles all the time, because solid endpoints take more time, effort and money to measure, and their manipulation around them has cost lives. (See the diabetic medication glipizide: short-term studies showed it decreased blood sugar in diabetics—an outcome thought to improve their mortality—but longer-term data showed that it makes people taking it more likely to die.)
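(Editor's note: the surrogate-marker trap described here can be shown numerically. Below is a toy simulation — the effect sizes are entirely invented, not real glipizide data — in which a treatment improves the surrogate marker while worsening the true endpoint.)

```python
# Toy simulation: a treatment that improves a surrogate marker
# (lower "surrogate" value) while increasing the true endpoint (death rate).
# All parameters are invented for illustration; this is not real drug data.
import random

random.seed(0)
N = 100_000

def simulate(treated: bool):
    """Return (mean surrogate level, death rate) for a simulated cohort."""
    deaths = 0
    surrogate_total = 0.0
    for _ in range(N):
        surrogate = random.gauss(10.0, 2.0)
        if treated:
            surrogate -= 3.0          # drug clearly improves the surrogate...
        risk = 0.01 + 0.002 * surrogate
        if treated:
            risk += 0.01              # ...but has a direct harmful effect
        deaths += random.random() < risk
        surrogate_total += surrogate
    return surrogate_total / N, deaths / N

for label, treated in [("control", False), ("treated", True)]:
    surrogate, mortality = simulate(treated)
    print(f"{label}: mean surrogate = {surrogate:.2f}, death rate = {mortality:.3f}")
```

A study measuring only the surrogate would call this drug a success; only measuring the endpoint itself reveals the harm, which is the point being made about website clicks versus donations.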
Of course you’re free to measure your work however you choose: I would personally be unconvinced by website traffic, and if you are aiming to convince evidence-minded people of your success I think you’d do well to consider firm endpoints, or at least a methodology that can deal with confounding (though that is definitely inferior to not being confounded in the first place).

At any rate, that’s enough on this from me.
Thanks for sharing your thoughts! Maybe there’s a difference between our academic backgrounds. I come from the perspective of a historian of science at the intersection of psychology, neuroscience, behavioral economics, philosophy, and other disciplines. I have a couple of monographs out, and over 20 peer-reviewed articles (over 60 editor-reviewed pieces). Since my field intersects both social sciences and humanities, I speak from that background.
Regarding website visitors, it’s important to measure what is under our organization’s control. We can control what we do, namely get visitors to the websites of effective charities. We know that getting such visitors there is crucial to those visitors then converting into donors, and we have statistics showing that. For instance, 12% of the visitors to The Life You Can Save website from InIn articles then become donors to effective charities through the TLYCS website.
However, we can’t control that, and it would not be helpful to assess that on a systematic basis, beyond that base rate. The importance of constant measurement is to show us what we can do better, and the only thing we can control is whether we get people to the TLYCS website or to other charities. Does that make sense?
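(Editor's note: as a worked example of the arithmetic implied here — using the 12% conversion rate quoted above, but a hypothetical click count — expected donors scale linearly with click-throughs.)

```python
# Expected donors implied by a click-through count and a fixed conversion rate.
# The 12% rate is the figure quoted above for TLYCS; the click count below
# is a hypothetical placeholder, not a measured InIn statistic.

CONVERSION_RATE = 0.12   # share of referred visitors who go on to donate

def expected_donors(click_throughs: int, rate: float = CONVERSION_RATE) -> float:
    """Expected number of donors, assuming the conversion rate stays fixed."""
    return click_throughs * rate

clicks = 250  # hypothetical number of visitors sent to TLYCS
print(f"{clicks} clicks x {CONVERSION_RATE:.0%} -> "
      f"{expected_donors(clicks):.0f} expected donors")
```

The fragility Bernadette goes on to point out is visible here: the whole estimate hinges on CONVERSION_RATE staying fixed, which is exactly what an unmeasured, changing effect size would break.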
Not really, I’m afraid. That reasoning seems analogous to the makers of glipizide saying: we know lowering blood sugar in diabetics decreases deaths (we do indeed have data showing that), and our drug lowers blood sugar, so we don’t need to monitor the effect of our drug on deaths. Your model can be faulty, your base statistics can be wrong, you can have unintended consequences. Glipizide does lower blood sugar, but if you take it as a diabetic, you are more likely to die than if you don’t.
It would also be like the Against Malaria Foundation neglecting to measure malaria rates in the areas where they work. AMF only distribute nets, but they don’t restrict their concern (or their monitoring) to how many people sleep under bed nets. The bed net distribution and use only matter if they translate to decreased morbidity and mortality from malaria.
If you are sharing information because you want to increase the flow of money to effective charities, and you don’t measure that, then I think you are hobbling yourself from ever demonstrating an impact.
Bernadette, I’m confused. I did say we measured the rate of conversion for the people we draw to the websites of charity evaluators like TLYCS. What I am describing is what we take credit for, and what we can control.
I want to be honest in saying that we can’t take full credit for what people do once they hit the TLYCS website. Taking credit for that would be somewhat disingenuous, as TLYCS has its own marketing materials on the website, and we cannot control that.
So what we focus on measuring and taking credit for is what we can control :-)
Your comment above indicated you had measured it at one time but did not plan to do so on an ongoing basis: “However, we can’t control that, and it would not be helpful to assess that on a systematic basis, beyond that base rate.” That approach would not be sensitive to the changing effect size of different methods.
That’s a good point; I am updating toward measuring it more continuously based on your comments. Thanks!