Proposed: ‘How Much Does It Cost to Save a Life?’ quiz, calculator, tool
Epistemic basis/status: I’ve talked this over with Grace and others at GWWC, and people seem generally interested. I’m posting this to get feedback and gauge interest before potentially pushing it further.
Basic idea
I’d like to get your thoughts on a “How Much Does It Cost to Save a Life?”[1] quiz and calculator. I’ve been discussing this with Giving What We Can; it’s somewhat modeled on their “How Rich Am I?” calculator, which drives a lot of traffic to their site.
This would mainly target non-EAs, but it would try to strike a good balance between sophistication and simplicity. It could start as a quiz to grab people’s attention: people would be asked to guess the cost of saving a life, and could then be asked to reconsider their guess in light of some follow-up questions. This might be a good opportunity for a chatbot to work its magic.
After this interaction, the ‘correct answer’ and ‘how well did I do?’ step would take users to an interactive page presenting the basic calculation and reasoning. Before or after presenting this, the page could also let users adjust their moral and epistemic parameters and the scope of their inquiry. This could unfold gradually, letting people specify one thing first, and then more if they like.
E.g. (a rough sketch of how these parameters might feed a calculation follows this list):
Target population: rich or poor countries, which age groups, etc.
Relative value of a child’s vs. an adult’s life
How much you weight life-years spent in certain health states
Which evidence you find more plausible
Whether to include or exclude certain types of benefits
Discount rate
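To make the ‘adjustable parameters’ idea a bit more concrete, here is a purely illustrative sketch (in TypeScript, since this would live on a website) of how a back-of-the-envelope cost-per-life-saved calculation might be parameterised. Every name and number below is a placeholder I made up for illustration; this is not GiveWell’s or GWWC’s actual model.

```typescript
// Illustrative only: placeholder parameters and numbers, not any real cost-effectiveness model.

interface Params {
  costPerIntervention: number;          // cost of one unit of the intervention, in USD (placeholder)
  deathsAvertedPerIntervention: number; // expected deaths averted per unit (placeholder)
  childWeight: number;                  // relative moral weight of a child's life vs. an adult's
  shareChildren: number;                // share of averted deaths that are children
  evidenceDiscount: number;             // 0–1 multiplier for how much of the headline effect you believe
  includeIndirectBenefits: boolean;     // whether to credit non-mortality benefits (e.g. income effects)
  indirectBenefitMultiplier: number;    // extra effectiveness credited if indirect benefits are included
}

// Returns an illustrative "cost to save one life-equivalent" given the user's settings.
function costPerLifeSaved(p: Params): number {
  // Weight child vs. adult deaths according to the user's moral parameter,
  // then scale by how much of the evidence base the user accepts.
  const weightedDeathsAverted =
    p.deathsAvertedPerIntervention *
    (p.shareChildren * p.childWeight + (1 - p.shareChildren)) *
    p.evidenceDiscount;

  // Optionally credit indirect (non-mortality) benefits by inflating effectiveness.
  const effectiveDeathsAverted = p.includeIndirectBenefits
    ? weightedDeathsAverted * p.indirectBenefitMultiplier
    : weightedDeathsAverted;

  return p.costPerIntervention / effectiveDeathsAverted;
}

// Example with arbitrary placeholder numbers: prints roughly 5482 (USD per weighted life saved).
console.log(
  costPerLifeSaved({
    costPerIntervention: 5,
    deathsAvertedPerIntervention: 0.001,
    childWeight: 1.2,
    shareChildren: 0.7,
    evidenceDiscount: 0.8,
    includeIndirectBenefits: false,
    indirectBenefitMultiplier: 1.15,
  }).toFixed(0)
);
```

The point is only that each slider or question in the tool maps onto one parameter in a simple, transparent formula; a real version (e.g. a discount rate applied to life-years) would need more care.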
We would aim to go viral (or at least bacterial)!
Value / theory of change (ToC)
I believe that people would be highly interested in this: it could be engaging and pique curiosity and competitiveness (a bit click-baity, maybe, but the payoff is not click bait)!
It could potentially make news headlines: it’s an “easy story” for media people (“How much does it cost to save a life? Find out after the break!”), giving the public a chance to engage with a question they can relate to.
It could help challenge misconceptions about the cost of saving lives, contributing to a more reality-based, impact-focused, and evidence-driven donor community. If people do think it’s much cheaper than it is, as some studies suggest, it would probably be good to change this misconception. It may also be a stepping stone towards encouraging people to think more critically about measuring impact and considering EA-aligned evaluations.
→ Greater acceptance and understanding of EA, better epistemics in the general public, better donation and policy choices
Implementation
While GiveWell does have a page with a lot of technical details, it doesn’t quite capture the interactive and compelling aspects I’m envisioning for this tool.
Giving What We Can’s response has been positive, but they understandably lack the capacity within their core team to take on such a project. They suggest it could make for an interesting volunteer project if a UX designer and an engineer were interested in participating.
Considering the enthusiasm and the potential for synergy with academic research (which could be supported by funds for Facebook academic ads), I’m contemplating the best approach to bring this idea to life. I tentatively propose the following steps:
1. Put out a request for a volunteer to help develop a proof of concept or minimum viable product. Giving What We Can has some interested engineers, and I could help with guidance and encouragement.
2. Apply for direct funding for the project, possibly collaborating with groups focused on quantitative uncertainty and “build your own cost-effectiveness” initiatives, or perhaps with SoGive.
I’d love to hear your thoughts, feedback, and any suggestions you may have for moving forward with this idea.
(And GPT4 tells me to write: “Together, we can create a tool that truly engages people and inspires them to think more deeply about the impact of their giving.”)
1. We’d probably initially start with human lives, à la GiveWell.
Hey!
If you elaborate a bit on what you’d like in your website (or if you join for a video call), maybe some people would enjoy collaboratively building it in this group. (I’d share your post now, but I think it needs a bit more product-y work first.)
Thanks. I’ll try to follow up. I don’t think it’s completely a software/IT issue, but that’s a major part of it.
Thank you for thinking of SoGive. We now have a new, slimmed down strategy; I’m still finalising writing up the document, but I suspect that projects like these might not fit so well with them any more (although they may well have been a better fit in the past).
On another note, I don’t think I understand your theory of change based on this post.
I don’t know why people would be interested in this.
It’s different from the “How rich am I” calculator, because that calculator tells me something about me, and there is nothing more fascinating than me (!).
I also ran through the original list of news values/media values (i.e. the list of characteristics that makes something newsworthy) and it seemed to me that this didn’t tick any of the boxes (although you might disagree with this assessment, or might not think that is the correct list of media values).
Even if it did go viral, it seems it would have a pretty low impact on the extent to which EA has greater acceptance.
I hope that doesn’t sound too critical; perhaps I’m missing something?
I think an important aspect of the theory of change could be showing participants that their priors on the cost to save a life are very wrong, especially with regard to domestic vs. international charities. Ultimately, this could lead to donors giving more internationally.
My understanding is that most donors prefer to make an impact on their local community or country, but that preference may weaken if they learn that their dollars are an order of magnitude more impactful internationally.
I think I might agree (and maybe I should take that out of the ToC), but could you elaborate a bit on “why not”?
The base argument is “people see an important insight they were wrong about, they are impressed with the careful evidence and arguments, they see it’s linked to EA, and this increases their interest in EA and how much credibility they give it”… I guess that’s how many of us came into EA, but it’s not a given that this would be the dominant path for people exposed to this.
That media list seems outdated… or perhaps it applies more to passive content (not even ‘clicks’). The proposal is more in the nature of the ‘personality quiz’, ‘one weird trick’, ‘can you guess better than a 5-year-old’, or ‘here’s what you’re wrong about’ internet genre.
I.e., it piques my curiosity: when someone asks me something I think I know the answer to (and have strong opinions on) but have heard contradictory things about, I want to click to scratch the itch.