Thanks Ozzie!
I agree that it’s a bit hard to take $5 per income doubling at face value. I was tempted to revise some of my figures (e.g. reduce the 1⁄4 probability of ‘success’) to come up with a more ‘realistic’ figure, though arguably this undermines the very process of cost-efficacy analysis if we post-hoc change the inputs to reach an output we want or find plausible. I fully agree that the uncertainty bars are large.
Addressing your points in more detail:
1. It’s true, David & Edward have both made massive contributions to the coding for free. That said, both have reported that using AI has consistently helped them achieve goals faster and more easily than they had expected.
2. We were budgeting $3 per user for distribution costs (row 41 on the first sheet of the spreadsheet), which is roughly what my organisation’s paper-based intervention currently costs. But it obviously depends a lot on whether word-of-mouth helps it take off, or whether we need to continue promoting it widely.
3. Agreed that the AI landscape is highly uncertain; for this reason we are only modelling out 5 years and not assuming any benefits beyond that period (perhaps we should reduce this to 3 years). More broadly, I think if/when we do reach AGI, it will require a massive rethink of everything, including most global health & development initiatives, and most cost-efficacy analyses will go out the window. Whether returns to education would increase or decrease in a post-AGI world is a fascinating question.
4. Agreed, we have not yet built a Hindi version of the app; once we have it, measuring learning gains will be a top priority. The effect sizes are derived both from others’ studies of comparable EdTech and from my organisation’s paper-based ALfA program. As written above, I have similar doubts about whether we can hold the user’s attention with the app; that will require significant work.
Fair call to be skeptical until we get some results. If/when we do manage to build the app and pilot it, I will post the results here.
Thanks,
Tom