First, at Malengo the students fully fund the next cohort by repaying the original donation through an income-share agreement (ISA).
This means that funding one student will actually fund many students over time. Using the numbers above, you get a rate of return of around 6% annualized, so at a 0% discount rate funding one student effectively funds infinitely many. That is unreasonable, so let's cap the horizon at the next 100 years and apply a 2% discount rate for inflation.
BOTEC: one funding pays for 12.5 students, i.e. roughly one student every 8 years.
That changes your calculation from 3x GiveDirectly to 37.5x.
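A minimal sketch of that arithmetic (the 8-year repayment cycle, the 100-year cap, and the 3x baseline are the rough numbers above; note the headline student count is the simple undiscounted division):

```python
# Rough reproduction of the BOTEC above. All inputs come from the comment's
# rough numbers; the student count here ignores the 2% discount within the horizon.
horizon_years = 100        # capped time horizon
years_per_student = 8      # one new student funded every ~8 years via ISA repayment
base_multiplier = 3.0      # the original 3x-GiveDirectly figure being revised

students_funded = horizon_years / years_per_student     # 12.5 students per funding
revised_multiplier = base_multiplier * students_funded  # 37.5x GiveDirectly

print(students_funded, revised_multiplier)
```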
Second, you also said the students are richer, but that is factually incorrect: the program is means-tested to ensure that students are well targeted.
Finally, there are other fudge factors, but they are all dwarfed by the development benefits of immigration.
https://www.nber.org/papers/w29862
This shows that nearly 80% of long-run income gains accrue within sending countries, across a wide variety of channels.
Hence, I think 37.5x GiveDirectly is a completely reasonable estimate.
“While useful, even models that produced a perfect probability density function for precisely selected outcomes would not prove sufficient to answer such questions. Nor are they necessary.”
I recommend reading the DMDU (Decision Making under Deep Uncertainty) literature, since it goes into much more detail than I can do justice to here.
That said, I believe you are focusing heavily on whether the distribution exists, when the claim should be restated.
Deep uncertainty implies that the range of reasonable distributions admits so many reasonable decisions that attempting to "agree on assumptions, then act" is a poor frame. Instead, you want to explore all reasonable distributions and then "agree on decisions".
If you are in a state where reasonable people reach meaningfully different decisions (e.g. decisions of different sign, per your convention above) depending on the distribution and the weighting terms, then it becomes more useful to focus on the timeline and tradeoffs rather than on the current understanding of the distribution:
- Explore the largest range of scenarios (in the 1/n case, each plausible scenario you add changes all the scenario weights)
- Understand the sequence of actions/information released
- Identify actions that won't change with new info
- Identify information that will meaningfully change your decision
- Identify actions that should follow given the new information
- Quantify the tradeoffs forced by decisions
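As a toy illustration of the first and third steps, here is a sketch with equal (1/n) scenario weights; the action names, scenario names, and payoffs are all invented for illustration:

```python
# Hypothetical payoff matrix: value of each action under each plausible scenario.
# Every name and number here is illustrative, not from the comment.
payoffs = {
    "fund_now":    {"optimistic": 5.0, "baseline": 2.0, "pessimistic": -1.0},
    "fund_later":  {"optimistic": 3.0, "baseline": 1.5, "pessimistic": 0.5},
    "gather_info": {"optimistic": 1.0, "baseline": 1.0, "pessimistic": 1.0},
}

def expected_value(action, scenarios):
    """Equal 1/n scenario weights: adding one more scenario reweights them all."""
    vals = [payoffs[action][s] for s in scenarios]
    return sum(vals) / len(vals)

def sign_robust(action, scenarios):
    """An action whose payoff keeps the same sign in every scenario
    won't flip its recommendation when scenario weights shift."""
    signs = {payoffs[action][s] > 0 for s in scenarios}
    return len(signs) == 1

scenarios = ["optimistic", "baseline", "pessimistic"]
for a in payoffs:
    print(a, round(expected_value(a, scenarios), 2), sign_robust(a, scenarios))
```

The point of the sketch: "fund_now" has the highest expected value under these weights, but its sign flips across scenarios, so reasonable people with different weights would disagree about it; the sign-robust actions are the ones you can commit to before the disagreement is resolved.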
The result is building an adaptive policy pathway rather than making a single decision or even choosing a model framework.
Value is derived from expanding the suite of policies, scenarios, and objectives, and from illustrating the tradeoffs between objectives and how to minimize those tradeoffs via sequencing.
This is in contrast to emphasizing the optimal distribution (or worse, a point estimate) conditional on all available data, since that distribution is still subject to change over time and is evaluated under different weights by different stakeholders.
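To make "pathway rather than decision" concrete, here is a hypothetical trigger rule mapping newly observed information to the next action; the signal and the thresholds are invented for illustration:

```python
# Hypothetical adaptive policy pathway: act, observe, then branch.
# The signal (a score in [0, 1]) and the thresholds are invented for illustration.
def next_step(signal):
    """Map newly observed information to the next action on the pathway."""
    if signal is None:                      # no new information yet
        return "take actions that won't change with new info"
    if signal >= 0.7:                       # info favoring the current branch
        return "scale up current branch"
    if signal <= 0.3:                       # info that meaningfully changes the decision
        return "switch to the fallback branch"
    return "continue and keep monitoring"   # ambiguous info

print(next_step(None))
print(next_step(0.9))
print(next_step(0.1))
```

The committed object is the whole `next_step` rule, not any single branch: stakeholders who disagree about the distribution can still agree on which observations would trigger which moves.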