Have you considered holding out some languages at random to assess the impact of the program? You could e.g. delay funding for some languages by 1-2 years and try to estimate the difference in some relevant outcome during that period. I understand this may be hard or undesirable for several reasons (finding and measuring the right outcomes, opportunity costs, managing grantee expectations).
Unfortunately, I think this kind of experimental approach is a bad fit here: the opportunity costs seem really high, there's a small number of data points, and there's a lot of noise from the many other factors that vary across language communities.
Fortunately, I think we'll have additional context that will help us assess the impacts of these grants beyond a black-box "did this input lead to this output" analysis.