One concern that asymmetrically affects this discussion is measurement error. Democracy, institutional quality, or any such outcome is measured with a lot of error, so estimates of the impact of X on democracy/institutional quality are going to be super noisy, and it's going to be harder to reject the null of no impact—even if there is a real negative impact. Is that something that's received a lot of attention?
In general, it’s harder to put stock in “studies have not found evidence of X” than “studies have found evidence of X” since everything is measured with error, so rejecting nulls is harder than failing to reject them.
Good question. It's worth recalling that Sam and Finn's JDE actually finds small positive (significant) effects of aid on many governance outcomes. I'm not sure I actually believe that those positive effects exist, but it's important to see that my claims above don't hinge on overinterpreting (imprecise) nulls. Also, I realize you know this, but for others it's good to remember that classical measurement error in the DV (dependent variable) will increase noise but will not introduce bias.
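That last point is easy to check with a quick simulation (a minimal sketch, not from the thread; the sample size, true effect of 0.5, and error scale are all illustrative assumptions). Adding classical (mean-zero, independent) noise to the outcome leaves the OLS slope centered on the truth but inflates its standard error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, reps = 500, 0.5, 2000  # illustrative values, not from any study

def ols_slope_and_se(x, y):
    """OLS slope of y on x (with intercept) and its standard error."""
    xc = x - x.mean()
    b = (xc @ (y - y.mean())) / (xc @ xc)
    resid = (y - y.mean()) - b * xc
    se = np.sqrt((resid @ resid) / ((len(x) - 2) * (xc @ xc)))
    return b, se

slopes_clean, slopes_noisy, ses_clean, ses_noisy = [], [], [], []
for _ in range(reps):
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)          # true outcome
    y_obs = y + rng.normal(scale=2.0, size=n)  # classical measurement error in the DV
    b0, s0 = ols_slope_and_se(x, y)
    b1, s1 = ols_slope_and_se(x, y_obs)
    slopes_clean.append(b0); slopes_noisy.append(b1)
    ses_clean.append(s0); ses_noisy.append(s1)

# Both average slopes sit near the true 0.5 (no bias),
# but the standard errors are far larger with the mismeasured outcome.
print(np.mean(slopes_clean), np.mean(slopes_noisy))
print(np.mean(ses_clean), np.mean(ses_noisy))
```

With the noisier outcome, the estimator is still unbiased but much less precise, which is exactly why nulls are easier to "find" when the outcome is badly measured.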
The article in footnote 3 is also an example of other work that I didn't summarize (in the interest of brevity) but have read, and that supports my claims re: aid and governance by experimentally testing one mechanism thought to link the two (here is another related study, based on qualitative work: https://rdcu.be/b4aTu). So my claims don't rest purely on country-year panels. If people DM me saying they want a longer summary covering this material, I might try to find the time to write one.