Joseph Richardson
Economics PhD Student (Lancaster University) and analyst at SoGive.
Skimming the studies in the meta-analysis, I am rather sceptical that anything can be concluded from these studies. As far as I can tell, each study investigates how quantity sold varies with the price, but cannot distinguish between demand and supply shocks. If any price changes are instead due to demand shocks, then the estimated coefficient could be severely biased and potentially possess an incorrect sign. Therefore, without a valid instrumental variable that only affects prices through supply-side factors, these coefficients will not be informative of consumers’ true substitution patterns.
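To make the concern concrete, here is a minimal simulation sketch of how regressing quantity on price conflates demand and supply, and how a supply-side instrument recovers the true elasticity. None of this is based on the actual studies; every parameter and shock below is made up for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Toy simultaneous supply-and-demand model; all parameters are illustrative.
rng = np.random.default_rng(0)
n = 10_000
cost_shock = rng.normal(size=n)    # supply shifter (the candidate instrument)
demand_shock = rng.normal(size=n)  # unobserved demand shifter

# Equilibrium price responds to both shocks; quantity follows a demand curve
# with a true price elasticity of -1.
price = 0.5 * cost_shock + 0.8 * demand_shock + rng.normal(scale=0.1, size=n)
quantity = -1.0 * price + 1.0 * demand_shock + rng.normal(scale=0.1, size=n)

# Naive regression of quantity on price: biased (here, towards zero) because
# price is correlated with the omitted demand shock.
ols = sm.OLS(quantity, sm.add_constant(price)).fit()

# Manual two-stage least squares with the cost shock as the instrument: the
# first stage predicts price from the supply shifter, and the fitted values
# are purged of the demand shock. (Standard errors from this manual second
# stage are not valid; use a proper 2SLS routine in real work.)
first_stage = sm.OLS(price, sm.add_constant(cost_shock)).fit()
second_stage = sm.OLS(quantity, sm.add_constant(first_stage.fittedvalues)).fit()

print("OLS slope: ", ols.params[1])           # roughly -0.1
print("2SLS slope:", second_stage.params[1])  # close to the true -1
```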
Although I have no doubts that iron deficiency is a problem, I do not think the evidence linked for this is particularly strong backing for it having massive effects. In particular, estimating the cognitive impacts of anything from one study that hits marginal statistical significance with a massive estimated effect size (0.5 standard deviations) seems likely to lead to a wildly inaccurate estimate of the true effect. This is because this study possesses all the hallmarks of low statistical power interacting with publication bias.
Furthermore, given that the other studies appear to have small sample sizes (note: I am an economist, not a medic) and the p-values are not far off 0.05, I would be worried about publication bias exaggerating any effects there as well, especially as I suspect studies conducted fifteen to twenty years ago were unlikely to be pre-registered.
To convince me of an effect size, I would want to see a study with p<<0.01 or a meta-analysis of RCTs that addresses the issue of publication bias.
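For illustration, here is a small simulation sketch of that winner's-curse dynamic. The true effect, sample size, and number of studies are all assumptions I have made up, not figures from the papers in question.

```python
import numpy as np
from scipy import stats

# Toy winner's-curse simulation; all numbers are illustrative assumptions.
rng = np.random.default_rng(0)
true_effect = 0.1   # true effect, in standard-deviation units
n_per_arm = 40      # a small trial
n_studies = 20_000

treat = rng.normal(true_effect, 1.0, size=(n_studies, n_per_arm))
control = rng.normal(0.0, 1.0, size=(n_studies, n_per_arm))
estimates = treat.mean(axis=1) - control.mean(axis=1)
_, p_values = stats.ttest_ind(treat, control, axis=1)

published = (p_values < 0.05) & (estimates > 0)  # crude significance filter
print("share of studies that come out 'significant':", published.mean())
print("true effect:", true_effect)
print("mean estimate among 'published' studies:", estimates[published].mean())
# The conditional mean is several times the true effect: low power plus a
# significance filter mechanically inflates published effect sizes.
```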
- 11 Jun 2023 3:23 UTC; 6 points; comment on "Change my mind: Veganism entails trade-offs, and health is one of the axes" (LessWrong)
It’s good to see an animal welfare organisation using serious analysis to guide their interventions, although I’m not entirely clear why the assessment is being done on the basis of deaths rather than the integral of welfare over time?
Looking at the numbers, it appears that producing a kilogram of carp involves significantly more time in factory farms than producing a kilogram of salmon (both fish spend around 3 years in a farm, but carp weigh half as much and suffer more premature deaths). Additionally, given the far higher mortality rates, it seems likely that carp welfare is significantly worse than salmon welfare. If both these factors hold, this intervention only backfires under specific (and potentially resolvable) assumptions about the badness of slaughtering wild fish, the welfare of wild fish, and the elasticity of wild fish populations with respect to farmed salmon demand.
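As a rough illustration of why welfare-over-time could matter, here is a back-of-the-envelope sketch. The slaughter weights, mortality rates, and the assumption that premature deaths occur halfway through the cycle are all placeholders I have made up, not figures from the post.

```python
# Back-of-the-envelope sketch: farmed-fish-years per kilogram produced.
# Every number below is an illustrative placeholder, not a figure from the post.

def fish_years_per_kg(years_on_farm, slaughter_weight_kg, premature_mortality):
    # Assume fish that die prematurely spend, on average, half the cycle on the farm.
    survivors_per_kg = 1.0 / slaughter_weight_kg
    stocked_per_kg = survivors_per_kg / (1.0 - premature_mortality)
    premature_deaths_per_kg = stocked_per_kg - survivors_per_kg
    return (survivors_per_kg * years_on_farm
            + premature_deaths_per_kg * years_on_farm / 2.0)

salmon = fish_years_per_kg(years_on_farm=3, slaughter_weight_kg=4.0, premature_mortality=0.15)
carp = fish_years_per_kg(years_on_farm=3, slaughter_weight_kg=2.0, premature_mortality=0.40)
print(f"salmon: {salmon:.2f} fish-years/kg, carp: {carp:.2f} fish-years/kg")
# With these placeholder numbers, a kilogram of carp embodies roughly 2-3 times
# as many farmed-fish-years as a kilogram of salmon.
```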
- 6 Jan 2023 7:54 UTC; 45 points; comment on "Why Anima International suspended the campaign to end live fish sales in Poland"
Isn't that exactly what we'd expect when the marginal utility of consumption is diminishing? An additional pound in a developing country is probably more likely to be spent on something important to a person's welfare than one spent in a developed country (e.g., food or basic shelter vs video games). Furthermore, some of these essentials could themselves be life-extending, which would bias the estimates. Finally, it's possible that life in poverty is bad enough that individuals are willing to forego less to extend it (I put the least weight on this explanation, but it is plausible).
In each of these cases, this discrepancy in the GDP-adjusted value of a statistical life would be completely rational, and the underlying poverty driving the differences would be what needs addressing.
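As a toy illustration of the first point, assuming log utility of consumption (my assumption, purely for exposition) and arbitrary income levels:

```python
import numpy as np

# Toy example assuming log utility of consumption, u(c) = ln(c). The dollar
# value of a fixed utility gain du is du / u'(c) = du * c, so it scales with income.
incomes = np.array([1_000.0, 40_000.0])  # illustrative annual consumption levels
utility_gain = 0.01                       # a fixed welfare improvement, in utils

marginal_utility = 1.0 / incomes
dollar_value = utility_gain / marginal_utility
print(dollar_value)  # the same welfare gain is worth 40x more in money terms
```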
Thanks. I think the issue is the use of the word "effect" when referring to the cross-sectional analysis, which in my field (economics) usually implies causality rather than association, alongside the fact that context was lost when the piece was edited down for the forum.
I’d like to add that related interventions have been successful for policymakers in the developing world, with econometrics training increasing reliance on RCT evidence in policymaking and instruction in Effective Altruism increasing politicians’ altruism. Indeed, influencing policymakers may be cost-effective in a wider range of scenarios as it could be far cheaper and is unlikely to require as much highly visible political messaging.
I think this is interesting research, but I would quibble with your interpretation of the top part of figure 4 as a causal effect. As far as I can tell, that part is a cross-sectional analysis, which is only valid if individuals with greater knowledge of climate organisations are the same in all relevant ways as those with lower levels of knowledge, such that absent the knowledge difference they would identify with Friends of the Earth to the same extent. This seems unlikely to be true, and indeed it does not have to hold for the fixed-effects analyses that make up the majority of this piece to be unbiased. If I have not misinterpreted something here, I would recommend being much clearer in future about when you are switching between fixed- and random-effects models, as they estimate very different parameters, with fixed effects usually being much more reliable at retrieving causal effects.
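For concreteness, here is a small simulated sketch (not your data) of how a pooled cross-sectional regression and a within-person fixed-effects regression can give very different answers when an unobserved trait drives both knowledge and identification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: an unobserved trait raises both knowledge and identification,
# while the true within-person effect of knowledge is zero.
rng = np.random.default_rng(0)
n_people, n_waves = 200, 4
trait = rng.normal(size=n_people)

rows = []
for i in range(n_people):
    for t in range(n_waves):
        knowledge = trait[i] + rng.normal()
        identification = 2.0 * trait[i] + 0.0 * knowledge + rng.normal()
        rows.append({"person": i, "knowledge": knowledge, "identification": identification})
df = pd.DataFrame(rows)

# Pooled (cross-section-style) regression picks up the confounded association...
pooled = smf.ols("identification ~ knowledge", data=df).fit()
# ...while a fixed-effects (within-person) regression via person dummies recovers ~0.
fixed = smf.ols("identification ~ knowledge + C(person)", data=df).fit()

print("pooled slope:       ", round(pooled.params["knowledge"], 2))
print("fixed-effects slope:", round(fixed.params["knowledge"], 2))
```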
Given that it is easier to migrate to developed countries as a qualified doctor, medicine may also be a promising earn-to-give strategy for those in developing countries, if they want to pursue that route.
If you want to become a research assistant to academic economists, I would recommend taking econometrics courses with a coding component. A course using Stata is probably best for this, but R might also be fine. The essential requirement for most of those jobs is being able to clean data and run econometric tests, usually within Stata.
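To give a flavour of the sort of task involved, here is a toy sketch in Python/pandas with made-up data; in practice the equivalent would usually be done in Stata.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data standing in for a raw survey extract; in practice this kind of
# task is usually done in Stata (import the file, clean it, run a regression).
raw = pd.DataFrame({
    "wage":       [12.0, 18.5, None, 25.0, -1.0, 30.0, 9.5, 22.0],
    "education":  [12,   16,   14,   18,   12,   16,   11,  15],
    "experience": [5,    3,    10,   7,    20,   12,   2,   9],
})

# Basic cleaning: drop missing and implausible observations.
df = raw.dropna(subset=["wage"])
df = df[df["wage"] > 0].copy()
df["log_wage"] = np.log(df["wage"])

# A simple Mincer-style wage regression with heteroskedasticity-robust SEs.
model = smf.ols("log_wage ~ education + experience", data=df).fit(cov_type="HC1")
print(model.params)
```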
Although EA risk attitudes may have played a role in FTX’s demise, I think to the extent that is true it is due to the peculiar nature of finance rather than EA advice being wrong in most instances. Specifically, impact in most areas (e.g., media, innovations, charitable impact) is heavily right-tailed but financial exchanges have a major left-tailed risk of collapse. As human expectations of success are heavily formed and biased by our most recent similar experiences, this will cause people to not take enough risk when the value is in the right tail (as median<mean) and take on too much when there are major failures in the left tail (as median>mean).
If this is true, we may need to consider which specific situations have these left-tailed properties and to be cautious about discouraging too much risk taking in those domains. However, I suspect that this situation may be very rare and has few implications for what EAs should do going forwards.
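A toy sketch of the median-versus-mean point, with purely illustrative payoff distributions:

```python
import numpy as np

# Purely illustrative payoff distributions.
rng = np.random.default_rng(0)
n = 1_000_000

# Right-tailed payoffs (e.g. media, innovation, charitable impact): occasional huge wins.
right_tailed = rng.lognormal(mean=0.0, sigma=1.5, size=n)

# Left-tailed payoffs (e.g. running an exchange): steady small gains,
# with a small chance of a catastrophic loss.
left_tailed = np.where(rng.random(n) < 0.99, 1.0, -200.0)

for name, x in [("right-tailed", right_tailed), ("left-tailed", left_tailed)]:
    print(f"{name}: median {np.median(x):.2f}, mean {np.mean(x):.2f}")
# A decision-maker anchored on typical (median-like) outcomes under-rates the
# right-tailed bet (median < mean) and over-rates the left-tailed one (median > mean).
```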
NOTE: I published something similar on another thread but feel it is even more relevant here.
To the extent that any EA beliefs likely contributed to FTX’s collapse, I suspect that they are mostly related to the fact that typical EA risk attitudes, while normally correct, transfer poorly to the financial sector under human cognitive constraints. Specifically, I think that the finance industry is a special case where the recommendation to be more risk-seeking is wrong. This is because in most areas (e.g., media, innovation, charitable impact) the distribution of outcomes is right-tailed, but in finance it is left-tailed. As there is robust evidence that humans overweight the outcomes of recent events when forming their expectations, this will cause someone trying to optimise for impact in a risk-neutral way (as SBF seemingly tried to) to take on excessive risk in finance and not enough in other fields. This could be especially dangerous if SBF had internalised the meme that we should be more risk-seeking due to the distribution of outcomes being right-tailed. If my hypothesis is correct, it implies that it is especially important to know your risk environment when making decisions but there may not be many implications for most EA activities, as FTX operated in an atypical risk environment.
Critiquing GiveWell’s Model of Economic Effects from Health Interventions
There is indeed some evidence that the impact of human capital interventions can be significantly attenuated by general equilibrium effects. For example, in one of the first empirical investigations of this issue, the benefits from an education expansion in India were significantly attenuated once spill-overs were accounted for (Khanna 2022). Such general equilibrium effects could take the form of the classic signalling arguments about education, or operate through other mechanisms, such as a fall in the marginal return to human capital leading the control group to invest less than it would in a counterfactual with no treatment (think of a production function with diminishing marginal products). For a more detailed exposition of the relevance of general equilibrium, see Acemoglu (2010).
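As a stylised illustration of the diminishing-marginal-product channel, assuming a simple aggregate production function Y = A * H^alpha with made-up parameters:

```python
# Stylised aggregate production function Y = A * H^alpha; parameters are illustrative.
A, alpha = 1.0, 0.6

def marginal_return_to_human_capital(H):
    return alpha * A * H ** (alpha - 1)

# A small pilot barely moves the aggregate human capital stock, so the
# partial-equilibrium return is roughly the marginal product at baseline...
print("return at baseline stock:", round(marginal_return_to_human_capital(100.0), 4))

# ...but a large scale-up raises the aggregate stock and pushes that return down,
# attenuating the per-person benefit relative to the pilot estimate.
print("return after scale-up:   ", round(marginal_return_to_human_capital(150.0), 4))
```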
Additionally, in the context of the cash transfers you cite, you might be interested to know that some RCTs in that area have found negative spill-overs within treated villages (e.g., Haushofer and Shapiro 2018), although the mechanisms are not totally clear. In fact, when GiveWell last reviewed the evidence, it concluded that cash transfers' spill-overs were negative in expectation.
This is a great post, but I would like to present a counterargument to the claim that the extent of the funding did not matter due to scope neglect. Specifically, I think Flynn’s race could suggest there are limits to the amount one organisation can spend on primary races relative to other ones (I agree that absolute numbers are unlikely to matter). I have two reasons for thinking that the relative levels of spending could have mattered. Firstly, it does seem highly unusual for a well-funded campaign to get all its funding from just one donor, which may have made it easier to land attacks relating to SBF. Secondly, this race becoming the most expensive primary in the country grabbed the attention of national media outlets and possibly political groups (e.g., the opposition PAC), potentially helping rally support around viable opponents. Finally, I will note that my argument is reasoning from one unusual data point and thus am not certain of it myself.
This is an important topic that needs more discussion, but I'm not sure there are many cases where technocracy and popular opinion actually conflict, because there rarely is a well-defined public opinion on an issue. In polls, just changing the way a question is asked can flip the results entirely, and responses are likely to be driven by what respondents believe their social/political group believes rather than by any careful consideration. Furthermore, even if a stable public opinion exists, it is no guarantee that the direction of policy won't be decided by elite opinion, technocrats, or interest groups; that would require a significant number of voters to feel strongly enough to take action (protest or change their vote) in support of the majority view.
Therefore, I think this discussion could benefit from concrete examples where EA activities are likely to come into large, direct conflict with public opinion, because I can't think of much EA is currently doing that could lead to such issues. Many of the ways EA currently interacts with the political process (e.g., Clean Air Task Force-style lobbying for clean energy tax credits, or organisations opposing gain-of-function research) appear to me to be the minutiae of funding decisions and regulations that receive too little media attention for there to be a strong and stable public opinion on them. I would expect that to also be the case if EA orgs attempt to influence any other issue that is not a political hot button.
If you’d like to read more about why we might not be able to define a stable public opinion on most issues, I’d recommend the book Democracy for Realists.
Thank you for the response.
Hello, this sounds like a nice idea. Is this program interested in potential economics PhD students who don't have many EA-related research ideas yet? I'm planning on putting together a PhD application this year, but most of my potential research ideas are currently focused on more typical areas such as labour economics (I only got properly interested in EA a few months ago, so maybe some will come).
Thank you for the reply. Indeed, I was referring to the studies engaging with butter-margarine substitution. However, I think it definitely needs emphasising just how weak those studies are and thus that they cannot be trusted to inform any policy decisions. Additionally, while relevant, I would not want to use Auer and Papies (2020) as a basis for policy decisions either, as it is quite unclear how comparable those estimates are, given that they pool across a wide range of markets potentially quite different from this one (is the degree of product differentiation the same? groceries could be different to pharmaceuticals or durables). Finally, there is also the issue that the papers they synthesise which do use instruments might not use good ones, but I do not have the time to check that.