I research a wide variety of issues relevant to global health and development. I’m always happy to chat—if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
Karthik Tadepalli
Came here to comment this. It’s the kind of paradigmatic criticism that Scott Alexander talks about, which everyone can nod and agree with when it’s an abstraction.
I love this, thank you for pushing the frontiers of doing good!
Labor markets in LMICs (what we know about growth, part 3)
There were a bunch, most prominently IRRI in the Philippines—Table 1 in this paper lists all of them.
Interesting; then I figure it probably substituted for meat consumption at restaurants rather than meat consumption at home. Regardless, I think it’s mostly valid to use an increase in plant-based consumption as a proxy for a reduction in meat consumption, since total food consumption is relatively stable.
Where are you getting that it didn’t decrease meat sales? I see nothing in the article pointing to that, and they also point out that aggregate meat sales have been falling.
I would be extremely skeptical that vegan consumption could go up a lot without meat consumption going down, since that would imply people are just consuming a lot more food in aggregate compared to previous years, which seems unlikely.
I don’t have any disagreement with getting people information early, I just think characterizing the current system as one where only the criticizee benefits is wrong.
Yes, Ramsey discounting focuses on higher incomes of people in the future, which is the part I focused on. I probably shouldn’t have said “main”, but I meant that uncertainty over the future seems like the first-order concern to me (and Ramsey ignores it).
Habryka’s comment:
applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermine the results of any analysis about the long-run future).
seems to be arguing for a zero discount rate.
Good point that growth-adjusted discounting doesn’t apply here, my main claim was incorrect.
If you think that the risk of extinction in any year is a constant $r$, then the risk of extinction by year $t$ is $1-(1-r)^t$, so that makes $r$ the only principled discount rate. If you think the risk of extinction is time-varying, then you should do something else. I imagine that a hyperbolic discount rate or something else would be fine, but I don’t think it would change the results very much (you would just have another small number as the break-even discount rate).
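To spell out the arithmetic (notation mine): with constant annual risk $r$, humanity survives to year $t$ with probability $(1-r)^t$, so expected total utility is

$$\mathbb{E}\left[\sum_{t} u_t\right] = \sum_{t=0}^{\infty} (1-r)^t\, u_t,$$

which is exactly exponential discounting at a rate of roughly $r$.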
Matthew is right that uncertainty over the future is the main justification for discount rates, but another principled reason to discount the future is that future humans will be significantly richer and better off than we are, so if marginal utility is diminishing, then resources are better allocated to us than to them. This classically gives you a discount rate of $\rho = \delta + \eta g$, where $\rho$ is the applied discount rate, $\delta$ is a rate of pure time preference that you argue should be zero, $g$ is the growth rate of income, and $\eta$ determines how steeply marginal utility declines with income. So even if you have no ethical discount rate ($\delta = 0$), you would still end up with $\rho = \eta g > 0$. Most discount rates are loaded on the growth adjustment ($\eta g$) and not the ethical discount rate ($\delta$), so I don’t think longtermism really bites against having a discount rate. [EDIT: this is wrong, see Jack’s comment]
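For concreteness, with illustrative numbers (my own, just to show the magnitudes): a growth rate of $g = 2\%$ and $\eta = 1.5$ give

$$\rho = \delta + \eta g = 0 + 1.5 \times 0.02 = 3\%,$$

even with zero pure time preference.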
Also, am I missing something, or would a zero discount rate make this analysis impossible? The future utility with and without science is “infinite” (the sum of utilities diverges unless you have a discount rate), so how can you run the analysis without a discount rate?
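A toy numerical illustration of that divergence (my own sketch, with a made-up constant per-year utility of 1):

```python
def discounted_sum(u_per_year: float, rate: float, horizon: int) -> float:
    """Sum of u_per_year / (1 + rate)**t for t = 0..horizon-1."""
    return sum(u_per_year / (1 + rate) ** t for t in range(horizon))

for horizon in (1_000, 1_000_000):
    undiscounted = discounted_sum(1.0, 0.0, horizon)   # grows without bound: 1000, then 1000000
    discounted = discounted_sum(1.0, 0.01, horizon)    # converges toward (1 + 0.01) / 0.01 = 101
    print(horizon, undiscounted, discounted)
```

With rate 0 the total just grows linearly with the horizon; with any positive rate it converges, which is why the break-even-discount-rate framing works at all.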
You might want to add what the subject of Moretti (2021) is, and what the result is, just so people know if they’re interested in learning more.
But the plans I’ve seen are all fairly one-sided: all upside goes to the criticized, all the extra work goes to the critic.
I see a pretty important benefit to the critic, because you’re ensuring that there isn’t some obvious response to your criticisms that you are missing.
I once posted something that revised/criticized an Open Philanthropy model, without running it by anyone there, and it turned out that my conclusions were shifted dramatically by a coding error that was detected immediately in the comments.
That’s a particularly dramatic example that I don’t expect to generalize, but often if a criticism goes “X organization does something bad” the natural question is, why do they do that? Is there a reason that’s obvious in hindsight that they’ve thought about a lot, but I haven’t? Maybe there isn’t, but I would want to run a criticism by them just to see if that’s the case.
I don’t think people are obligated to extensively incorporate the feedback they get if they don’t think it’s valid or if they think their point still stands.
The only thing I would add to Joseph’s response is that EAG Bay Area last year had a ton of undergrads.
My point is that AI could plausibly have rules for interacting with other “persons”, and those rules could look much like ours, but that we will not be “persons” under their code. Consider how “do not murder” has never applied to animals.
If AIs treat us like we treat animals then the fact that they have “values” will not be very helpful to us.
Hi, development economist here. None of these organizations are EA organizations.
The systematic review below is interesting; it seems like it has the potential to be a fairly big deal on a number of measures, including catastrophic expenditure for families, absenteeism, even GDP loss.
Most studies in this space are just correlational, and having a high burden of malaria is obviously correlated with lots of bad things; that doesn’t tell us anything meaningful. For example, being poor could cause countries to be unable to deal with malaria, and also cause all those bad things. It looks like the systematic review is also correlational. The studies I linked are the only ones I know of that take a quasi-experimental approach, which is why I lean on them.
as moving from low malaria prevalence to eradication may only improve productivity for a small percentage of people who were getting malaria. Anywhere malaria has been “eradicated” seems unlikely to have had malaria as a massive economic issue in the 30 years before eradication.
Broadly I don’t think this is true. DDT was an incredibly powerful anti-malarial tool and caused near-eradication even in places with quite high burdens. Malaria has always been endemic to the Americas and my impression is that DDT is the reason it’s mostly gone, though it’s still a real public health problem in e.g. Brazil.
even with nets and prompt treatment, it really seems to affect productivity and motivation.
I’m sure this is true, but the productivity and motivation of individual workers isn’t the big constraint on growth. The big constraints are a lack of jobs, mobility frictions, inability to invest in growing firms, and so on. So even if better health let every worker double their working hours, that would have a <<2x impact on GDP.
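As a toy illustration (a Cobb-Douglas aggregate is my assumption, not anything from this discussion): with output $Y = A K^{\alpha} (hL)^{1-\alpha}$, where $h$ is worker health/effort, doubling $h$ gives

$$\frac{Y(2h)}{Y(h)} = 2^{1-\alpha} \approx 1.6 \quad \text{for } \alpha \approx 1/3,$$

even before the frictions above, which would shrink the realized effect further.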
Putting “almost no stock” in a self-reported measurement which is fairly well correlated with measures like good health, income etc. seems like a strong response, but responding to that might be too long for this thread!
GDP per capita is also strongly correlated with all of those things… but yes, it’s a bit too long a discussion for this thread; see some comments here for a good discussion.
There are a couple of papers showing that disease eradication has real but quantitatively small effects on income (Acemoglu and Johnson 2007; Bleakley 2007; Bleakley 2010). They are severely problematic in many ways, but they are the best evidence we have, and they don’t point to large effects. So health interventions are just not that promising in that regard. I plan to elaborate on this argument in the final post of my growth series… some day...
Edit: I should note that Bleakley is more positive than I am in his interpretation, but I think the effect sizes are just not large and certainly wouldn’t survive skeptical downward adjustments (of which many are warranted).
I guess “narrow target” is just an underspecified part of your argument then, because I don’t know what it’s meant to capture if not “in most plausible scenarios, AI doesn’t follow the same set of rules as humans”.
When I think of values I think of interpretation #2, and I don’t think you prove that P4 is untrue under that interpretation. The idea is that humans are both a) constrained and b) generally inclined to follow some set of rules. An AI would be neither constrained nor necessarily inclined to follow these rules.
Consider our most basic laws: do not murder, do not steal, do not physically assault another person. These seem like very natural ideas that could be stumbled upon by a large set of civilizations, even given wildly varying individual and cultural values between them.
Virtually all historical and present atrocities are framed in terms of determining who is a person and who is not. Why would AIs see us as having moral personhood?
I upvoted because I liked the story, but this feels like a pretty glaring strawman of “mathematical solutions to multifaceted human problems”. I can’t imagine any reasonable solution/intervention to which this critique would apply.