I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I’m always happy to chat—if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
Karthik Tadepalli
I’ve been thinking about this post for days, which is a great sign, and in particular I think there’s a deep truth in the following:
Indeed, my guess is that people’s utility in the goods available today does have an upper asymptote, that new goods in the future could raise our utility above that bound, and that this cycle has been played out many times already.
I realize this is tangential to your point about GDP measurement, but I think Uzawa’s theorem probably set growth theory back by decades. By axiomatizing that technical change is labor-augmenting, it left us unable to speak coherently about automation, something that has only recently begun to change. I think there is much more we could understand about technical change than we currently do. My best guess about the nature of technological progress is as follows:
In the long run, capital and labor are gross substitutes, and basically all technological change in existing goods is capital-augmenting (and therefore labor-replacing, given the gross substitutes assumption).
However, we constantly create new goods that have a high labor share of costs (e.g. the services transition). These goods keep increasing as a share of the economy and cause an increase in wages.
This idea is given some empirical support by Hubmer 2022 and theoretical clarity by Jones and Liu 2024, but it’s still just a conjecture. So I think the really important question about AI is whether the tons of new products it will enable will themselves be labor-intensive or capital-intensive. If the new products are capital-intensive, breaking with the historical trend, then I expect that the phenomenon you describe (good 2’s productivity doesn’t grow) will not happen.
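To make the gross-substitutes logic concrete, here is a minimal sketch in standard CES notation (my own illustration, not anything taken from Hubmer or from Jones and Liu). Suppose output is

$$Y = \left[ (A_K K)^{\frac{\sigma - 1}{\sigma}} + (A_L L)^{\frac{\sigma - 1}{\sigma}} \right]^{\frac{\sigma}{\sigma - 1}},$$

so that under competitive factor pricing the labor share is

$$s_L = \frac{(A_L L)^{\frac{\sigma - 1}{\sigma}}}{(A_K K)^{\frac{\sigma - 1}{\sigma}} + (A_L L)^{\frac{\sigma - 1}{\sigma}}}.$$

With gross substitutes ($\sigma > 1$) the exponent $\frac{\sigma - 1}{\sigma}$ is positive, so sustained growth in $A_K$ drives $s_L$ toward zero: capital-augmenting progress is effectively labor-replacing. With $\sigma < 1$ the same progress would raise the labor share instead, which is why the substitutes assumption is doing real work in the conjecture above.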
Yeah, I was referring more to whether it can bring new ways of spending money to improve the world. There will be new market failures to solve, new sorts of technology that society could gain from accelerating, and new ways to get traction on old problems.
Similar to Ollie’s answer, I don’t think EA is prepared for the world in which AI progress goes well. I expect that if that happens, there will be tons of new opportunities for us to spend money/start organizations that improve the world in a very short timeframe. I’d love to see someone carefully think through what those opportunities might be.
A history of ITRI, Taiwan’s national electronics R&D institute. It was established in 1973, when Taiwan’s income was less than Pakistan’s income today. Yet it was single-handedly responsible for the rise of Taiwan’s electronics industry, spinning out UMC, MediaTek and most notably TSMC. To give you a sense of how insane this is, imagine that Bangladesh announced today that they were going to start doing frontier AI R&D, and in 2045 they were the leaders in AI. ITRI is arguably the most successful development initiative in history, but I’ve never seen it brought up in either the metascience/progress community or the global dev community.
I didn’t; my focus here is on orienting people towards growth theory, not empirics.
I don’t understand this view. Would they want their initiative to be run by incompetent people? If not, in what world do they not train their staff? The fact that they also tacked on an expectation that they would not migrate does not mean that expectation was pivotal in their decision.
I think Jason is saying that the “support to emigrate” was limited to recommendations.
Got it, yes I agree now.
Yes, continuity doesn’t rule out St Petersburg paradoxes. But I don’t see how unbounded utility leads to a contradiction. Can you demonstrate it?
Continuity doesn’t imply your utility function is bounded, just that it never takes on the value “infinity”, i.e. for any value it takes on, there are higher and lower values that can be averaged to reach that value.
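To spell out the standard construction: take an unbounded (but everywhere finite) utility scale and a lottery that pays $2^n$ utils with probability $2^{-n}$ for $n = 1, 2, 3, \dots$ Its expected utility is

$$\sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,$$

even though the utility function never takes the value infinity at any single outcome. That’s the sense in which continuity rules out infinite utilities but not St Petersburg-style divergence.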
Maximizing expected utility is not the same as maximizing expected value. The latter assumes risk neutrality, but vNM is totally consistent with maximizing expected utility under arbitrary levels of risk aversion, meaning that it doesn’t provide support for your view expressed elsewhere that risk aversion is inconsistent with vNM.
The key point is that there is a subtle difference between maximizing a linear combination of outcomes, vs maximizing a linear combination of some transformation of outcomes. That transformation can be arbitrarily concave, such that we would end up making a risk averse decision.
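A toy illustration, with $u(x) = \sqrt{x}$ chosen purely for concreteness: a 50/50 gamble over \$0 and \$100 has expected value \$50 but expected utility $\tfrac{1}{2}\sqrt{0} + \tfrac{1}{2}\sqrt{100} = 5$, whereas a sure \$36 gives utility $\sqrt{36} = 6$. An expected-utility maximizer with this concave $u$ therefore prefers the certain \$36 to a gamble worth \$50 in expectation, which is risk-averse behavior that is fully consistent with vNM.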
Apt time to plug an analysis I did a while ago of paying farmers in India not to burn their crop stubble. It’s primarily a (pretty effective) air quality intervention, but I pulled together some numbers that suggest it also averts GHGs at $36/ton of CO2e, which would probably satisfy a lot of climate funders!
I’m referring to why it doesn’t get brought up by the opposers of Trump tariffs, who clearly do not think that trade is zero sum (unless they somehow think that tariffs benefit foreigners and hurt Americans). The liberal American opposition to tariffs is totally silent on their effects abroad.
Tariffs on manufactured goods are likely incident on manufacturing workers, which is a way in which they can increase poverty, though probably not extreme $1/day poverty. Regardless, the general point goes through: they will reduce the incomes of a group of people who are generally not well off.
Love this analysis, and I’ve been wondering why no one talks about it. There are two motivations that make sense to me for why analysts don’t talk about this:
Political framing—putting American interests first is the way to persuade policymakers to listen to you.
Genuine nationalism—these analysts actually care more about the harms to Americans than to foreigners.
It bothers me to not be able to distinguish between these.
If you haven’t read it, this article is a convincing argument for why containing harmful policies by the West should be a main focus for development policy.
Shooting from the hip here—if the future of AI progress is inference-time scaling, that seems inherently “safer”/less prone to power-seeking. Expensive inference means that a model is harder to reproduce (e.g. can’t just upload itself somewhere else, because without heavy compute its new version is relatively impotent) and harder for rogue actors to exploit (since they will also need to secure compute for every action they make it do).
If this is true, it suggests that AI safety could be advanced by capabilities research into AI architecture that can be more powerful yet also more constrained in individual computations. So is it true?
The Humane League, EA Animal Welfare Fund, GiveWell. Amounts were small but I have something planned for next year...
I think your position compels you to say that not only is it better to donate to AW over GHD, it is actually better to set money on fire rather than donate it to GHD. Or spend it on a yacht, or a castle, or a pile of video games. With that framing, I think you’re back down to 1% territory.
Merely subsidizing nets, as opposed to free distribution, used to be a much more popular idea. My understanding is that that model was nuked by this paper showing that demand for nets falls discontinuously at any positive price (a 60 percentage point reduction in demand when going from a 100% subsidy to a 90% subsidy). So unless people’s valuation of their children’s lives is implausibly low, people are making mistakes in their choice of whether or not to purchase a bednet.
New Incentives, another GiveWell top charity, can move people to vaccinate their children with very small cash transfers (I think $10). The fact that $10 can mean the difference between whether people protect their children from life-threatening diseases or not is crazy if you think about it.
This is not a rare finding. This paper found very low household willingness to pay for cleaning up contaminated wells, which cause childhood diarrhea and thus death. Their estimates imply that households in rural Kenya are willing to pay at most $770 to prevent their child’s death, which just doesn’t seem plausible. Ergo, another setting where people are making mistakes. Another example: demand for motorcycle helmets is stupidly low and implies that Nairobi residents value a statistical life at $220, less than 10% of annual income. Unless people would actually rather die than give up 10% of their income for a year, this is clearly another case where people’s decisions do not reflect their true values.
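For anyone unfamiliar with how such figures are backed out, the standard revealed-preference calculation is, schematically,

$$\text{VSL} = \frac{\text{WTP for the safety good}}{\Delta p(\text{death averted})}.$$

With purely made-up numbers for illustration: if someone declines to pay \$20 for a helmet they believe cuts their annual risk of death by 1 in 1,000, their implied value of a statistical life is at most \$20 / 0.001 = \$20,000. The $220 Nairobi figure presumably comes from this kind of calculation applied to actual helmet prices and crash risks.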
This is not that surprising if you think about it. People in rich countries and poor countries alike are really bad at investing in preventative health. Each year I dillydally on getting the flu vaccine, even though I know the benefits are way higher than the costs, because I don’t want to make the trip to CVS (an hour out of my day, max). My friend doesn’t wear a helmet when cycling, even at night or in the rain, because he finds it inconvenient. Most of our better health in the rich world doesn’t come from us actively making better health decisions, but from our environment enabling us to not need to make health decisions at all.
Felt a little scared realizing that that episode is over 3 years old. It’s such a great one and I return to it often!
Serious question: doesn’t that cut against the efficacy of corporate campaigns? How would an organization ever know if the company was respecting their promise?