I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I’m always happy to chat—if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
Karthik Tadepalli
Maximizing a linear objective always leads to a corner solution. So to get an optimal interior allocation, you need to introduce nonlinearity somehow. Different approaches to this problem differ mainly in how they introduce and justify nonlinear utility functions. I can't see where the nonlinearity is introduced in your framework, which makes me suspect the credence-weighted allocation you derive is not actually the optimal allocation even under model uncertainty. Am I missing something?
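To illustrate with a toy example (my notation, not anything from the post): suppose there are two models with credences $w_1, w_2$ and we allocate a share $x$ of the budget to the intervention favored by the first model.

$$\max_{x \in [0,1]} \; w_1 x + w_2 (1 - x)$$

This linear objective is maximized at $x^* = 1$ if $w_1 > w_2$ and at $x^* = 0$ otherwise, i.e. at a corner. By contrast, a concave objective such as

$$\max_{x \in (0,1)} \; w_1 \log x + w_2 \log(1 - x)$$

has the interior optimum $x^* = w_1/(w_1 + w_2)$, which is exactly a credence-weighted split. So it seems like the credence-weighted allocation has to be getting concavity (or something playing its role) from somewhere in the framework.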
Apropos of nothing, it will be funny to see SummaryBot summarizing an AI summary.
I think the phrasing is probably a joke, but the substance is the same as the post's.
For what it's worth, "not consistently candid" is definitely a joke about the OpenAI board saying that Sam Altman was "not consistently candid" with them, rather than a serious accusation.
Thanks for the link to your thoughts on why a crash is likely. I think you underestimate the likelihood of the US government propping up AI companies. Just because they didn't invest money in the Stargate expansion doesn't mean they aren't reserving the option to do so later if necessary. It seems clear that Elon Musk is personally very invested in AI. Even aside from his personal involvement, the fact that China/DeepSeek is in the mix points towards even a normal government offering strong support to American companies in this race.
If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don't see a crash happening.
The author of this post must be over the moon right now
IQ grew over the entire 20th century (the Flynn effect). Even if it's declining now, it is credulous to take a trend from a few decades and extrapolate it out for millennia, especially when that few-decade trend is itself a reversal of an even longer one.
Compare this to other trends that we extrapolate out for millennia – increases in life expectancy and income. These are much more robust. Income has been steadily increasing since the Industrial Revolution and life expectancy possibly for even longer than that. That doesn’t make extrapolation watertight by any means, but it’s a way stronger foundation.
Also, I don't know much about the social context for this article that you say is controversial, but it strikes me as really weird to say "here's an empirical fact that might have moral implications, but EAs won't acknowledge it because it's taboo and they're not truthseeking enough". That's putting the cart a few miles before the horse.
The True Believer by Eric Hoffer is a book about the psychology of mass movements. I think it contains important cautions for EAs thinking about their own relationship to the movement.
There is a fundamental difference between the appeal of a mass movement and the appeal of a practical organization. The practical organization offers opportunities for self-advancement, and its appeal is mainly to self-interest. On the other hand, a mass movement, particularly in its active, revivalist phase, appeals not to those intent on bolstering and advancing a cherished self, but to those who crave to be rid of an unwanted self. A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation.
I wanted to write a draft amnesty post about this, but I couldn’t write anything better than this Lou Keep essay about the book, so I’ll just recommend you read that.
Something that I personally would find super valuable is to see you work through a forecasting problem "live" (in text). Take an AI question that you would like to forecast, and then describe how you actually go about making that forecast: the information you seek out, how you analyze it, and especially how you make it quantitative. That would:
1. make the forecast process more transparent for someone who wanted to apply skepticism to your bottom line
2. help me "compare notes", i.e. work through the same forecasting question that you pose, come to a conclusion, and eventually see how my reasoning compares to yours.
This exercise does double duty as “substantive take about the world for readers who want an answer” and “guide to forecasting for readers who want to do the same”.
But neglectedness as a heuristic is very good precisely for narrowing down what you think the good opportunity is. Every neglected field is a subset of a non-neglected field. So pointing out that great grants have come in some subset of a non-neglected field doesn't tell us anything.
To be specific, it's really important that EA identifies the area within that non-neglected field where resources aren't flowing, to minimize funging risk. Imagine that AI safety polling had not been neglected and that in fact there were tons of think tanks who planned to do AI safety polling and tons of funders who wanted to make that happen. Then even though it would be important and tractable, EA funding would not be counterfactually impactful, because those hypothetical factors would lead to AI safety polling happening with or without us. So ignoring neglectedness would lead to us having low impact.
I consider myself good at sniffing out edited images, but I can't spot any signs of editing in the Balenciaga Pope image. Besides, for a deepfake to be useful, it only has to be convincing to a large minority of people, including very technologically unsophisticated people.
I read it as “providing enough funding for independent auditors of charities to exist and be financially sustainable”
Cluster thinking vs sequence thinking remains unbeaten as a way to typecast EA disagreements. It’s been a while since I saw it discussed on the forum. Maybe lots of newer EAs don’t even know about it!
Serious question: doesn’t that cut against the efficacy of corporate campaigns? How would an organization ever know if the company was respecting their promise?
I’ve been thinking about this post for days, which is a great sign, and in particular I think there’s a deep truth in the following:
Indeed, my guess is that people’s utility in the goods available today does have an upper asymptote, that new goods in the future could raise our utility above that bound, and that this cycle has been played out many times already.
I realize this is tangential to your point about GDP measurement, but I think Uzawa's theorem probably set growth theory back by decades. By axiomatizing that technical change is labor-augmenting, we became unable to speak coherently about automation, something that is only now changing. I think there is so much more we can understand about technical change than we currently do. My best guess of the nature of technological progress is as follows:
1. In the long run, capital and labor are gross substitutes, and basically all technological change in existing goods is capital-augmenting (and hence labor-replacing, by the gross substitutes assumption).
2. However, we constantly create new goods that have a high labor share of costs (e.g. the services transition). These goods keep increasing as a share of the economy and cause an increase in wages.
This idea is given some empirical support by Hubmer 2022 and theoretical clarity by Jones and Liu 2024, but it's still just a conjecture. So I think the really important question about AI is whether the tons of new products it will enable will themselves be labor-intensive or capital-intensive. If the new products are capital-intensive, breaking with historical trend, then I expect that the phenomenon you describe (good 2's productivity doesn't grow) will not happen.
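To make the first point concrete, here is a sketch in standard CES notation (my own illustration, not taken from Hubmer or from Jones and Liu):

$$Y = \left[\alpha (A_K K)^{\frac{\sigma - 1}{\sigma}} + (1 - \alpha)(A_L L)^{\frac{\sigma - 1}{\sigma}}\right]^{\frac{\sigma}{\sigma - 1}}, \qquad \sigma > 1.$$

With gross substitutes ($\sigma > 1$), growth in $A_K$ raises the effective capital input and pushes the labor share of income down, so capital-augmenting progress acts like labor replacement. The second point is the offsetting force: the composition of output keeps shifting toward newly created goods with a high labor share of costs, which props wages back up.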
Yeah, I was referring more to whether it can bring new ways of spending money to improve the world. There will be new market failures to solve, new sorts of technology that society could gain from accelerating, and new ways to get traction on old problems.
Similar to Ollie’s answer, I don’t think EA is prepared for the world in which AI progress goes well. I expect that if that happens, there will be tons of new opportunities for us to spend money/start organizations that improve the world in a very short timeframe. I’d love to see someone carefully think through what those opportunities might be.
A history of ITRI, Taiwan’s national electronics R&D institute. It was established in 1973, when Taiwan’s income was less than Pakistan’s income today. Yet it was single-handedly responsible for the rise of Taiwan’s electronics industry, spinning out UMC, MediaTek and most notably TSMC. To give you a sense of how insane this is, imagine that Bangladesh announced today that they were going to start doing frontier AI R&D, and in 2045 they were the leaders in AI. ITRI is arguably the most successful development initiative in history, but I’ve never seen it brought up in either the metascience/progress community or the global dev community.
I didn’t; my focus here is on orienting people towards growth theory, not empirics.
I certainly agree that you're right in describing why people diversify, but I think the interesting challenge is to understand under what conditions this behavior is optimal.
You’re hinting at a bargaining microfoundation, where diversification can be justified as the solution arrived at by a group of agents bargaining over how to spend a shared pot of money. I think that’s fascinating and I would explore that more.
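One toy version of that microfoundation (my own notation, just a sketch): suppose $n$ agents with bargaining weights $\theta_i$ that sum to one split a shared budget, and agent $i$ only cares about the share $x_i$ spent on their preferred cause. With linear utilities and a zero disagreement point, the asymmetric Nash bargaining solution solves

$$\max_{x_1, \dots, x_n \ge 0, \; \sum_i x_i = 1} \; \sum_i \theta_i \log x_i \quad \Longrightarrow \quad x_i^* = \theta_i,$$

so the pot gets split in proportion to bargaining power: full diversification, even though every individual agent would put everything on their own cause. The interesting next step would be to see how this changes when the agents' preferences over shares are more realistic than linear.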