I think about technological progress, global development, and AI’s economic impacts. I write about these topics on my blog, Beyond Imitation.
Karthik Tadepalli
The Oracle’s Gift
I don’t think of altruism as being completely selfless. Altruism is a drive to help other people. It exists in all of us to a greater or lesser extent, and it coexists with all of our other desires. Wanting things for yourself or for your loved ones is not opposed to altruism.
When you accept that—and the point Henry makes that it isn’t zero sum—there doesn’t seem to be any conflict.
Yes, nothing in this post seems less likely than an EA trying to convince socialists to become EAs and subsequently being convinced of socialism.
This is a valid statement but non-responsive to the actual post. The argument is that there is intuitive appeal in having a utility function with a discontinuity at zero (i.e. a jump in disutility from causing harm), and ~standard EV maximisation does not accommodate that intuition. That is a totally separate normative claim from arguing that we should encode diminishing marginal utility.
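To make the discontinuity concrete, one minimal form it could take (the linear shape and the fixed penalty $c$ are purely illustrative, not something from the post) is:

$$u(x) = \begin{cases} x & \text{if } x \ge 0 \\ x - c & \text{if } x < 0 \end{cases}, \qquad c > 0$$

Causing any harm at all then incurs an extra fixed disutility $c$ on top of the harm itself, which is exactly the jump that a smooth utility function under standard EV maximisation cannot reproduce.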
I’m obviously missing something trivial here, but also I find it hard to buy “limited org capacity”-type explanations for GW in particular (given total funding moved, how long they’ve worked, their leading role in the grantmaking ecosystem, etc.)
This should be very easy for you to buy! The opportunity cost of lookbacks is investigating new grants. It’s not obvious that lookbacks are the right way to spend limited research capacity. Worth remembering that GW only has around 30 researchers and makes grants in a lot of areas. And while they are a leading EA grantmaker, it’s only recently that their giving has scaled up enough to make them a notable player in the development ecosystem as a whole.
The relative range heuristic
Without this assumption, recursive self-improvement is a total non-starter. RSI relies on an improved AI being able to design future AIs (“we want Claude N to build Claude N+1”).
Skeptic says “longtermism is false because premises X don’t hold in case Y.” Defender says “maybe X doesn’t hold for Y, but it holds for case Z, so longtermism is true. And also Y is better than Z so we prioritize Y.”
What is being proven here? The prevailing practice of longtermism (AI risk reduction) is being defended by a case whose premises are meaningfully different from the prevailing practice. It feels like a motte and bailey.
It’s clearly not the case that asteroid monitoring is the only or even a highly prioritised intervention among longtermists. That makes it uncompelling to defend longtermism with an argument in which the specific case of asteroid monitoring is a crux.
If your argument is true, why don’t longtermists actually give a dollar to asteroid monitoring efforts in every decision situation involving where to give a dollar?
I certainly agree that you’re right as a description of why people diversify, but I think the interesting challenge is to understand under what conditions this behavior is optimal.
You’re hinting at a bargaining microfoundation, where diversification can be justified as the solution arrived at by a group of agents bargaining over how to spend a shared pot of money. I think that’s fascinating and I would explore that more.
Maximizing a linear objective always leads to a corner solution. So to get an optimal interior allocation, you need to introduce nonlinearity somehow. Different approaches to this problem differ mainly in how they introduce and justify nonlinear utility functions. I can’t see where the nonlinearity is introduced in your framework. That makes me suspect the credence-weighted allocation you derive is not actually the optimal allocation even under model uncertainty. Am I missing something?
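As a minimal sketch of the corner-solution point (the three causes and their per-dollar scores are made-up numbers, not anything from your framework), a linear objective over a fixed budget always puts everything on the single best option:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-dollar scores for three causes under a single model
# (made-up numbers, purely for illustration).
scores = np.array([1.0, 0.8, 0.3])

# Maximize scores @ x subject to x summing to 1 and x >= 0.
# linprog minimizes, so we negate the objective.
result = linprog(
    c=-scores,
    A_eq=np.ones((1, 3)),
    b_eq=[1.0],
    bounds=[(0, 1)] * 3,
)

print(result.x)  # [1. 0. 0.] -- the whole budget goes to the top-scoring cause
```

Any interior split, including a credence-weighted one, can only come out as optimal once something bends the objective: diminishing returns, risk aversion, bargaining weights, and so on.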
Apropos of nothing, it will be funny to see SummaryBot summarizing an AI summary.
I think the phrasing is probably a joke, but the substance is the same as the post.
For what it’s worth, “not consistently candid” is definitely a joke about the OpenAI board saying that Sam Altman was “not consistently candid” with them, rather than a statement of context.
Thanks for the link to your thoughts on why you think a crash is likely. I think you underestimate the likelihood of the US government propping up AI companies. Just because they didn’t invest money in the Stargate expansion doesn’t mean they aren’t reserving the option to do so later if necessary. It seems clear that Elon Musk is personally very invested in AI. Even aside from his personal involvement, the fact that China/DeepSeek is in the mix points towards even a normal government offering strong support to American companies in this race.
If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don’t see a crash happening.
The author of this post must be over the moon right now
IQ grew over the entire 20th century (the Flynn effect). Even if it’s declining now, it is credulous to take a trend over a few decades and extrapolate it millennia into the future, especially when that few-decade trend is itself a reversal of an even longer trend.
Compare this to other trends that we extrapolate out for millennia – increases in life expectancy and income. These are much more robust. Income has been steadily increasing since the Industrial Revolution and life expectancy possibly for even longer than that. That doesn’t make extrapolation watertight by any means, but it’s a way stronger foundation.
Also, I don’t know much about the social context for this article that you say is controversial, but it strikes me as really weird to say “here’s an empirical fact that might have moral implications, but EAs won’t acknowledge it because it’s taboo and they’re not truthseeking enough”. That’s putting the cart a few miles before the horse.
The True Believer by Eric Hoffer is a book about the psychology of mass movements. I think there are important cautions for EAs thinking about their own relationship to the movement.
There is a fundamental difference between the appeal of a mass movement and the appeal of a practical organization. The practical organization offers opportunities for self-advancement, and its appeal is mainly to self-interest. On the other hand, a mass movement, particularly in its active, revivalist phase, appeals not to those intent on bolstering and advancing a cherished self, but to those who crave to be rid of an unwanted self. A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation.
I wanted to write a draft amnesty post about this, but I couldn’t write anything better than this Lou Keep essay about the book, so I’ll just recommend you read that.
Something that I personally would find super valuable is to see you work through a forecasting problem “live” (in text). Take an AI question that you would like to forecast, and then describe how you actually go about making that forecast: the information you seek out, how you analyze it, and especially how you make it quantitative. That would:
- make the forecasting process more transparent for someone who wanted to apply skepticism to your bottom line
- help me “compare notes”, i.e. work through the same forecasting question that you pose, come to a conclusion, and eventually see how my reasoning compares to yours.
This exercise does double duty as “substantive take about the world for readers who want an answer” and “guide to forecasting for readers who want to do the same”.
I have long had the opposite criticism: that almost everything that gets high engagement on the Forum is lowest-common-denominator content, usually community-related posts or something about current events, rather than technical writing that has high signal and helps us make progress on a topic. So, in a funny way, I have also come to the same conclusion as you, but for the opposite reason.