Next week for the 80,000 Hours Podcast I’ll be interviewing Carl Shulman, advisor to Open Philanthropy and a generally super-informed person on history, technology, possible futures, and a shocking number of other topics.
He has previously appeared on our show and the Dwarkesh Podcast:
Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity’s Far Future
Carl Shulman on the common-sense case for existential risk work and its practical implications
He has also written a number of pieces on this forum.
What should I ask him?
Why aren’t there more people like him, and what is he doing or planning to do about that?
Related question: How does one become someone like Carl Shulman (or Wei Dai, for that matter)?
I thought about this and wrote down some life events/decisions that probably contributed to becoming who I am today.
Immigrating to the US at age 10 knowing no English. Social skills deteriorated while learning language, which along with lack of cultural knowledge made it hard to make friends during teenage and college years, which gave me a lot of free time that I filled by reading fiction and non-fiction, programming, and developing intellectual interests.
Was heavily indoctrinated with Communist propaganda while in China, but leaving meant I then had no viable moral/philosophical/political foundations. Parents were too busy building careers as new immigrants and didn’t try to teach me values/traditions. So I had a lot of questions that I didn’t have ready answers to, which perhaps contributed to my intense interest in philosophy (ETA: and economics and game theory).
Had an initial career in cryptography, but found it a struggle to compete with other researchers on purely math/technical skills. Realized that my comparative advantage was in more conceptual work. Crypto also taught me to be skeptical of my own and other people’s ideas.
Had a bad initial experience with academic research (received nonsensical peer review when submitting a paper to a conference) so avoided going that route. Tried various ways to become financially independent, and managed to “retire” in my late 20s to do independent research as a hobby.
A lot of these can’t really be imitated by others (e.g., I can’t recommend people avoid making friends in order to have more free time for intellectual interests). But here is some practical advice I can think of:
Try to rethink what your comparative advantage really is.
I think humanity really needs to make faster philosophical progress, so try your hand at that even if you think of yourself as more of a technical person. The same may be true for solving social/coordination problems. (But see the next item.)
Somehow develop a healthy dose of self-skepticism so that you don’t end up wasting people’s time and attention arguing for ideas that aren’t actually very good.
It may be worth keeping an eye out for opportunities to “get rich quick” so you can do self-supported independent research. (Which allows you to research topics that don’t have legible justifications or are otherwise hard to get funding for, and pivot quickly as the landscape and your comparative advantage both change over time.)
ETA: Oh, here’s a recent LW post where I talked about how I arrived at my current set of research interests, which may also be of interest to you.
Maybe if/how his thinking about AI governance has changed over the last year?
Relatedly, I’d be interested to know whether his thoughts on the public’s support for AI pauses or other forms of strict regulation have updated since his last comment exchange with Katja, now that we have many reasonably high-quality polls on the American public’s perception of AI (much more concerned than excited), as well as many more public conversations.
A bit, but more on the willingness of AI experts and some companies to sign the CAIS letter and lend their voices to the view ‘we should go forward very fast with AI, but keep an eye out for better evidence of danger and have the ability to control things later.’
My model has always been that the public is technophobic, but that ‘this will be constrained like peaceful nuclear power or GMO crops’ isn’t enough to prevent a technology that enables DSA (decisive strategic advantage) and OOMs (orders of magnitude of growth). Nuclear power and GMO crops do exist; if AGI exists somewhere, that place outgrows the rest of the world if the rest of the world sits on the sidelines. If leaders’ understanding of the situation is that public fears are erroneous, and going forward with AI means a hugely better economy (and thus popularity for incumbents) and avoiding a situation where abhorred international rivals can safely disarm their military, then I don’t expect it to be stopped. So the expert views, as defined by who the governments view as experts, are central in my picture.
Visible AI progress like ChatGPT strengthens ‘fear AI disaster’ arguments but at the same time strengthens ‘fear being behind in AI/others having AI’ arguments. The kinds of actions that have been taken so far are mostly of the latter type (export controls, etc), and measures to monitor the situation and perhaps do something later if the evidential situation changes. I.e. they reflect the spirit of the CAIS letter, which companies like OpenAI and such were willing to sign, and not the pause letter which many CAIS letter signatories oppose.
The evals and monitoring agenda is an example of going for value of information rather than banning/stopping AI advances, like I discussed in the comment, and that’s a reason it has had an easier time advancing.
Nice to know, Rob! I have really liked the podcasts Carl did. You may want to link to Carl’s (great!) blog in your post too.
In general, I would be curious to know more about how Carl thinks about determining how many resources should go into each cause area, which I do not recall being discussed much in Carl’s 3 podcasts. Some potential segues:
Open Phil Should Allocate Most Neartermist Funding to Animal Welfare. Carl shared in the comments his thoughts on Rethink Priorities’ moral weight project.
Rethink Priorities’ CURVE sequence and cross-cause cost-effectiveness model.
How would Carl allocate Open Philanthropy’s funding? I am not sure how easy it would be to discuss this given Carl has been advising Open Phil, but I like your policy of letting guests decide which questions to answer.
Which areas are under or overrated, and why.
Carl has knowledge about lots of topics, very much like Anders Sandberg. So I think the questions I shared to ask Anders are also good questions for Carl:
Importance of the digital minds stuff compared to regular AI safety; how many early-career EAs should be going into this niche? What needs to happen between now and the arrival of digital minds? In other words, what kind of a plan does Carl have in mind for making the arrival go well? Also, since Carl clearly has well-developed takes on moral status, what criteria he thinks could determine whether an AI system deserves moral status, and to what extent.
Additionally—and this one’s fueled more by personal curiosity than by impact—Carl’s beliefs on consciousness. Like Wei Dai, I find the case for anti-realism as the answer to the problem of consciousness weak, yet this is Carl’s position (according to this old Brian Tomasik post, at least), and so I’d be very interested to hear Carl explain his view.
IIRC Carl had a $5M discretionary funding pot from OpenPhil. What has he funded with it?
Not much new on that front besides continuing to back the donor lottery in recent years, for the same sorts of reasons as in the link, and focusing on research and advising rather than sourcing grants.
My understanding is that he believes that full non-indexical conditioning has solved many (most? all?) problems in anthropics. It might be interesting to hear his views on what has been solved, and what is remaining.
I’d like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven’t been thought about very much e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, preparing for digital minds (ideally including specific exercises/topics/reading/etc.).
I’m particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.
Can you ask him whether it’s rational to treat significant existential risk from AGI as the default position, or whether one has to make a technical case for it coming with x-risk?
How did he deal with two-envelope considerations in his calculation of moral weights for OpenPhil?
I have never calculated moral weights for Open Philanthropy, and as far as I know no one has claimed that. The comment you are presumably responding to began by saying I couldn’t speak for Open Philanthropy on that topic, and I wasn’t.