CS student at the University of Southern California. Previously worked for three years as a data scientist at a fintech startup. Before that, four months on a work trial at AI Impacts. Currently working with Professor Lionel Levine on language model safety research.
aogara
Glad you’re interested in EA! Looks like a great background for various AI and AI safety jobs. One opportunity that could be cool: https://second-bellflower-54f.notion.site/Co-Founder-for-FTX-Funded-Project-ef448c5dc3864939b919be3c79114fbf
Fair enough. Both seem plausible to me, we’d probably need more evidence to know which one would require more compute.
This seems like a reasonable assumption for other anchors such as the Lifetime and the Neural Network Horizon anchors, which assume that training environments for TAI are similar to training environments used for AI today. But it seems much more difficult to justify for the evolution anchor, which Ajeya admits would be far more computationally intensive than storing text or simulating a deterministic Atari game.
This post argues that the evolutionary environment is at least as complex as the brains of the organisms within it, while the second paragraph of the above quotation disagrees. Neither argument seems detailed enough to definitively answer the question, so I’d be interested to read any further research on the two questions proposed in the post:
Coming up with estimates of the least fine-grained world that we would expect to be able to produce intelligent life if we simulated natural selection in it.
Calculating how much compute it would take to actually simulate it (a rough sketch of how such an estimate could be structured is below).
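As a starting point on that second question, here is a minimal back-of-envelope sketch in Python. Every number below is a hypothetical placeholder, not a figure from Ajeya’s report or from the post; the point is only to show how an environment-simulation estimate could be structured and compared against a brains-only estimate.

```python
# Toy back-of-envelope estimate of the compute needed to simulate an
# evolutionary environment, versus simulating only the organisms' brains.
# All numbers are hypothetical placeholders for illustration.

SECONDS_OF_EVOLUTION = 1e16         # roughly a billion years, in seconds
ENV_TIMESTEPS_PER_SECOND = 1        # how finely the world is time-sliced
ENV_CELLS = 1e15                    # spatial resolution of the simulated world
FLOP_PER_CELL_UPDATE = 1e2          # physics/chemistry cost per cell per step

ORGANISMS_ALIVE = 1e20              # average population at any given time
FLOP_PER_ORGANISM_PER_SECOND = 1e4  # average nervous-system compute

env_flop = (SECONDS_OF_EVOLUTION * ENV_TIMESTEPS_PER_SECOND
            * ENV_CELLS * FLOP_PER_CELL_UPDATE)
brain_flop = (SECONDS_OF_EVOLUTION * ORGANISMS_ALIVE
              * FLOP_PER_ORGANISM_PER_SECOND)

print(f"Environment simulation: ~{env_flop:.0e} FLOP")
print(f"Brains only:            ~{brain_flop:.0e} FLOP")
print(f"Ratio (env / brains):   ~{env_flop / brain_flop:.0e}")
```

Which term dominates depends entirely on the placeholder values, which is exactly why pinning down the two questions above matters.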
I basically agree with this. On 1, undergrad degrees aren’t a great proxy, but the people listed on the LTFF site in particular are all career engineers. On 2, your description sounds like the correct general case, but in a case where non-policy people are questioning the effectiveness of any policy work on the grounds that policy is ineffective, I would expect people who’d worked on it to usually have a brighter view, given that they’ve chosen to work on it. 3 is of course up for debate and the main question.
Yeah I’ve only listened to the Tyler Cowen one but I thought it was great. Tyler kept objecting to utilitarianism using various thought experiments and arguments, and Will had some pretty interesting responses.
That’s really cool! Seems like exactly the kind of person you’d want for policy grantmaking, with previous experience in federal agencies, think tanks, and campaigns. Thanks for sharing.
Ah, no worries. Are there any new grantmakers with policy backgrounds?
All four current fund managers at LTFF have degrees in computer science, and none have experience in policy. Similarly, neither of OpenPhil’s two staff members on AI Governance has experience working in government or policy organizations. These grantmakers do incredible work, but this seems like a real blind spot. If there are ways that policy can improve the long-term future, I would expect that grantmakers with policy expertise would be best positioned to find them.
EDIT: See below for the new LTFF grantmaker with exactly this kind of experience :)
GPT-3 was released in June 2020. Meta didn’t release OPT until May 2022. They did this after open source replications by EleutherAI and others, and after more impressive language models had been released by DeepMind (Gopher, Chinchilla) and Google (PaLM). According to Meta’s own evaluation in Figure 4 of the OPT paper, their model still fails to perform as well as GPT-3.
Meta also recently lost many of their top AI scientists [1]. They disbanded FAIR, their dedicated AI research group, and instead have put all ML and AI researchers on product-focused teams [2].
Meta seems ~2 years behind OpenAI and DeepMind in the AI race. They are prioritizing video games, not AI, as their central focus in the next 5-10 years. Zuckerberg must have longer timelines than many other people, or else he’d be jumping on this economic opportunity. As best I can tell, OpenAI, DeepMind, and Google Brain are head and shoulders above any other non-Chinese competition and are therefore responsible for the ongoing race to AGI.
[1] https://www.cnbc.com/amp/2022/04/01/metas-ai-lab-loses-some-key-people.html
This looks really cool, thanks for sharing. Would you be able to say more about who the audience is, and how you’ll publicize this writing? The venue of publication seems like one of the more important factors in determining the impact of the writing, and different venues call for different writing styles. For example, I’d write very different pieces for a Vox explainer, a Brookings report, a published paper, or a PDF attached to an email. Where do you plan to publish by default? And do you think it would be worthwhile to identify and write for a specific venue, perhaps by working with relevant coauthors?
This seems like a great opportunity for independently funded engineers to work with professors without receiving funding from their universities. My understanding is that the Fund for Alignment Research (FAR) does exactly this. They hire their own software engineers with their own funding, and then choose researchers who need engineering talent to work with on a project basis. FAR only focuses on AI Safety, but this kind of organization could be valuable for other fields. Individuals could also do this on a freelance basis with a grant from e.g. FTX or LTFF.
This seems like a good idea to me. I didn’t donate immediately when I had my first job because I still felt financially illiterate and I didn’t have a great sense of my overall spending and income trends. A few years later, when I had a few months runway in savings and a better sense of my financial position, I went back and made donations proportional to what I’d earned in years prior. This had the additional benefit of clustering my tax writeoffs in a single year, resulting in a nice tax refund.
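For anyone curious why clustering donations into one year can produce a larger refund, here’s a minimal sketch. It assumes US-style taxes, where you take either the standard deduction or your itemized deductions; all dollar amounts and the tax rate are made up for illustration and aren’t my actual figures.

```python
# Toy illustration of "bunching" charitable donations into one tax year.
# Assumes a US-style choice between the standard deduction and itemizing;
# the dollar amounts and marginal rate are made up for illustration.

STANDARD_DEDUCTION = 13_000
MARGINAL_RATE = 0.24
ANNUAL_DONATION = 10_000

def tax_savings(donations_by_year):
    """Extra deduction value from donating, given one donation total per year."""
    savings = 0.0
    for donated in donations_by_year:
        itemized = donated  # ignoring other itemizable expenses for simplicity
        savings += max(itemized, STANDARD_DEDUCTION) - STANDARD_DEDUCTION
    return savings * MARGINAL_RATE

# Donating 10k in each of three years vs. 30k bunched into a single year.
spread = tax_savings([ANNUAL_DONATION] * 3)
bunched = tax_savings([3 * ANNUAL_DONATION, 0, 0])
print(f"Spread out: ${spread:,.0f} in tax savings")
print(f"Bunched:    ${bunched:,.0f} in tax savings")
```

In this toy case, spreading the donations out never clears the standard deduction, while bunching them does, which is where the refund comes from.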
Hi Cullen, this is a fantastic sequence. Have you considered publishing it as a paper on arXiv or the like, so that it can be more readily cited in other work?
I recently wrote a summary of work on “Legal AI”, arguing that it is an important research direction for alignment but noting that “there exists no thorough overview of legal AI from a longtermist perspective.” Clearly that was incorrect: you’ve written one. I had not yet come across your sequence, in part because it was not cited as a motivation for any of the papers I’d read.
I’m planning to follow up within the next few weeks with more concrete questions about potential research directions on this topic, but just wanted to leave that brief note and say kudos on the work.
I agree, it doesn’t seem like an actionable opportunity to help with our current resources. I’d be more optimistic about sending supplies to reduce suffering because I don’t think mass civilian casualties actually threaten the stability of the regime. But that’s debatable and difficult to do.
I’m mainly interested because the North Korean government seems like one of the largest sources of suffering in the world, and EAs should stay aware of the world’s biggest challenges even if we’re currently better able to help in other places.
North Korea faces Covid and Drought [Linkpost]
Great review, thanks. Andrew Leigh is a really interesting figure; I’m surprised I haven’t seen more EA writing about him. Excited for more policy proposals about catastrophic risks over the coming years.
The organizers of such a group are presumably working towards careers in AI safety themselves. What do you think about the opportunity cost of their time?
To bring more people into the field, this strategy seems to delay the progress of the most advanced students currently in the AI safety pipeline. Broad awareness of AI risk among potentially useful individuals should absolutely be higher, but it doesn’t seem like the #1 bottleneck compared to developing people from “interested” to “useful contributor”. If somebody is on that cusp themselves, should they focus on personal development or outreach?
Trevor Levin and Ben Todd had an interesting discussion of toy models on this question here: https://forum.effectivealtruism.org/posts/ycCBeG5SfApC3mcPQ/even-more-early-career-eas-should-try-ai-safety-technical?commentId=tLMQtbY3am3mzB3Yk#comments
This explanation makes sense to me, but I wonder if there is a better middle ground where regrantors benefit from a degree of publicity.
This comes from personal experience. I received an FTX regrant for upskilling in technical AI safety research, as did several other students in positions similar to mine. I did not know my regrantor personally, but rather messaged them on the EA Forum and hopped on a call to discuss careers in AI safety. They saw that I fit the profile of “potential AI safety technical researcher” and very quickly funded me without an extended vetting process. I would not have received my grant if (a) I didn’t often message people on the EA Forum or (b) I wasn’t willing to get on a call with a stranger without a clear goal in mind, both of which seem like poor screening criteria.
Perhaps it was an effective screen for “entrepreneurial” candidates, but I expect that an EA Forum post requesting applications could have produced several more grants of similar quality without overwhelming my regrantor. Regranting via personal connections reduces the pool of potential grantees to people who have thoroughly networked themselves within EA, which privileges paths like “move to the Bay” at the expense of paths like “go to your cheap state school with no EA group and study hard”. It’s a difficult line to walk and I’m not a grantmaker, but I think more public access might improve both the equity and quality of FTX regrants.
Edited to add: Given LTFF’s history of funding similar people and the drawbacks of regrantor publicity, FTX’s anonymity policy does seem reasonable to me. Appreciate the pushback.
This is a great set of guidelines for integrity. Hopefully more grantmakers and other key individuals will take this point of view.
I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics. I think your motivated reasoning critique of EA is the strongest argument that current EA priorities do not accurately represent the most impactful causes available. I still think EA is the best bet available for maximizing my expected impact, but I have baseline uncertainty that many EA beliefs might be incorrect because they’re the result of imperfect processes with plenty of biases and failure modes. It’s a very hard topic to discuss, but I think it’s worth exploring (a) how to limit our epistemic risks and (b) how to discount our reasoning in light of those risks.
Hey, thanks for sharing! I thought this was well researched and written. As somebody who’s pretty convinced by the arguments for AI risk, I do mostly disagree with it, but I’d just like to ask a question and share an interesting line of research:
First, do you think there was ever a time when climate change predictions were more similar to religious apocalypse claims? For example, before there was substantial evidence that the Earth was getting warmer, or when people first started hypothesizing how chemicals and the atmosphere worked. The greenhouse effect was first proposed in 1824, long before temperatures started to rise. Was the person who proposed it closer to a religious prophet than a scientist?
(I would say no, because scientists can make good predictions about future events by using theory and careful experiments. For example, Einstein predicted the existence of “gravitational waves” in 1916 based on theory alone, and his prediction wasn’t confirmed empirically until nearly 100 years later by the LIGO project. AI risk is similarly a prediction based on good theory and careful experiments that we can conduct today, despite the fact that we don’t have AGI yet and therefore don’t know for certain.)
Second, you mention that no existential harm has ever befallen humanity. It’s worth pointing out that, if it had, we wouldn’t be here talking about it today. Perhaps the reason we don’t see aliens in the sky is because existential catastrophes are common for intelligent life, and our survival thus far is a long string of good luck. I’m not an expert on this topic and I don’t quite believe all the implications, but there is a field of study devoted to it called anthropics, and it seems pretty interesting.
More on anthropics: https://www.briangwilliams.us/human-extinction/doomsday-and-the-anthropic-principle.html, https://nickbostrom.com/papers/anthropicshadow.pdf
Hope to read more from you again!