Head of Video at 80,000 Hours
(Opinions here are my own by default, though I will sometimes speak in a professional capacity.)
Personal website: www.chanamessinger.com
That means the world, Toby. On behalf of the whole team, thank you!
Well, everything in American politics sucks.
But! There are probably things to actually do about it.
I’m looking for competent, agentic people to volunteer, fundraise, or work full-time on evidence-based preservation of American democracy.
(not in an 80k capacity)
DM me if interested.
IMO that’s a different category—there’s a lot of that kind of thing as well and I’m glad it exists but I think it’s useful to separate out.
Thank you!
Oh, nice!
Thanks for this. I’ve read through the whole thing, though I haven’t thought about the numbers in depth yet. I’m hoping to write a forum post with my retrospective on the AI in Context video at some point!
A few quick thoughts which I imagine won’t be very new to people:
Comments and comment analysis could also be a proxy for engagement and the quality of engagement.
Someone said that it would be hard to predict future success from AI in Context based only on our one big video, and I strongly agree. We’re hoping to release our next one within the next month, and I’m really excited about it, but by default we should expect a lot of regression to the mean. (Note: I wouldn’t even think of us as having two videos; the other one is just a channel trailer we threw up to have something to introduce people to the channel.)
I like this question on the value of subsequent viewer minutes, and I don’t currently have a take. I think some complicating factors are:
- For one thing, it seems like a 5-minute video wouldn’t do very well on YouTube, so making a lot of 5-minute videos instead of one 45-minute video isn’t necessarily a real option; 45-minute videos just have a niche. You might still want to make more 20-minute videos and fewer 45-minute videos.
- I’m also not convinced that effort scales with time. Certainly editing time does, but often what you’re trying to do (at least what we’re trying to do) is tell a story, and there’s a certain length that allows you to tell that story. So it’s not convertible or fungible in the way that it might naively appear.
- To the point above about telling a story, I think part of the value of a video is whether people come away with an overall sense of what the video is about in a way that’s memorable (the takeaways), and that might require telling a good story. Some stories might take 20 minutes to tell and some might take 45. Maybe you want to focus on the stories that take less time to tell if that makes the video a lot cheaper in time or money, but as I say, I don’t think effort scales that way for us.
For what it’s worth, we’re not currently focused very heavily on reaching the right target audience. We’re doing product validation to see whether we know how to make good videos, but we’ll be excited to think about that more in the future.
Mostly this is based on vibes, and on the MIRI team trying hard and seeming very successful: getting a lot of buzz, great blurbs, some billboards, etc.
I saw this tweet:
E.g., the book is likely to become a NYT bestseller. The exact position can be improved by more pre-orders. (The figure is currently at around 5k pre-orders, according to the q&a; +20k more would make it a #1 bestseller).
ChatGPT says about that:
If preorders = 5k, you’re probably looking at 8k–15k total copies sold in week 1 (preorders + launch week sales).
Recently, nonfiction books debuting around 8k–12k week-1 copies often chart #8–#15 on the NYT list.
Lifetime sales ranges:
- Conservative: 20k–30k copies total (good for a nonfiction debut with moderate buzz).
- Optimistic: 40k–60k (if reviews, media, podcasts, or TikTok keep it alive).
- Breakout: 100k+ (usually requires either a viral moment, institutional adoption, or the author becoming part of a big public debate).
Is that a lot? I don’t actually know; my guess would be that it’s not that many, but it’s a decent number and might get a lot of buzz, commentary, etc. This is a major crux, so I’d be interested in takes.
I think the arguments here are clear, but let me know if not.
E.g.:
- If you have next steps for people (BlueDot, CEA, MATS), be ready to retweet / restack MIRI’s materials and say “if you care about this, here’s a way to get involved.”
- Similarly, maybe pitch MIRI on putting your org / next steps on their landing page for the book and see if they think that makes sense.
- Landing page / resource hub: a “So you just read the MIRI book?” page that curates your content, fellow orgs’ resources, and next steps. Make it optimized for search and linkable.
- Other?
Very interested in takes!
Thank you so much!
Amazing
I hear this; I don’t know if this is too convenient or something, but given that you were already concerned about the prioritization 80K was putting on AI (and I don’t at all think you’re alone there), I hope there’s something more straightforward and clear about the situation as it now stands, where people can opt in or out of this particular prioritization, or of hearing the case for it.
Appreciate your work as a university organizer—thanks for the time and effort you dedicate to this (and also hello from a fellow UChicagoan, though many years ago).
Sorry I don’t have much in the way of other recommendations; I hope others will post them.
I think others at 80k are best placed to answer this (for time zone reasons I’m most active in this thread right now), but for what it’s worth, I’m worried about the loss at the top of the EA funnel! I think it’s worth it overall, but I think this is definitely a hit.
That said, I’m not sure AI risk has to be abstract or speculative! AI is everywhere; I think it feels very real to some people (realer to some than to others), and the problems we’re encountering are rapidly becoming less speculative (we have papers showing at least some amount of alignment faking, scheming, obfuscation of chain of thought, reward hacking, all that stuff!).
One question I have is how much it will be the case in the future that people looking for a general “doing good” framework will in fact bounce off of the new 80k. For instance, it could be the case that AI is so ubiquitous that it would feel totally out of touch to not be discussing it a lot. More compellingly to me, I think it’s 80k’s job to make the connection; doing good in the current world requires taking AI and its capabilities and risks seriously. We are in an age of AI, and that has implications for all possible routes to doing good.
I like your take on reputation considerations; I think lots of us will definitely have to eat non-zero crow if things really plateau, but I think the evidence is strong enough to care deeply about this and prioritize it, and I don’t want to obscure that we believe that for the reputational benefit.
Hey Zach,
(Responding as an 80k team member, though I’m quite new)
I appreciate this take; I was until recently working at CEA, and was in a lot of ways very, very glad that Zach Robinson was all in on general EA. It remains the case (as I see it) that, from a strategic and moral point of view, there’s a ton of value in EA in general. It says what’s true in a clear and inspiring way, a lot of people are looking for a worldview that makes sense, and there’s still a lot we don’t know about the future. (And, as you say, non-fanaticism and pluralistic elements have a lot to offer, and there are some lessons to be learned about this from the FTX era.)
At the same time, when I look around the EA community, I want to see a set of institutions, organizations, funders, and people that are live players, responding to the world as they see it, making sure they aren’t missing the biggest thing currently happening (or, for an org like 80k whose main jobs include communicating important things, making sure they aren’t letting their audiences miss it). Most importantly, I want people to act on their beliefs (with appropriate incorporation of heuristics, rules of thumb, outside views, etc.). And to the extent that 80k staff and leadership’s beliefs changed with the new evidence, I’m excited for them to be acting on it.
I wasn’t involved in this strategic pivot, but when I was considering whether to join, I was excited to see a certain kind of leaping to action in the organization.
It could definitely be a mistake even within this framework (by causing 80k not to appeal to parts of its potential audience), or empirically (on the size of AI risk, or the sizes of other problems), or long term (because of the damage it does to the EA community or its intellectual lifeblood, i.e. eating the seed corn). In the past I’ve worried that various parts of the community were jumping too fast into what’s shiny and new, but 80k has been talking about this for more than a year, which is reassuring.
I think the 80k leadership have thoughts about all of these, but I agree that this blog post alone doesn’t fully make the case.
I think the right answer to these uncertainties is some combination of digging in and arguing about them (as you’ve started here; maybe there’s a longer conversation to be had) and waiting to see how these bets turn out.
Anyway, I appreciate considerations like the ones you’ve laid out because I think they’ll help 80k figure out if it’s making a mistake (now or in the future), even though I’m currently really energized and excited by the strategic pivot.
We’ve started adding support for search operators in the search text box. Right now you can use the “user” operator to filter by author, and the “topic” operator to filter by topic, though these will currently only do exact matches and are case-sensitive. Note that there is already a topic filter on the left side, if that is more convenient for you.
Oooh, I’m especially excited about this for comments, but it looks like it doesn’t work for comments; is that right?
I like the idea of past editions living somewhere (do they now?) to avoid link rot, allow looking at the history of things, etc. Maybe I’d advocate for putting them all somewhere CEA owns as well, in case Substack stops being the right place.
Not sure if that’s the same distinction I would make, but broadly it just takes a long time to write a full script that we’re happy with, which includes figuring out the right structure, the high-level narratives, the beats we want to hit, the takeaways, giving it a good emotional arc, etc.