Hi Alice, thanks for the datapoint. It’s useful to know you have been a LessWrong user for a long time.
I agree with your overall point that the people we want to reach would be on platforms that have a higher signal-to-noise ratio.
Here are some reasons why I think it might still make sense to post short-form (not trying to convince you, I just think these arguments are worth mentioning for anyone reading this):
Even if there are more people we want to reach who watch long-form vs. short-form (or even who read LessWrong), what actually matters is whether short-form content is neglected, and whether the people who watch short-form would end up watching long-form anyway. I think there's a case for it being neglected, but I agree that a lot of potentially impactful people who watch TikTok probably also watch YouTube.
The super-agentic people who have developed substantial “cog sec” and manage to not look at any social media at all would probably only be reachable via LessWrong / arXiv papers, which is an argument that undermines most AI Safety comms, not just short-form. To that I’d say:
I remember Dwarkesh saying somewhere that 30% of his podcast growth comes from short-form. This hints at short-form bringing in potential long-form viewers / listeners, and those Dwarkesh listeners are people we'd want to reach.
YouTube pushes short-form aggressively, and on platforms like Instagram it's even harder to ignore.
It's possible to not use Instagram at all, and to disable short-form recommendations on YouTube, but every time you add a "cog sec" criterion you're filtering out even more people. (A substantial share of my short-form views come from posting on YT Shorts, and I'm planning to extend to Instagram soon.)
Similarly to what @Cameron Holmes argues below, broad public awareness is also a nice externality, not just getting more AI Safety talent.
You could imagine reaching people indirectly (think your friend who does watch short-form content talks to you about what they’ve learned at lunch).
When I actually look at the data on who watches my short-form content, it's mostly older people (>24yo, even >34yo) from high-income countries like the US. Surprisingly, it's not younger people (who you might expect to have shorter attention spans / be less agentic).
Makes sense, I agree that neglectedness is still pretty high here even though more people are getting into this side of comms. I'm working on broadly similar things, but not explicitly short-form video content.