Nice analysis, this is the sort of thing I like to see. I have some ideas for potential improvements that don’t require significant effort:
Alternative methods of evaluating videos, e.g. by view count rather than by view-minutes. I already did a bit of this in my other comment.
In that comment, I calculated channel rankings by views-per-dollar, and by an average of views-per-dollar + view-minutes-per-dollar.
You could also treat the value of a view as the logarithm of view-minutes, which feels about right to me. I couldn’t calculate that from just the spreadsheet; I’d need to modify the Python script, but that still shouldn’t be hard to do.
[ETA 3] Actually I’m not sure this is possible, since I don’t think there’s a way to see watch time for individual users. But maybe it’s possible if you can see bucketed view times, e.g. “X people watched the video for 1 to 2 minutes”? Then you could take the logarithm of each bucket (say, of its midpoint) and compute a viewer-weighted average, as in the sketch below.
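To make the bucket idea concrete, here’s a minimal sketch. The bucket boundaries and counts are hypothetical (YouTube Analytics doesn’t expose per-user watch time, so this data format is an assumption); it uses log(1 + minutes) so a zero-minute view contributes zero value:

```python
import math

# Hypothetical retention buckets: (min_minutes, max_minutes, viewers).
# The exact bucket format is an assumption, not real YouTube Analytics output.
buckets = [(0, 1, 40_000), (1, 2, 25_000), (2, 5, 20_000), (5, 10, 15_000)]

def avg_log_view_minutes(buckets):
    """Viewer-weighted average of log(1 + view-minutes), using each
    bucket's midpoint as its representative watch time."""
    total_viewers = sum(n for _, _, n in buckets)
    total_log = sum(n * math.log1p((lo + hi) / 2) for lo, hi, n in buckets)
    return total_log / total_viewers

print(avg_log_view_minutes(buckets))  # per-view value; multiply by views for a total
```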
How many views are from new viewers, and how much is it “preaching to the converted”?
An easy heuristic: assign more value to views-above-expectation. For example, if a channel routinely gets 10,000 views, assume most of those are coming from the same people, and discount them. But if one video gets 100,000 views, treat the extra 90,000 viewers as unique.
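A toy version of this discount; the repeat-viewer discount factor is an arbitrary placeholder, not something derived from the data:

```python
def adjusted_views(video_views, baseline_views, repeat_discount=0.2):
    """Discount views up to the channel's routine baseline (assumed to be
    mostly repeat viewers); views above the baseline get full credit.
    The 0.2 discount factor is an arbitrary placeholder."""
    routine = min(video_views, baseline_views)
    extra = max(video_views - baseline_views, 0)
    return repeat_discount * routine + extra

# A channel that routinely gets 10,000 views uploads a 100,000-view hit:
print(adjusted_views(100_000, 10_000))  # 0.2 * 10,000 + 90,000 = 92,000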
I believe YouTubers get data on how many views come from subscribers and how many new subscribers they get per video. You could discount views from pre-existing subscribers, and give full credit to views from non-subscribers and maybe extra credit to views that convert to a subscription.
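If that per-video breakdown is available, the weighting could look like this minimal sketch. All three weights are made up for illustration, and it assumes new subscriptions come from non-subscriber views:

```python
def weighted_views(sub_views, nonsub_views, new_subs,
                   sub_weight=0.3, convert_bonus=2.0):
    """Weight views by audience type: discounted credit for existing
    subscribers, full credit for non-subscribers, and extra credit for
    views that convert into a subscription. All weights are illustrative."""
    return (sub_weight * sub_views
            + 1.0 * (nonsub_views - new_subs)
            + convert_bonus * new_subs)

print(weighted_views(sub_views=6_000, nonsub_views=4_000, new_subs=500))
# 1,800 + 3,500 + 1,000 = 6,300
```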
[ETA] I would do a Bayesian adjustment based on the number of videos a channel has published. For example, AI in Context only has two videos and only one video with significant views, so it’s hard to predict whether they will be able to reproduce that success. Whereas Rob Miles has tons of videos, so I have a pretty good idea of how many views his next video is going to get. You could use a Bayesian algorithm to assign greater confidence to channels with many videos. You could even use the variance on a channel’s video views to calculate the posterior expected views (or view-minutes) on a new video.
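One way to cash this out is normal-normal shrinkage on log-views, a minimal sketch with made-up prior parameters (the prior would describe view counts across comparable channels):

```python
import numpy as np

def posterior_expected_views(video_views, prior_mean, prior_var):
    """Normal-normal shrinkage on log-views: channels with few videos or
    noisy view counts get pulled toward the cross-channel prior, while
    long-running channels are dominated by their own history."""
    logs = np.log(np.asarray(video_views, dtype=float))
    n = len(logs)
    # With a single video there's no sample variance; fall back to the prior's.
    sample_var = logs.var(ddof=1) if n > 1 else prior_var
    # Standard precision-weighted combination of prior mean and sample mean.
    post_mean = (prior_mean / prior_var + logs.sum() / sample_var) / (
        1 / prior_var + n / sample_var
    )
    return float(np.exp(post_mean))

# A two-video channel with one breakout hit gets shrunk hard toward the prior:
print(posterior_expected_views([5_000, 900_000],
                               prior_mean=np.log(20_000), prior_var=1.0))
```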
[ETA 2] Look at channel growth over time, not just the average. Maybe fit an exponential curve to the view count and use that to predict the views on the next video. (Or combine this method with a Bayesian adjustment.)
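And a minimal sketch of the curve fit: ordinary linear regression on log-views, extrapolated one video ahead. Real view counts are far noisier, so a robust fit, or combining this with the shrinkage estimate above, would be better in practice:

```python
import numpy as np

def predict_next_views(view_history):
    """Fit exponential growth by linear regression on log-views and
    extrapolate one video ahead."""
    y = np.log(np.asarray(view_history, dtype=float))
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.exp(intercept + slope * len(y)))

print(predict_next_views([10_000, 12_000, 15_000, 19_000]))  # ~23,000
```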
Yes, lots to consider. I talked to a lot of people about how to measure impact, and yes, it’s hard. This is, AFAIK, the first public attempt at a cost-effectiveness analysis for this kind of content.
I disagree on things like log(minutes). Short-form content is consumed with incredibly low engagement and gets scrolled through extremely passively, for hours at a time, unlike long-form content.
In terms of preaching to the converted, I think it takes a lot of engagement time to get people to take action. It seems to often take people 1-3 years of engagement with EA content to make significant career shifts, etc.
I’m measuring cost-effectiveness thus far. Some people may overperform expectations going forward, and some may underperform.
As for measuring channel growth, I expect lots of people to make cases for why their channel will grow more than others’, and this would introduce a ton of bias. The fairest thing to do is to measure past impact. More importantly, when we compare to other areas where we use CEAs, we measure the impact that has already happened; we don’t just speculate (even with good assumptions) about the impact that will occur in the future. We make small grants/attempts, and the ones that work, we scale up.