Any update on when “early 2023” will be?
Absolutely. A few comments:
Stated preference (uplifting documentaries) and revealed preference (reality TV crime shows) are different.
Getting people to state their preferences is quite difficult: only a small fraction of Netflix users give star ratings or thumb ratings. In general, users like using software to achieve their immediate goals. It’s tough to get them to invest time and skill into making it better in the future. For most people, each app is a tiny tiny slice of their day and they don’t want to do work to optimize anything. Customization and user controls often fail because no one uses them.
If serving recommendations according to stated preferences causes people to unsubscribe more, how should we interpret that? That their true preference is to not be subscribed to Netflix? It’s unclear.
In any case, Netflix is financially incentivized to optimize for subscriptions, not viewing. So if people pay for what they want, then Netflix ought to be aligned with what they want. In theory, Netflix is only misaligned with what people want if people’s own spending is misaligned with what they want.
I work at Netflix on the recommender. It’s interesting to read this abstract article about something that’s very concrete for me.
For example, the article asks, “The key question any model of the problem needs to answer is: why aren’t recommender systems already aligned?”
Despite working on a recommender system, I genuinely don’t know what this means. How does one go about measuring how much a recommender is aligned with user interests? Like, I guarantee 100% that people would rather have the recommendations given by Netflix and YouTube than a uniform random distribution. So in that basic sense, I think we are already aligned. It’s really not obvious to me that Netflix and YouTube are doing anything wrong. I’m not really sure how to go about measuring alignment, and without a measurement, I don’t know how to tell whether we’re making progress toward fixing it.
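To make the measurement question concrete, here is a minimal sketch of the one comparison I can define cleanly: score a recommendation policy against a uniform-random baseline on some chosen satisfaction signal. Everything in it (`recommend`, `satisfaction`, `production_model`, `catalog`) is a hypothetical placeholder, not anything Netflix actually runs, and the hard part is choosing the satisfaction signal in the first place.

```python
import random

def average_satisfaction(recommend, users, satisfaction, k=10):
    """Mean per-user satisfaction over the top-k items from a policy.

    `recommend(user, k)` returns k items; `satisfaction(user, item)` is
    whatever ground-truth signal you decide to trust (ratings, completion
    rate, survey answers, ...). Both are hypothetical stand-ins.
    """
    total = 0.0
    for user in users:
        items = recommend(user, k)
        total += sum(satisfaction(user, item) for item in items) / k
    return total / len(users)

def uniform_random_policy(catalog):
    """Baseline: k titles drawn uniformly at random from the catalog."""
    def recommend(user, k):
        return random.sample(catalog, k)
    return recommend

# "Alignment" is then only defined relative to the chosen signal:
# lift = (average_satisfaction(production_model, users, satisfaction)
#         - average_satisfaction(uniform_random_policy(catalog), users, satisfaction))
```

By that test almost any trained model wins easily, which is why I say the basic sense of alignment is already satisfied; the open question is what better reference policy or satisfaction signal would actually capture “user interests.”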
My two cents.
Excellent post.
I want to highlight something that I missed on the first read but nagged me on the second read.
You define transformative AGI as:
1. Gross world product (GWP) exceeds 130% of its previous yearly peak value
2. World primary energy consumption exceeds 130% of its previous yearly peak value
3. Fewer than one billion biological humans remain alive on Earth
You predict when transformative AGI will arrive by building a model that predicts when we’ll have enough compute to train an AGI.
But I feel like there’s a giant missing link—what are the odds that training an AGI causes 1, 2, or 3?
It feels not only plausible but quite likely to me that the first AGI will be very expensive and very uneven (superhuman in some respects and subhuman in others). An expensive, uneven AGI may take years or decades to self-improve to the point that GWP or energy consumption rises by 30% in a year.
It feels like you are implicitly ascribing 100% probability to this key step.
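To spell out that step with toy numbers (purely illustrative, not your estimates or mine): whatever the compute model says, it only bounds the first factor in P(transformative AGI by year Y) = P(an AGI is trained by Y) × P(it causes 1, 2, or 3 | it is trained by Y).

```python
# Toy numbers for illustration only; not the post's estimates or mine.
p_agi_trained = 0.4          # the part a compute-availability model can speak to
p_transform_given_agi = 1.0  # the step that seems implicitly treated as certain

print(p_agi_trained * p_transform_given_agi)  # 0.4

# If the first AGI is expensive and uneven, that conditional could be much lower,
# and the headline probability shrinks in proportion:
for p_cond in (1.0, 0.5, 0.1):
    print(p_cond, round(p_agi_trained * p_cond, 2))  # 0.4, 0.2, 0.04
```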
This is one reason (among others) that I think your probabilities are wildly high. Looking forward to setting up our bet. :)