This is fantastic. I don't have high confidence in the numbers you've put forth (for example, it's hard to compare QALYs from "more entertainment"/"better articles" to QALYs from "no malaria"), but I love the way this post was put together:
Lots of citations (to a stunning variety of sources; it feels like you've been thinking about these questions for a long time)
Careful analysis of what could go wrong
Willingness to use numbers, even if they are made up
Even putting aside flow-through effects on alignment, I think that "microtime" is important. Even saving people a few minutes of wasted time each day can be hugely beneficial at scale (especially if that time is replaced with something that fits a user's extrapolated volition). Our lives are made up of the way we spend each hour, and we could certainly be having better hours.
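To put a hedged number on "hugely beneficial at scale": the sketch below just multiplies an assumed per-user saving by an assumed user base; both inputs are hypothetical figures chosen for illustration, not numbers from the post.

```python
# Back-of-envelope only; both inputs are hypothetical, not the post's figures.
minutes_saved_per_day = 5           # assumed time saved per user per day
users = 100_000_000                 # assumed user base of a large platform
hours_per_year = minutes_saved_per_day * users * 365 / 60
print(f"~{hours_per_year:,.0f} hours redirected per year")  # ~3 billion hours
```

Even with much smaller inputs the aggregate stays large, which is the sense in which microtime adds up.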
In a world where this is not a promising cause area, even if the risks turn out not to be a concern, I think the most likely cause of "failure" would be something like regulatory capture, where people enter large tech companies hoping to better their algorithms but get swept up by existing incentives. I'd guess that many people who already work at FANG companies entered with the goal of improving users' lives and slowly drifted away, or came to believe that the metrics companies now use are in fact improving users' lives to a "sufficient" extent.
(If you spend all day at Netflix, and come to think of TV as a golden wonderland of possibility, why not work to get people spending as much time as possible watching TV?)
It's possible that these employees still generally feel bad about optimizing for bad metrics, but however they feel, it hasn't yet added up to deliberate anti-addictive features at any of the biggest tech companies (as far as I'm aware). It would be nice to see evidence that people have successfully advocated for these changes from the inside (Mark Zuckerberg has recently made some noises about trying to improve the situation on Facebook, but I'm not sure how much of that is due to pressure from inside Facebook vs. external pressure or his own feelings).
The first two links are identical; was that your intention?
Recommender systems often have facilities for deep customization (for instance, it's possible to tell the Facebook News Feed to rank specific friends' posts higher than others), but the cognitive overhead of creating and managing those preferences is high enough that almost nobody uses them.
In addition to work on improved automated recommendation systems, it seems like there should be valuable projects focused on getting more people to exercise their existing control over present-day systems (e.g. an app that gamifies changing your newsfeed settings, or apps that let you more easily set limits on how you'll spend your time online).
Examples:
FB Purity claims to have over 450,000 users; even if only 100,000 are currently blocking their own newsfeeds, that probably represents ~10,000,000 hours each year spent somewhere other than Facebook (see the rough check after this list).
StayFocusd has saved me, personally, thousands of hours on things my extrapolated volition would have regretted.
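As a rough check on that ~10,000,000-hour figure, the sketch below works backward to what it implies per user; the 100,000-user count is taken from the FB Purity item above, and everything else is simple arithmetic on that assumption.

```python
# Rough check of the ~10,000,000 hours/year figure for FB Purity users above.
# The 100,000-user count is taken from the comment; derived values are approximate.
users_blocking_newsfeed = 100_000
total_hours_per_year = 10_000_000
hours_per_user_per_year = total_hours_per_year / users_blocking_newsfeed   # 100 hours
minutes_per_user_per_day = hours_per_user_per_year * 60 / 365              # ~16 minutes
print(f"~{minutes_per_user_per_day:.0f} minutes per user per day")
```

Roughly 16 minutes of avoided newsfeed time per blocking user per day seems within the plausible range of typical Facebook usage, so the headline number, while rough, does not look obviously inflated.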
Thanks for the catch; fixed.
I think you're underrating the risk of capabilities acceleration.