Good post!
I have a hunch that a big part of the issue here is institutional momentum around maximizing key performance indicators (KPIs) such as daily active users and time spent on platform. It may be important to persuade decision-makers that although optimizing these metrics helps the bottom line in the short run, optimizing them to the exclusion of all else hurts the brand in the long run, increases the probability of regulatory action or negative “black swan” events, and risks users abandoning the product. (My understanding is that the longer a culture is exposed to alcohol, the more it develops “cultural antibodies” that mitigate alcohol’s harms. Decision-makers should similarly worry that if users don’t endorse the time they spend with the product, the platform’s long-term viability suffers; imagine the formation of a group like Alcoholics Anonymous, but for social media.) I think it would be good if decision-makers also optimized for KPIs like whether users think the product benefits their lives personally, whether it makes society healthier or better off, and so on. Or even more specific metrics, like whether users who engage in disagreements tend to reach a consensus versus walking away even angrier than when they started.
With regard to risks, here are some thoughts of mine on scenarios in which users self-select in their use of these tools. Though I suspect what I describe in that comment has already happened.