Notes on cultivating good incentives

Which behaviours would I be praised for if the praiser were kinder & smarter than me, and had access to all the same internal & external information?

  • Always seek to impress the internal simulation you call upon to answer the above question. If your motivations are sensitive to praise from people who know nothing about you, you are likely to ignore your own information-rich judgments just to impress them.

  • Always seek to praise others for effective self-praise. You have far less information about them than they have about themselves. So if you only appreciate object-level behaviours you can verify, they have an incentive to ignore their own information-rich judgments in order to impress you.

  • Only upvote posts you personally benefited from.[1] If you instead upvote or downvote based on what you think other people should or shouldn’t read,[2] this pollutes the voting economy with noisy judgments.

  • Worst of all, however, is when you like things based only on what you can share, and share things based on what you think others will like. If enough people behave this way, it leads to a Keynesian beauty contest in which people have an incentive to share what they predict others predict others will share.

    • While information cascades[3] may start out as Markovian chains of epistemically rational decisions, they are massively amplified as soon as they cross the threshold into speculative bubbles (a toy simulation of how a cascade locks in is sketched after this list).

  • You needn’t like or understand everything in a post for it to be of net benefit to you. You needn’t even read the whole thing. A piece of writing is a net benefit to you if the value you got from it outweighed the time it cost you to read it (one way to write this as an inequality is sketched after this list). Exposure to bad ideas very rarely harms you, so you should use positive selection[4] when praising/upvoting contributions or posts.

    • You may find the claim above to be wrong and/or cringe, but I doubt it did you any harm. If you can easily discern that it’s a bad idea, have some faith that other readers can discern it too.[2]

    • And if they do end up convinced by it, consider that once they find out how it’s wrong, they will have learned both 1) that they’re susceptible to bad arguments of this type, and 2) how to patch this particular weakness in their epistemic filters.

    • Temporarily having a bad model is better for progress than having no model at all.
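
As a rough formalization of the net-benefit claim above (the symbols are mine, not part of the post): upvote a post whenever

$$V_{\text{gained}} \;-\; c \cdot t_{\text{read}} \;>\; 0,$$

where $V_{\text{gained}}$ is the value you personally got out of it, $t_{\text{read}}$ is the time you spent reading, and $c$ is the opportunity cost per unit of your reading time. Positive selection then means the test is a single inequality over the whole post, not a requirement that every individual idea in it clears the bar.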
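
On the information-cascade point: the cited laboratory result builds on the standard sequential-choice model of cascades, in which Bayesian agents with noisy private signals observe everyone before them and rationally copy the crowd once the public record outweighs any single private signal. The sketch below is a minimal toy version of that model, not something from the post; the parameters (100 agents, signals correct with probability 2/3) are illustrative assumptions.

```python
import random

def simulate_cascade(n_agents=100, p_correct=2/3, true_state=1, seed=None):
    """Toy sequential-choice cascade: each agent sees all earlier public
    choices plus one private signal that matches true_state (+1 or -1)
    with probability p_correct, and picks whichever state looks more
    likely (ties broken by following their own signal)."""
    rng = random.Random(seed)
    choices = []
    evidence = 0  # net signal count inferable from past (non-cascade) choices
    for _ in range(n_agents):
        signal = true_state if rng.random() < p_correct else -true_state
        if abs(evidence) >= 2:
            # Cascade: the public record outweighs any single private signal,
            # so the rational move is to copy the majority; the choice
            # therefore reveals nothing about this agent's own signal.
            choice = 1 if evidence > 0 else -1
        else:
            total = evidence + signal
            choice = signal if total == 0 else (1 if total > 0 else -1)
            evidence += signal  # outside a cascade, the choice reveals the signal
        choices.append(choice)
    return choices

if __name__ == "__main__":
    runs, wrong = 10_000, 0
    for i in range(runs):
        choices = simulate_cascade(seed=i)
        # The crowd almost always locks in within a few agents; check whether
        # the later majority ended up on the wrong side of true_state = +1.
        if sum(choices[10:]) < 0:
            wrong += 1
    print(f"Share of runs locked into a wrong cascade: {wrong / runs:.1%}")
```

Every individual choice in this simulation is rational given what that agent can see, yet a couple of early misleading signals routinely lock the whole sequence into the wrong answer, and later, mostly correct, signals never get expressed.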

  1. ^

    > “An independent impression is a belief formed through a process that excludes epistemic deference to the beliefs of others. Independent impressions may be contrasted with all-things-considered beliefs, which are beliefs that do allow for such deference.”

  2. ^

    > “The third-person effect hypothesis predicts that people tend to perceive that mass media messages have a greater effect on others than on themselves, based on personal biases.”

  3. ^

    > “Information cascades develop consistently in a laboratory situation in which other incentives to go along with the crowd are minimized. Some decision sequences result in (...) initial misrepresentative signals start a chain of incorrect [but individually rational] decisions that is not broken by more representative signals received later.”
    Information Cascades in the Laboratory

  4. ^

    > “Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out.”
