Around prediction infrastructure and information, I find that a lot of smart people make some weird (to me) claims. Like:
If a prediction didn’t clearly change a specific major decision, it was worthless.
Politicians don’t pay attention to prediction applications / related sources, so these sources are useless.
There are definitely ways to steelman these, but on their face I think they represent oversimplified models of how information leads to change.
I’ll introduce a different model, which I think is much more sensible:
Whenever some party advocates for belief P, they apply some pressure toward that belief on those who notice the advocacy.
This pressure trickles down, often into a web of resulting beliefs that are difficult to trace.
People both decide what decisions to consider, and what choices to make, based on their beliefs.
For any agent holding an important belief P, we should expect that belief to have been influenced by the beliefs of those they pay attention to. One can model this with social networks and graphs.
Generally, introducing more correct beliefs, and lending them more support in the parts of the network where important decisions happen, should make those decisions better. This often isn't straightforward, but I think we can make decent, simple graphical models of how such beliefs propagate.
Decisions aren’t typically made all-at-once. Often they’re very messy. Beliefs are formed over time, and people somewhat haphazardly decide what questions to pay attention to or what decisions to even consider. Information changes the decisions one chooses to make, not just the outcomes of those decisions.
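To make the "pressure trickles down a network" idea concrete, here is a minimal sketch. All names and numbers are hypothetical: beliefs are scalars in [0, 1], "attention" is a weighted directed graph, and each round every agent nudges its belief toward a weighted average of the agents it pays attention to.

```python
def propagate(beliefs, attention, rounds=10, stubbornness=0.5):
    """Iteratively apply belief pressure over an attention graph.

    beliefs:   {agent: float} -- each agent's current credence in P
    attention: {agent: {source: weight}} -- who each agent listens to
    """
    for _ in range(rounds):
        updated = {}
        for agent, current in beliefs.items():
            sources = attention.get(agent, {})
            total = sum(sources.values())
            if total == 0:
                # Agents who listen to no one keep their belief fixed.
                updated[agent] = current
                continue
            pulled = sum(w * beliefs[s] for s, w in sources.items()) / total
            # Blend the old belief with the pressure from attended sources.
            updated[agent] = stubbornness * current + (1 - stubbornness) * pulled
        beliefs = updated
    return beliefs

# A forecaster advocates strongly for P; a decision-maker never reads the
# forecaster directly, only an aide who does -- yet pressure still reaches them.
beliefs = {"forecaster": 1.0, "aide": 0.5, "decision-maker": 0.5}
attention = {
    "aide": {"forecaster": 1.0},
    "decision-maker": {"aide": 1.0},
}
final = propagate(beliefs, attention)
```

The point of the toy example is that the decision-maker's belief moves even though they never attend to the original advocate, which is why "politicians don't read prediction apps" doesn't imply the apps exert no pressure on them.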
For example—take accounting. A business leader might look at their monthly figures without any specific decisions in mind. But if they see something that surprises them, they might investigate further, and eventually change something important.
This isn’t at all to say “all information sources are equally useful” or “we can’t say anything about what information is valuable”.
But rather, more like,
“(Directionally-correct) information is useful on a spectrum: the more pressure it can exert on the decision-relevant beliefs of people with power, the better.”