Hmm, I feel like you may be framing things quite differently to how I would, or something. My initial reaction to your comment is something like:
It seems useful to conceptually separate data collection from data processing, where by the latter I mean using that data to arrive at probability estimates and decisions.
I think Bayesianism (in the sense of using Bayes' theorem and a Bayesian interpretation of probability) and "math and technical patches" might tend to be part of the data processing, not the data collection. (Though they could also guide what data to look for. And this is just a rough conceptual divide.)
When Ozzie wrote about going with "an approach that in-expectation does a decent job at approximating the mathematical approach", he was specifically referring to dealing with the optimizer's curse. I'd consider this part of data processing.
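(For readers unfamiliar with the optimizer's curse, here's a minimal simulation sketch, with made-up numbers, of the basic effect: picking the option with the highest noisy estimate systematically overstates the winner's value, even when each individual estimate is unbiased.)

```python
import random

# Toy illustration (hypothetical numbers) of the optimizer's curse: even if
# every individual estimate is unbiased, the estimate of whichever option
# *wins* the comparison is biased upward.

random.seed(0)
true_value = 1.0            # suppose all 20 options are actually equally good
n_options, noise_sd = 20, 0.5
trials = 10_000

total_gap = 0.0
for _ in range(trials):
    estimates = [random.gauss(true_value, noise_sd) for _ in range(n_options)]
    total_gap += max(estimates) - true_value  # the chosen option's overestimate

print(f"Average overestimate of the chosen option: {total_gap / trials:.2f}")
# Prints roughly 0.93, even though each estimate is individually unbiased.
```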
Meanwhile, my intuitions (i.e., gut reactions) and what experts say are data. Attending to them is data collection, and then we have to decide how to integrate that with other things to arrive at probability estimates and decisions.
I don't think we should see ourselves as deciding between either Bayesianism and "math and technical patches" or paying attention to my intuitions and domain experts. You can feed all sorts of evidence into Bayes' theorem. I doubt any EA would argue we should form conclusions from "Bayesianism and math alone", without using any data from the world (including even their intuitive sense of what numbers to plug in, or whether people they share their findings with seem skeptical). I'm not even sure what that'd look like.
And I think my intuitions, or what domain experts say, can very easily be made sense of as valid data within a Bayesian framework. Generally, my intuitions and experts are more likely to indicate X is true in worlds where X is true than in worlds where it's not. This effect is stronger when the conditions for intuitive expertise are met, when experts' incentives seem well aligned with seeking and sharing truth, etc. It's weaker when there seem to be (or might be) strong biases or misaligned incentives at play.
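To make that concrete, here's a toy Bayes update (all numbers made up for illustration) showing how expert testimony shifts a probability estimate more when the expert is much likelier to say X in worlds where X is true than in worlds where it's false:

```python
# A minimal sketch of treating expert testimony as Bayesian evidence:
# P(X | expert says X) depends on the likelihood ratio
# P(expert says X | X true) / P(expert says X | X false).

prior = 0.3                 # P(X): prior probability that X is true
p_say_given_true = 0.8      # P(expert says X | X true)  -- fairly reliable expert
p_say_given_false = 0.3     # P(expert says X | X false) -- some noise/bias

posterior = (p_say_given_true * prior) / (
    p_say_given_true * prior + p_say_given_false * (1 - prior)
)
print(f"P(X | expert says X) = {posterior:.2f}")  # ~0.53: a substantial update

# If misaligned incentives make the expert nearly as likely to say X either
# way, the same testimony barely moves the estimate:
p_say_given_false_biased = 0.7
posterior_biased = (p_say_given_true * prior) / (
    p_say_given_true * prior + p_say_given_false_biased * (1 - prior)
)
print(f"P(X | biased expert says X) = {posterior_biased:.2f}")  # ~0.33
```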
(Perhaps this is talking past you? I'm not sure I understood your argument.)
I largely agree with what you said in this comment, though I'd say the line between data collection and data processing is often blurred in real-world scenarios.
I think we are talking past each other (not in a bad faith way though!), so I want to stop myself from digging us deeper into an unproductive rabbit hole.