Seems like you’re trying to get at what I’ve at one point seen referred to as ‘multifinal means’. That keyword might help you find related material.
This is sort of tangential, but related to the idea of distinguishing between inputs and outputs when running certain decision processes. I now view both consequentialist and deontological theories as examples of what I’ve been calling perverse monisms. A perverse monism arises when there is a strong desire to collapse all the complexity in a domain into a single term. This is usually achieved via aether variables: we rearrange the model until the complexity (or uncertainty) has been shoved into a corner, either implicitly or explicitly, which makes the rest of the model look very tidy indeed.
With consequentialism we say that one should allow the inputs to vary freely while holding the outputs fixed (our idea of what the outcome should be, heuristics that evaluate outcomes, etc.). We back-propagate the appropriate inputs from the outputs. Deontology says we can’t control outputs, but we can control inputs, so we should allow outputs to vary freely while holding the inputs to some fixed ideal.
Both of these express a hope that one can avoid the nebulosity of having a full-blown confusion matrix over inputs and outputs, one that changes from problem to problem. That is to say, I have some control over which outputs to optimize for, some control over inputs, and false positives and false negatives in my beliefs about both. Actual problem solving of any complexity forward-chains from known information about inputs, back-chains from previous data about outputs, and then tries to find places where the two branching chains meet. In the process of investigating this, beliefs about the inputs or outputs may also update.
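That forward-chain/back-chain-until-they-meet picture is essentially bidirectional search. A minimal sketch, assuming a toy graph where nodes stand in for states of knowledge (all names here are illustrative, not from the original):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Forward-chain from start and back-chain from goal until the
    two frontiers meet. `graph` maps each node to its neighbours
    (assumed undirected here, so backward expansion reuses the same
    adjacency)."""
    if start == goal:
        return [start]
    fwd_parents = {start: None}   # reached by forward chaining
    bwd_parents = {goal: None}    # reached by backward chaining
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parents, other_side):
        node = frontier.popleft()
        for nxt in graph.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                if nxt in other_side:   # the two chains meet here
                    return nxt
                frontier.append(nxt)
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parents, bwd_parents)
        if meet is None:
            meet = expand(bwd_frontier, bwd_parents, fwd_parents)
        if meet is not None:
            # stitch the two half-paths together at the meeting point
            path, node = [], meet
            while node is not None:
                path.append(node)
                node = fwd_parents[node]
            path.reverse()
            node = bwd_parents[meet]
            while node is not None:
                path.append(node)
                node = bwd_parents[node]
            return path
    return None
```

The point of the structure, not the graph search itself: neither chain alone has to reach all the way across, and either frontier can be revised (beliefs updating) without throwing away the other.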
More generally, I’ve been getting a lot of mileage out of thinking of ‘philosophical positions’ as different sorts of error checks that we use on decision processes.
It’s also fun to think about this in terms of the heuristic that How to Measure Anything recommends:
Define parameters explicitly (what outputs do we think we care about, what inputs do we think we control)
Establish value of information (how much will it cost to test various assumptions)
Uncertainty analysis (narrowing confidence bounds)
Sensitivity analysis (how much does the final proxy vary as a function of changes in inputs)
It’s a non-linear heuristic, so the information gathered in any one step can cause you to go back and adjust one of the others, which involves that sort of bouncing back and forth between forward chaining and back chaining.
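The sensitivity-analysis step above can be sketched in a few lines. This is a one-at-a-time perturbation sketch, not anything from How to Measure Anything itself; `model` and the input names are illustrative assumptions:

```python
def sensitivity(model, baseline, delta=0.01):
    """One-at-a-time sensitivity: perturb each input by a relative
    `delta` and report how much the model's output moves.
    `model` is any function of a dict of named inputs; `baseline`
    holds the current best estimate for each input."""
    base_out = model(baseline)
    effects = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1 + delta)
        effects[name] = model(perturbed) - base_out
    # inputs ranked by absolute effect: these dominate the proxy,
    # so they are the ones worth spending measurement budget on
    return sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True)

# hypothetical proxy: profit as a function of three uncertain inputs
model = lambda x: x['price'] * x['units'] - x['cost']
ranked = sensitivity(model, {'price': 10.0, 'units': 100.0, 'cost': 200.0})
```

The ranking is what feeds back into the earlier steps: a low-sensitivity input with an expensive test has low value of information, so you’d loop back and re-prioritize.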