Yes, that’s an excellent reformulation of what I meant!
I think this roughly corresponds to how people do it intuitively in practice, but I doubt most people would be able to describe it in as much detail (or even be aware of it) if asked. But at least among people who read LessWrong, it’s normal to talk about “assigning utility”. The percentage of self-described EAs who think like this goes up the longer they’ve been in EA; maybe around 80% of people who attended an EAG in 2022 think this way. (Pure speculation; I haven’t even been to an EAG.)
I don’t know of anywhere it’s written up like this, but it probably exists somewhere on LW and probably in the academic literature as well. On second thought, I remembered that Arbital probably has something good on it. Here.
“Yes, that’s an excellent reformulation of what I meant!”
Great. Let’s try two things next.
First, do you think my solution could work? Do you think it’s merely inferior or is there something fully broken about it? I ask because there are few substantially different epistemologies that could work at all, so I think every one is valuable and should be investigated a lot. Maybe that point will make sense to you.
Second, I want to clarify how assigning utility works.
One model is: The utility function comes first. People can look at different outcomes and judge how much utility they have. People, in some sense, know what they want. Then, when making decisions, a major goal is trying to figure out which decisions will lead to higher-utility outcomes. For complicated decisions, it’s not obvious how to get a good outcome, so you can e.g. break the decision down and evaluate factors separately. Simplifying, you might notice a correlation where many high-utility outcomes have high scores for factor X (and you might also be able to explain why X is good), so you’d be more inclined to make a decision that you think will get a lot of X, and in factor-summing approaches you’d assign X a high weighting.
A different model is: Start with factors and then calculate utility. Utility is a downstream consequence of factors. Instead of knowing what has high utility by looking at it and judging how much you like it, or something along those lines, you have to figure out what has high utility. You do that by figuring out e.g. which factors are good and then concluding that outcomes with a lot of those factors probably have high utility.

Does one of these models fit your thinking well?
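To make the contrast concrete, here is a minimal sketch in Python (purely illustrative; the outcomes, factor scores, weights, and numbers are all made up for this example, not taken from the discussion). The first function treats per-outcome utility as known by direct judgment; the second computes utility downstream from weighted factor scores.

```python
# Illustrative only: made-up outcomes, factor scores, and weights.
judged_utility = {"outcome_a": 10.0, "outcome_b": 3.0}  # model 1: utility known by direct judgment
factor_scores = {
    "outcome_a": {"x": 0.9, "y": 0.2},
    "outcome_b": {"x": 0.1, "y": 0.8},
}
weights = {"x": 5.0, "y": 1.0}                          # model 2: made-up factor weights

# Maps each decision to the outcome we predict it leads to.
decisions = {"do_a": "outcome_a", "do_b": "outcome_b"}

def model_1_pick(decisions):
    """Model 1: pick the decision whose predicted outcome has the highest judged utility."""
    return max(decisions, key=lambda d: judged_utility[decisions[d]])

def model_2_utility(outcome):
    """Model 2: utility is computed as a weighted sum of the outcome's factor scores."""
    return sum(weights[f] * s for f, s in factor_scores[outcome].items())

def model_2_pick(decisions):
    return max(decisions, key=lambda d: model_2_utility(decisions[d]))

print(model_1_pick(decisions))  # "do_a" (direct utility judgment)
print(model_2_pick(decisions))  # "do_a" (factor-summed utility: 4.7 vs 1.3)
```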
I don’t have time to engage very well, but I would say the first model you describe fits me better. I don’t look at the world to figure out my terminal utilities (well, I do, but the world’s information is used as a tool to figure out the parts of my brain which determine what I terminally want). Instead, there’s something in my brain that determines how I tend to assign utility to outcomes, and then I can reason about the likely paths to those outcomes and make decisions. The paths that lead to outcomes I assign terminal utility to will have instrumental utility.
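A minimal sketch of the terminal/instrumental distinction as described here, assuming the standard expected-utility reading (the outcomes, probabilities, and utilities below are invented for illustration): the instrumental utility of a path is the expected terminal utility of the outcomes it tends to reach.

```python
# Illustrative only: invented outcomes, path-outcome probabilities, and utilities.
terminal_utility = {"outcome_a": 10.0, "outcome_b": 2.0}

# P(outcome | path): how likely each path is to end in each outcome.
path_outcome_probs = {
    "path_1": {"outcome_a": 0.7, "outcome_b": 0.3},
    "path_2": {"outcome_a": 0.2, "outcome_b": 0.8},
}

def instrumental_utility(path):
    """Expected terminal utility of the outcomes this path leads to."""
    return sum(p * terminal_utility[o] for o, p in path_outcome_probs[path].items())

for path in path_outcome_probs:
    print(path, instrumental_utility(path))  # path_1: 7.6, path_2: 3.6
```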
I haven’t investigated this nearly as deeply as I would like, but supposedly there are some ways of using Aristotelian logic (although I don’t know which kind) to derive probability theory and expected utility theory from more basic postulates. I would also look at whether any epistemology I’m considering is liable to be Dutch-booked, because I don’t want to be Dutch-booked.
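For readers unfamiliar with the term, here is a textbook-style sketch of what being Dutch-booked means (the specific prices are made up, not part of the conversation): if your betting prices violate the probability axioms, a bookie can offer you bets you each consider fair but which guarantee you a loss overall.

```python
# Illustrative only: made-up prices. The agent's "fair prices" for $1 bets on A
# and on not-A sum to more than 1, which violates the probability axioms.
price_A = 0.60      # agent will pay up to $0.60 for a ticket paying $1 if A is true
price_not_A = 0.60  # agent will pay up to $0.60 for a ticket paying $1 if A is false

# A bookie sells the agent both tickets at the agent's own prices.
agent_cost = price_A + price_not_A  # $1.20 paid by the agent
agent_payout = 1.00                 # exactly one ticket pays $1, whichever way A turns out

guaranteed_loss = agent_cost - agent_payout
print(f"Agent loses ${guaranteed_loss:.2f} whether A is true or false.")
```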
“I ask because there are few substantially different epistemologies that could work at all, so I think every one is valuable and should be investigated a lot.”
Agreed. Though it depends on what your strengths are and what role you wish to play in the research-and-doing community. I think it’s fine that a lot of people defer to others on the logical foundations of probability and utility, but I still think some of us should be investigating it and calling “foul!” if they’ve discovered something that needs a revolution. This could be especially usefwl for a relative “outsider”[1] to try. I doubt it’ll succeed, but the expected utility of trying seems high. :p
[1] I’m not making a strict delineation here, nor a value judgment. I just mean that if someone is motivated to try to upend some popular EA/rationalist epistemology, it might be easier for someone who hasn’t already been deeply steeped in that paradigm.
“Agreed. Though it depends on what your strengths are and what role you wish to play in the research-and-doing community. I think it’s fine that a lot of people defer to others on the logical foundations of probability and utility, but I still think some of us should be investigating it”
I agree. People can specialize in what works for them. Division of labor is reasonable.
That’s fine as long as there are some people working on the foundational research stuff and some of them are open to serious debate. I think EA has people doing that sort of research, but I’m concerned that none of them are open to debate. So if they’re mistaken, there’s no good way for anyone who knows better to get it fixed (and conversely, any would-be critic who is mistaken has no good way to receive criticism from EA and fix their own mistake).
To be fair, I don’t know of any Popperian experts who are very open to debate, either, besides me. I consider lack of willingness to debate a very large, widespread problem in the world.
I think working on that problem – poor openness to debate – might do more good than everything EA is currently doing. Better debates would e.g. improve science and could make a big difference to the replication crisis.
Another way better openness to debate would do good is: currently EA has a lot of high-effort, thoughtful arguments on topics like animal welfare, AI alignment, clean water, deworming, etc. Meanwhile, there are a bunch of charities, with a ton of money, which do significantly less effective (or even counter-productive) things and won’t listen, give counter-arguments, or debate. Currently, EA tries to guide people to donate to better charities. It’d potentially be significantly higher leverage (conditional on ~being right) to debate the flawed charities and win, so that they change to using their money in better ways. I think many EAs would be very interested in participating in those debates; the thing blocking progress here is poor societal norms about debate and error correction. I think if EA’s own norms were much better on those topics, then it’d be in a better position to call out the problem, lead by example, and push for change in ways that many observers would find rational and persuasive.