I feel like I’ve made progress on this. (Caveat with confirmation bias, Dunning-Kruger, etc.)
Areas where I feel like I’ve made progress:
Used to not think much about cluelessness when comparing cause areas; now it’s one of my main cause-comparison frameworks.
Have become more solidly longtermist, after reading some of Reasons & Persons and Nick Beckstead’s thesis.
Have gotten clearer on the fuzzies vs. utilons distinction, and now weight purchasing fuzzies much more highly than I used to. (See Giving more won’t make you happier.)
Have reduced my self-deception around fuzzies & utilons. I used to do a lot more “altruistic” stuff where my actual motivations were about fulfilling some internal narrative, even though I believed I was acting altruistically (i.e. I thought I was purchasing utilons, whereas on reflection I see that I was purchasing fuzzies). I do this less now.
Now believe that it’s very important to pay good salaries to people who have developed a track record of doing high-quality, altruistic work. (I used to think this wasn’t a leveraged use of funds, because these people would probably continue doing their good work in the counterfactual. My former view didn’t account for incentive effects.)
Have become less confident in how we construct life satisfaction metrics. (I was naïvely overconfident before.)
Now believe that training up one’s attention & focus is super important; I was previously treating those as fixed quantities / biological endowments.
Some areas that seem important, where I don’t feel like I’ve made much progress yet:
Whether to focus more on satisfying preferences or provisioning (hedonic) utility.
Stuff about consciousness & where to draw the line for extending moral patienthood. (Rabbit hole 1, rabbit hole 2)
What being a physicalist / monist implies about morality.
What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.
Whether to be a moral realist or antirealist. (And if antirealist, how to reconcile that with some notion of objectivity such that it’s not just “whatever I want” / “everything is permitted,” which seem to impoverish what we mean by being moral.)
How contemplative Eastern practices (mainly Soto Zen, Tibetan, and Theravada practices) mesh with Western analytic frameworks.
Really zoomed-out questions like “Why is there something rather than nothing?”
What complexity theory implies about effective action.
Wow, thanks for the great in-depth reply!
> now weight purchasing fuzzies much more highly than I used to.
Do you mean charitable fuzzies specifically? What kinds of fuzzies do you purchase more of? Do you think this generalizes to other EAs?
> What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.
Once upon a time, I read a Douglas Hofstadter book that convinced me that the answer was “nothing” (basically because determinism operates at the level of basic physics, while morality / your perception of having free will operates about a gazillion levels of abstraction higher, such that applying the model “deterministic” to your own behavior is kind of like saying that no person is more than 7 years old because that’s the point where all the cells in their body get replaced).
I was in high school at the time, though, so I don’t know whether it would have the same effect on me, or on you, today.
Makes sense… this line of reasoning is part of why it feels like an open question for me.
On the other side: I feel like if, at root, I’m a composite of deterministic systems, then the concept of being morally obligated to do things loses force. (An example of how this sort of thing could inform views about morality.)
Yeah, I’ve updated towards focusing more on doing things that are helpful to people around me & in the communities I operate in.
This is motivated in part by complexity & cluelessness considerations, and in part by feeling good about helping my friends, family, and community.
I think doing stuff like this is much more in the direction of purchasing fuzzies, though it has a utilon component.
Also I’m reminded of how Stephanie Wykstra (GiveWell alum) started donating to bail reform.
>fuzzies
lmao