Has the moral uncertainty inherent in your work influenced your day-to-day decision-making or personal philosophy?
I think I’ve become more accepting of cause areas that I wasn’t initially inclined toward (particularly various longtermist ones) and also more suspicious of dogmatism of all kinds. In developing and using the tools, it became clear that there were compelling moral reasons in favor of almost any course of action, and that slight shifts in my beliefs about risk aversion, moral weights, aggregation methods, etc. could lead me to very different conclusions. That inclines me toward very significant diversification across cause areas.
I share your inclination toward significant diversification. However, I find myself grappling with the question of whether there should be specific limits on this diversification. For instance, Open Philanthropy’s approach seems to be “we diversify amongst worldviews we find plausible,” but it’s not clear to me what makes a worldview plausible. How seriously should we consider, for example, Nietzscheanism?
After working on WIT, I’ve grown a lot more comfortable producing provisional answers to deep questions. In similar academic work, there are strong incentives to answer questions only in ways that are fully defensible: if some other way of going about it gives a different result, you need to explain why your way is better. For giant, nebulous questions, this means progress toward a solution will be very slow. Since these questions can be very important, it is better to come up with some imperfect answers than to work only on simpler problems. WIT tries to tackle big, important, nebulous problems, and we sometimes have to make questionable assumptions to do so. The longer I’ve spent here, the more worthwhile our approach feels to me.
Excellent question, Ian! At a high level, I’d say that moral uncertainty has made me much more inclined to care about having an overlapping consensus of reasons for any important decision. Put another way, I want a diverse set of considerations to point in the same direction before I’m inclined to make a big change. That’s how I got into animal work in the first place: it’s good for the animals, good for human health, good for long-term food security, good for the environment, etc. There are probably lots of other examples too, but that’s the first one that comes to mind!