Thanks for engaging. While I don’t expect this to sound convincing or to fully lay out the case for anything, I’ll provide a brief partial sketch of some of my tentative views, in order to offer at least a degree of transparency around my reasoning—and I’m unlikely to expand on this much further here, as I think these views take a lot of time and intensive discussion to transmit, and I have other things I should be working on.
First, I think you’re implicitly assuming utilitarianism in my decisionmaking. I approach my charitable giving from more of a contractualist and rights-based view, in which the deontological requirement to impartially benefit others is only one aspect of my decisionmaking. This goes pretty deep into fundamental views and arguments that I think will be hard to explain quickly, or at all.
Second, my initial commitment to charity, the one that led me to commit to giving 10% of my income, was to benefit the poor impartially, as one of several goals I had—embracing what Richard Chappell has called a beneficentist view. I see myself as exercising a degree of stewardship over the money I have allocated to charity, which is given on behalf of others. Given that commitment, to the extent that I have goals which differ from benefiting the poor, there is a very, very high bar for me to abrogate that commitment and redirect that money elsewhere. At the very least, having come to a personal conclusion that I care about the future and view existential risk as a priority does not do enough to override that commitment.
As an aside, I’ll note that it is rational for those with fewer opportunities and less access to capital to make shorter-term decisions; my prioritization of my children and grandchildren is in large part possible because I’m comparatively rich. And so my refusal to reallocate money committed to others has a lot to do with respecting preferences, even when I think they are “mistaken”—because the bar for overriding others’ views in choosing how to help them should also be very, very high. That means I would be happy to defer to a consensus of those impacted if they preferred a focus on existential risk reduction. It seems clear they currently do not, even if that is due to a lack of knowledge.
Third, as an individual, I am not a utilitarian, and I don’t think my single goal is impartial welfare improvement—it is instead to lead a fulfilling life, and to contribute to my family, my community, and the world. As laid out in the piece, I think each of these is best prioritized individually. (This is slightly related to factoring in motivations, but is distinct.) If asked to make decisions on behalf of a community or larger group, I have a deontological responsibility to them that leads to something like utilitarianism over the people implicated. And when pursuing impartial utilitarian goals, as one of the things I prioritize, I largely defer to the consensus among those who embrace that goal.
When asked to make decisions for the broader world, these views lead to something very, very like impartial utilitarianism—so I think it’s correct for organizations with the goal of providing impartial benefit, including the one I currently run, to embrace that view as a matter of their goals, even if not everyone who would be affected agrees with the view, and even if those goals do not match my own. And when acting in that capacity, I am fulfilling my deontological duty to be an honest representative, rather than a utilitarian duty to impartially benefit the world—though as a representative of the organization, those should almost exactly match.