Probably most controversially, while I view existential risk reduction work as tremendously important, I don’t donate any of the 10% of my income dedicated to effective charity to support this work. (I do view it as a critical global priority, which is why the vast majority of my time and effort are spent on it!) Principally, this is because I view the cause area not as a charitable endeavor but as rational self-interest for myself and my family, one that happens to have obvious benefits for the broader world. That does not make it less important, but it does, in my idiosyncratic view, make it less obviously charity in the sense that I have committed to giving.
This is quite surprising to me. It sounds to me like you either:
are not convinced of the impartial altruistic case for spending on x-risk reduction,
are trying to factor motivations into your giving (e.g. penalising things you are incentivised to care about for non-altruistic reasons—maybe this is important to your definition of charitable?), or
don’t think x-risk spending is cost-effective on the current margin (but your hours are).
Do any of those sound right to you?
Thanks for engaging. Although I don’t expect this to sound convincing or to fully lay out the case for anything, I’ll provide a brief, partial sketch of some of my tentative views in order to give at least a degree of transparency around my reasoning. I’m unlikely to expand on this much further here, as I think these views take a lot of time and intensive discussion to convey, and I have other things I should be working on.
First, I think you’re accidentally assuming utilitarianism in my decisionmaking. I view my charitable giving from more of a contractualist and rights view, where the deontological requirement to impartially benefit others is only one aspect of my decisionmaking. This goes pretty deep into fundamental views and arguments which I think are going to be hard to explain quickly, or at all.
Second, my initial commitment to charity, the one that led me to commit to giving 10% of my income, was to benefit the poor impartially, as one of several goals I had—embracing what Richard Chappell has called beneficentrism. I see the money I am putting aside as held in a kind of stewardship: money allocated to charity, given on behalf of others. Given that commitment, to the extent that I have goals which differ from benefiting the poor, there is a very, very high bar for me to abrogate that commitment and redirect that money to other things. At the very least, having come to a personal conclusion that I care about the future and view existential risk as a priority does not do enough to override that commitment.
As an aside, I’ll note that it is rational for those with fewer opportunities and less access to capital to make shorter-term decisions; I can prioritize my children and grandchildren in large part because I’m comparatively rich. And so my refusal to reallocate money committed to others has a lot to do with respecting preferences, even when I think they are “mistaken”—because the bar for overriding others’ views in choosing how to help them should also be very, very high. That means I would be happy to defer to a consensus of those impacted if they preferred a focus on existential risk reduction. It seems clear they currently do not, even if that is due to a lack of knowledge.
Third, as an individual, I am not a utilitarian, and I don’t think my single goal is impartial welfare improvement—it is instead to lead a fulfilling life, and to contribute to my family, my community, and the world. As laid out in the piece, I think each of these is best prioritized individually. (This is slightly related to factoring in motivations, but is distinct.) If asked to make decisions on behalf of a community or larger group, I have a deontological responsibility to them that leads to something like utilitarianism over the people who are implicated. And when pursuing impartial utilitarian goals, as one of the things I prioritize, I largely defer to the consensus among those who embrace that goal.
When asked to make decisions for the broader world, these views lead to something very, very close to impartial utilitarianism—so I think it is correct for organizations with the goal of providing impartial benefit, including the one I currently run, to embrace that view as a matter of their goals, even if not everyone who would be affected agrees with the view, and even if those goals do not match my own. And when acting in that capacity, I am fulfilling my deontological duty to be an honest representative, rather than a utilitarian duty to impartially benefit the world—though as a representative of the organization, those should almost exactly match.