Several of these might be summed up under the heading “high risk.” There is a notion that this is exactly what philanthropy (as opposed to governments) ought to be doing.
One area I think hits many of these: global income inequality.
I don’t blame governments for not pursuing such things. I’ve never thought of philanthropy, or of how others think of philanthropy, as being about pursuing high-risk altruism. I’ve always thought of philanthropy as wealthy people with big hearts trying to help people in a way that tugs at their heartstrings, patronizing something they’re passionate about, such as research to cure a particular disease or works of fine art they enjoy, or giving to signal their magnanimity, i.e., giving for the sake of conspicuity.
How common do you think this notion is that philanthropy ought to be pursuing high-risk altruism? Effective charity is by its very nature more risk-averse than other charity. However, some within effective altruism are more risk-averse than others. Existential risk reduction, such as funding the Machine Intelligence Research Institute (MIRI), is a case in point: it is so difficult to tell whether MIRI’s research will ultimately lead to safety architectures for A.I. that make it through all the bottlenecks of actually being implemented. Why bother funding something which seems like it could have a low likelihood of succeeding, when you don’t even know what to assess to improve your estimate of its success? This is how I feel about MIRI.
The only thing that updates my opinion of MIRI’s potential success is that other effective altruists who seem to be correct about many things also believe MIRI has a decent chance of succeeding at its mission. That is, outsiders to MIRI who favor other cause areas in the first place being bullish on MIRI indicates to me that they’re perceiving something I’m not, and I’m humble enough to accept that just because I don’t understand how the case for MIRI works doesn’t mean it can’t work. Of course, this is just evidence via informational social influence. I don’t know how to rate that relative to other evidence, which I expect is stronger but which I don’t know how to assess either, so my updates on MIRI’s proposed efficacy typically round down to zero. Really, such updates are only sufficient to justify spending further time investigating MIRI and the field of A.I. risk, which is what the Open Philanthropy Project (Open Phil) is doing now.
With other risk reduction, such as climate change, it’s also a high-risk bet in the sense that funding one climate change intervention forecloses funding any other intervention with the same money, and it seems impossible to tell which conventional climate change intervention is or will be the most effective. Effective altruism is willing to take high-risk bets when the expected value is sufficiently great and positive. However, there are a couple of ways in which one might doubt expected value calculations as a sole tool, all else being equal, for evaluating effectiveness.
The first is to doubt, for any given expected value calculation, whether the factors selected are sufficient and whether the estimated values assigned to each factor are well-calibrated. I myself believe this is a healthy skepticism to take towards any stand-alone expected value calculation. The second way to approach an expected value calculation with skepticism is to doubt that expected value calculations in the first place, even if as meticulously constructed as possible, would alone be sufficient even in theory to bet on a high-risk intervention. This seems an attitude more common to GiveWell and Charity Science. Their rationale for this is, I believe, laid out in a blog post called “Why we can’t take expected value estimates literally (even when they’re unbiased)”. I haven’t read it.
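To make that first kind of skepticism concrete, here is a minimal sketch of a stand-alone expected value calculation, with all numbers invented for illustration. Notice how a disagreement over a single input swings the answer by orders of magnitude:

```python
# A minimal sketch of a stand-alone expected value calculation for a
# hypothetical high-risk intervention. All numbers are invented for
# illustration, not estimates for any real organization.

def expected_value(p_success: float, value_if_success: float, cost: float) -> float:
    """Naive expected value: probability-weighted payoff minus cost."""
    return p_success * value_if_success - cost

# The same intervention under three calibrations of p_success.
# A 100x disagreement about one input produces a wild swing in the
# answer, which is the first reason to distrust any single calculation.
for p in (0.001, 0.01, 0.1):
    ev = expected_value(p_success=p, value_if_success=1e9, cost=1e6)
    print(f"p_success={p:>5}: EV = {ev:,.0f}")
```

Nothing inside the calculation itself tells you which calibration of p_success to trust.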
I believe that everything ultimately would come down to idealized expected value calculations. However, we can’t have ideal expected value calculations. There are too many factors in any expected value calculation, especially for more specific interventions, for anyone to ever capture them all, and we don’t have enough ability to gain information about those factors to assign them reliable or sufficiently precise values. For example, GiveWell, in assessing the effectiveness of a charity, will take into account the competence and personal fit of the team working at a given organization. I think that’s a level of sensitivity that would definitely be a factor in an idealized expected value calculation, but one which I doubt is taken into account in the expected value calculations effective altruists actually use.
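As a toy illustration of what omitting such a factor does to the estimate, here is a sketch with a hypothetical team_fit multiplier standing in for competence and personal fit (again, all numbers invented):

```python
# A toy illustration of the omitted-factor problem. "team_fit" is a
# hypothetical multiplier for the competence and personal fit of the
# team, the kind of consideration GiveWell weighs qualitatively but
# which rarely shows up in an explicit EV calculation.

def ev_naive(p_success, value_if_success):
    # Only the factors someone actually wrote down.
    return p_success * value_if_success

def ev_adjusted(p_success, value_if_success, team_fit):
    # team_fit scales the chance the plan is executed well:
    # 1.0 for an ideally suited team, 0.3 for a poorly suited one.
    return p_success * team_fit * value_if_success

print(f"{ev_naive(0.01, 1e9):,.0f}")                   # 10,000,000
print(f"{ev_adjusted(0.01, 1e9, team_fit=0.3):,.0f}")  # 3,000,000
```

A single factor left out of the calculation changes the answer more than threefold, and there are indefinitely many such factors.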
In Doing Good Better, Will MacAskill writes about how he and his team at 80,000 Hours (80k) coached one mentee about how expected value calculations, once personal fit and all the rest were taken into account, were the dominant factor in her decision to pursue a career in politics. She was in a reference class of PPE graduates from Oxford, who are disproportionately represented in the U.K. Parliament, and seemed otherwise competent enough to be an apt politician. Further, effective altruism as a whole follows the field of economics in treating high confidence across the whole profession that certain policies are improvements over the status quo as a sufficient indicator that these would make better policies. So, for an 80k mentee to justify pursuing a career in politics, for which we already have so much good information for estimating the expected value, as long as the candidate in question stands a good chance of becoming an MP and can vote in a way that will increase the likely implementation of very effective but unpopular or unnoticed policy initiatives, her personal characteristics don’t matter as much.
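Here is a rough sketch of that reference-class arithmetic; p_mp and impact_if_mp are invented placeholders, not MacAskill’s or 80k’s actual figures:

```python
# A rough sketch of the reference-class reasoning above, with invented
# numbers. p_mp is a hypothetical base rate at which members of the
# candidate's reference class (Oxford PPE graduates) reach Parliament;
# impact_if_mp is a hypothetical value for a career of votes nudging
# effective but unpopular policies through.

p_mp = 1 / 30              # hypothetical base rate for the reference class
impact_if_mp = 10_000_000  # hypothetical payoff of a successful career

career_ev = p_mp * impact_if_mp
print(f"EV of attempting the career: {career_ev:,.0f}")  # ~333,333

# Because the base rate comes from a well-populated reference class,
# the estimate leans on population data rather than on judgment calls
# about this particular candidate.
```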
This isn’t true for the Against Malaria Foundation (AMF) or MIRI. GiveWell uses a cluster-thinking approach, using as many heuristic and empirical approaches to assessing a charity as they can to minimize the chance they get something wrong. There is no prior track record, or frame of reference, for how to build the next AMF, or an effective existential risk reduction organization. We don’t have a table of prior probabilities to estimate the value of a factor based on the characteristics of an item relative to other items in its reference class. So, GiveWell is forced to use methods other than expected value, because otherwise they’ll always fall short of the standards they aspire to.

If it’s a significant factor, among all possible global health interventions, that Rob Mather is the executive director of AMF in its mission to prevent however many millions of malaria cases leading to deaths, then it matters even more who the executive director of MIRI is when the goal is to save the lives of billions of living people and the countless human population of the future. Michael Dickens is an effective altruist who exemplifies this: he values animals highly, and he recently stated he is substantially more likely to donate to MIRI now that their current executive director, Nate Soares, values nonhuman animals and the effect future technologies will have on their welfare, whereas MIRI’s previous executive director, Luke Muehlhauser, does not. Perhaps MIRI should have multiple competitors, each with different staff, pursuing the same ultimate goals in their technical research, but otherwise running their organizations quite differently, to minimize the dependence on one organization to save the world.

And yet, these are only a couple of factors in assessing high-risk, high-return, far-off, and empirically sensitive scenarios. It’s not worth it to nitpick my example of MIRI, its staff, or A.I. risk, because I just wanted to provide one vivid example of how intractable and insufficient expected value calculations are as a lone tool, or even as a primary tool among many, even if we think we’re not wrong about how robust they are.
So, effectively, there is little or no difference between the evaluation approach GiveWell uses and the one I’d ideally endorse, because their way of attacking a problem from so many different angles is a giant algorithm which, while not as simple as we might want, approximates the output of a perfect expected value calculation better than any EV calculation we’ll actually use would.
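To illustrate what I mean, here is a toy sketch of cluster thinking as an aggregation algorithm; the angle names and scores are hypothetical, not GiveWell’s actual criteria:

```python
# A toy sketch of cluster thinking as an aggregation algorithm: rather
# than trusting one explicit EV calculation, combine several independent
# lines of evidence and let no single one dominate. The angle names and
# scores here are hypothetical, not GiveWell's actual criteria.

from statistics import median

def cluster_score(angles: dict) -> float:
    """Aggregate many noisy assessments; the median resists any one
    wildly miscalibrated angle in a way a lone calculation cannot."""
    return median(angles.values())

charity = {
    "explicit_ev_estimate": 9.5,   # one overconfident calculation...
    "track_record": 6.0,           # ...checked against other angles
    "strength_of_evidence": 7.0,
    "room_for_more_funding": 5.5,
    "team_competence": 6.5,
}
print(cluster_score(charity))  # 6.5: the outlier calculation is tamed
```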
Perhaps MIRI should have multiple competitors, each with different staff, pursuing the same ultimate goals in their technical research, but otherwise running their organizations quite differently, to minimize the dependence on one organization to save the world.
AFAICT, this was target #5 of MIRI’s summer fundraiser. As is, MIRI probably lacks the funding to do this.