From an investor's perspective, it does make sense to make both sorts of investments, but only because income has diminishing marginal returns for well-being. If there were no diminishing marginal returns, the best thing for your well-being would be whatever investment has the highest expected return!
[...]
Hence, if you think funding some project in psychedelics really has higher expected (moral) value than anything else, including GiveWell's picks, it would be better (by your lights) to give to that, and to recommend that your listeners do likewise (emphasis added). Put another way, note there's something odd about saying "yeah, I really do think A would have the most impact, and all that matters here is impact, but you and I should do B anyway."
I basically agree with the model here: that there aren't diminishing returns on moral value. That said, a couple of notes on the specific situation:
a) From the perspective of inspiring action, it would make sense to me if Tim saw his listeners as somewhat risk-averse (as most people are!) and recommended GiveWell in the expectation that this would raise more money overall than a higher-risk option would. This approach might still be Tim's best way to maximize his impact as a fundraiser. (I have no idea whether this is something he actually tries to do.)
b) Some of the opportunities Tim has supported (e.g. scientific studies by a particular lab) aren't necessarily in a position to accept small donations, and so wouldn't make sense to recommend to listeners. (That said, there have been times when these opportunities were available to small donors, and he's advertised them.)
From Open Phil's latest set of suggestions for individual donors:
Many of these recommendations appear here because they are particularly good fits for individual donors: they can make use of fairly arbitrary amounts of money from individuals, and in some cases the recommender thought they'd be particularly likely to appeal to readers. This shouldn't be seen as a list of our strongest grantees overall (although of course there may be overlap).
Funnily enough, this actually is analogous to investing: you need a certain amount of capital to invest in certain hedge funds, startups, etc. What a wealthy person does with their portfolio isn't necessarily the same thing they can recommend to a broad audience.
That said, I think one specific, valuable project would be sifting through the landscape of psychedelic funding opportunities.
This also strikes me as valuable, though in light of point (b) above, you might want to select "best in class" funding opportunities for donors of different sizes (e.g. the best place to give if you plan to donate under $1000).
That said, this is possibly worse than creating some kind of psychedelics fund that can combine many small donations into grants of a size that make sense for universities to process. (I wouldn't be surprised if this existed already and I wasn't aware of it.)
Hello Aaron,

Re (a), that would be a sufficient justification, I agree: you suggest the less cost-effective option in the expectation that more people will act on it, so that its expected value is higher nonetheless. My point was that, if you have a fixed total of resources, then as an investor the lower-risk, lower-ROI option can be better (due to diminishing marginal utility), but as a donor you just want to put the fixed total toward the thing with the higher ROI.
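The investor/donor asymmetry can be made concrete with a toy calculation. The numbers below are invented purely for illustration: a "safe" option with a certain 1.05x payoff versus a "risky" option with a 50/50 chance of 0.5x or 2x. An agent with concave (log) utility prefers the safe option, while a risk-neutral agent maximizing expected value prefers the risky one:

```python
import math

# Hypothetical lotteries as (probability, payoff multiple) pairs.
# These numbers are illustrative assumptions, not from the discussion.
safe = [(1.0, 1.05)]               # certain 1.05x (a "tried-and-tested" pick)
risky = [(0.5, 0.5), (0.5, 2.0)]   # 50% halve, 50% double (a "hits-based" bet)

def expected_value(lottery):
    """Linear in payoffs: what a risk-neutral donor maximizes."""
    return sum(p * x for p, x in lottery)

def expected_log_utility(lottery):
    """Concave utility (diminishing marginal returns): a risk-averse investor."""
    return sum(p * math.log(x) for p, x in lottery)

# The donor prefers the risky option: EV 1.25 beats EV 1.05...
assert expected_value(risky) > expected_value(safe)
# ...but the log-utility investor prefers the safe one:
# E[log] is log(1.05) > 0 for safe, and exactly 0 for risky.
assert expected_log_utility(safe) > expected_log_utility(risky)
```

Diminishing marginal utility is what drives the two agents apart; with linear (moral) value, only the expected value matters.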
That said, this is possibly worse than creating some kind of psychedelics fund that can combine many small donations into grants of a size that make sense for universities to process.
I am not aware of one, but I have had a bit of discussion with Jonas Vollmer about setting up a new EA fund that could do this. This hypothetical "human well-being fund" would be an alternative to the global health and development fund. While the latter would (continue to) basically back "tried-and-tested" GiveWell recommendations (which are in global health and development), the former could, inter alia, engage in hits-based giving and take a wider view.