I think this is essentially a straw man?
Everyone I know who doesn’t like donating to AI safety basically thinks that p(influencing the outcome positively) is too low.
Prediction markets seem to be a great business (mostly gambling, with all the problems associated with that), so “funding” in the sense of investing in them could be sensible while “funding” in the donation sense is not. (And then later donating to AMF or similar.)
In general, I’m hesitant to donate to stuff that’s plausibly just a really good business in its own right.
Note that in the context of trading/investing, the two terms are often used differently. There, “mean reversion” often means negative autocorrelation of returns, which can either be ~causal or driven by price-level noise (which in turn is more like a “regression to the mean” idea). If you invest in a mean reversion strategy, you tend to have an actual mechanism in mind, though.
“Regression to the mean” is a less ambiguous term and generally means what you describe.
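To illustrate the noise-driven case, a quick toy simulation (my own setup, not taken from any particular source): adding iid observation noise to a random-walk price produces negative autocorrelation in measured returns, even though nothing causal pulls the price back.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Underlying price: a pure random walk, so true returns have zero autocorrelation.
true_price = np.cumsum(rng.normal(0.0, 1.0, n))
# Observed price: truth plus iid noise (think bid-ask bounce / stale quotes).
observed = true_price + rng.normal(0.0, 0.5, n)

def lag1_autocorr(x: np.ndarray) -> float:
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

print(f"true returns:     {lag1_autocorr(np.diff(true_price)):+.3f}")  # ~ 0
print(f"observed returns: {lag1_autocorr(np.diff(observed)):+.3f}")    # ~ -0.17
```

With noise variance 0.25 and return variance 1, the theoretical lag-1 autocorrelation is -0.25 / 1.5 ≈ -0.17: apparent “mean reversion” with no mechanism at all.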
Thanks a lot Joey, this is definitely worth reading for people in the wider EA space, not only larger scale donors or people working in philanthropy directly.
What I’ve found particularly helpful are the rough quantitative guidelines regarding “charity time consumed per amount donated” and “how to donate as a function of annual amount and time spent per week”.
This is very valuable to better position myself from an earning-to-give perspective.
I think it might be interesting to write a short summary of that for the forum, perhaps targeted more at a median earning-to-give EA? (If that doesn’t exist already.)

Separately, it’s great to see that the book really embraces plurality in which areas donors prioritise, without too strong a view on what’s preferable in the author’s opinion.
Yes thanks
boy did this age in favour of “good judgement” as a factor!
To add a small side note to this, in particular the point around the effectiveness essay:
I suspect the EA community, and 80,000 Hours in particular, tend to underestimate how hard it is to do better by being more ambitious (for the typical engaged EA, at least). E.g. counterfactually increasing your income from 150k to 600k by “being more ambitious”, working longer hours, or negotiating your salary more aggressively is not a very high-probability outcome. Achieving this increase by having better judgement around which area to specialise in is perhaps more likely. Likewise, taking more risk by becoming an entrepreneur does not 10x your career donations in expectation if you already have a decent job.
I would discount the multipliers in 6 & 8 a lot (or at least the component attributable to ambition and risk-taking), though I still believe they are > 1.
Just to point out the obvious: encouraging some of these professionals to think more about earning to give can also be very valuable.
That’s right, but it should be possible to model that in a very similar hierarchical manner and adjust accordingly, too, if you buy into the original framework laid out in the post.
(I haven’t fully thought it through, but it does strike me as fundamentally possible, with the same caveats of not knowing the parameters; not that I’d suggest using the toy-model-style maths in practice.)
Thanks for sharing this; awareness of this type of bias is very relevant for the EA community.
The interpretation of $\sigma_V / \sigma_\mu$ (squared) is subtle in practice. I think a clean way to express it is the (square root of the) ratio of prior precision to “measurement” precision—that fits with the hierarchical model used to explain it in the paper you reference.

In practice this is not trivial to guesstimate.
An interesting rabbit hole to understand this further is the “Tweedie correction” [1].
It should also be pointed out that once you’ve shrunk the estimate, that’s it: EV maximising will pick the posterior winner without accounting for the posterior variance—also something not everyone is comfortable with.
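For concreteness, here is a minimal sketch of the normal-normal shrinkage being discussed (generic parameter names, since I don’t want to misstate the paper’s exact convention): the weight on the raw estimate is governed by the squared ratio of the prior and measurement sigmas.

```python
def shrunk_estimate(x: float, m: float, s_prior: float, s_noise: float) -> float:
    """Posterior mean under prior V ~ N(m, s_prior^2) and a noisy
    estimate x | V ~ N(V, s_noise^2): shrink x toward m."""
    r2 = (s_prior / s_noise) ** 2   # squared sigma ratio
    w = r2 / (1.0 + r2)             # = s_prior^2 / (s_prior^2 + s_noise^2)
    return w * x + (1.0 - w) * m

# E.g. an intervention evaluated at 10x a baseline of 1x, with a noisy
# evaluation (s_noise = 3) and a tight prior (s_prior = 1):
print(shrunk_estimate(10.0, 1.0, s_prior=1.0, s_noise=3.0))  # ~ 1.9, heavily shrunk
```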
[1] https://efron.ckirby.su.domains/papers/2011TweediesFormula.pdf
“Trump is pressuring the Fed to adopt policies that would cause inflation.”
That’s more cleanly expressed as a curve steepener (front end lower, back end higher), so bullish the short end vs bearish the back end.
“AI-induced job loss might cause the Fed to be less concerned about inflation.”
This sounds more bullish for bonds, because fewer inflation concerns → the Fed can cut. Also (more importantly) the Fed has a dual mandate, so weakening employment → cuts.
I personally like CTA-style trend-following strategies but generally advise people who are not very familiar with them against that exposure. If you think about leverage in percentage terms rather than, e.g., in volatility units, I would be a bit nervous.
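To make “volatility units” concrete, a hypothetical sketch (names and numbers are mine): lever a sleeve so it contributes a target annualised volatility, rather than fixing a notional percentage.

```python
import numpy as np

def vol_target_leverage(returns: np.ndarray, target_vol: float = 0.10,
                        periods_per_year: int = 252) -> float:
    """Leverage so the position runs at roughly target_vol annualised."""
    realised_vol = returns.std() * np.sqrt(periods_per_year)
    return target_vol / realised_vol

# E.g. a bond sleeve with ~5% realised vol gets ~2x leverage to hit a
# 10% vol target; a ~20% vol equity sleeve would get ~0.5x.
rng = np.random.default_rng(1)
bond_rets = rng.normal(0.0, 0.05 / np.sqrt(252), 252)
print(round(vol_target_leverage(bond_rets), 2))  # ~ 2.0
```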
Levered risk parity (bonds + equities) can get very spicy if correlations break (eg high inflation), but it has worked well in the secular declining rates environment from the 80s to 2010s.
I think Corey Hoffstein’s work and podcast are great!
I think it’s a bit tricky to reason about “the ecosystem” on a global level.
Directionally, I’d say earning-to-give deserves more popularity (perhaps even as a default, given that direct work seems oversubscribed) and more community support. And yes, it’s hard, and it can feel less rewarding, to find a well-paying job and donate a large fraction of your income!
Some of these people should probably consider earning to give? This has perhaps been de-emphasised more recently, but the counterfactual impact can be very good if there’s no clear match in direct work.
Additionally, I suspect that the “structural issue” is often simply funding constraints.
Is there a running list of small, impactful & very capacity-constrained giving opportunities somewhere?
How do people think about investing vs donating over time in practice?
When coupling the investment and optimal donation problems, there is an apparent paradox if we consider:
* expected utility (EU) is pretty much linear in donations
* risk aversion with respect to one’s own impact is non-altruistic
→ one should allocate everything to the single best donation opportunity
* if there are positive-EV investment opportunities (above the risk-free rate, above market beta, or similar), one can invest and thereby increase EU, because EV translates linearly into EU
→ one should go all in on the best investment
→ seemingly, there is a moral obligation to invest imprudently in order to donate with maximum EU
This is clearly wrong, and suspiciously close to SBF-type (double+epsilon)-or-nothing scenarios.

The way I currently think about it: a one-period problem is a very poor approximation (e.g. if I hit an absorbing barrier down the line, the outcome is bad in EU terms; my future income stream is time-varying, uncertain, and a function of my liquidity), so risk-averse investing is still optimal even when risk-averse donating is not justified.
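A toy simulation of that multi-period point (parameters entirely mine): a repeated bet with positive per-round EV. Going all in maximises one-period EV under linear utility, but almost surely hits the absorbing barrier over many rounds; a fractional (Kelly-style) size compounds instead.

```python
import numpy as np

rng = np.random.default_rng(42)
p, rounds, paths = 0.55, 200, 10_000          # win prob, horizon, sample paths

def terminal_wealth(f: float) -> np.ndarray:
    """Bet fraction f of wealth each round: wealth *= 1+f on a win, 1-f on a loss."""
    wealth = np.ones(paths)
    for _ in range(rounds):
        wins = rng.random(paths) < p
        wealth *= np.where(wins, 1.0 + f, 1.0 - f)
    return wealth

for f in (1.0, 2 * p - 1):                    # all-in vs Kelly fraction f* = 2p - 1
    w = terminal_wealth(f)
    print(f"f={f:.2f}: median={np.median(w):.3f}, P(ruined)={np.mean(w < 1e-9):.3f}")

# f=1.00: median ~0 and ruin probability ~1, even though the theoretical
#         mean is enormous (~1.1^200) — the EV lives on vanishing paths.
# f=0.10: median ~2.7 and no ruined paths.
```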
Additionally (and hand-wavingly), I think that somewhat risk-averse & diversified donations are also good because of beyond-quantifiable and moral uncertainty, because diversification generates new future donation opportunities, because of multi-period-ness and game theory around adverse selection, and because it’s reasonable not to be completely altruistic.
What are other good ways to think about the coupled donation and investment problem?
Thanks for the good write up!
I feel folks who are sympathetic to “hinge of history” type arguments should really think about whether this is the one during their lifetimes, not because of AI but because of US democracy.
It’s reasonable to be somewhat skeptical based on priors, given the statistical power of this (very worthy and interesting!) study. I didn’t dig deeper, but back of the envelope: if you draw from 10,000 iid households with an infant each and a 4% event probability, you’d expect a standard error of around 0.2 percentage points, so there’s not much room for slicing the data much finer, or for additional correlation creeping in, without a decent amount of sampling error. Obviously, with smarter analysis you can do a bit better, and it’s hard and expensive to get more data, but it’s easy to believe the results are biased upwards a bit. The study is a great step in the right direction.
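Spelling out that back-of-envelope number (figures from the comment above; the formula is just the binomial standard error):

```python
import math

n, p = 10_000, 0.04
se = math.sqrt(p * (1 - p) / n)   # SE of an estimated proportion
print(f"{se:.4f}")                # 0.0020 -> ~0.2 percentage points
```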
Yes, exactly. My point is that people are pretty aware, and claiming otherwise is a bit of a straw-man-type fallacy. But I might be wrong; perhaps I interact with different people :D