In the short run, posting recommendations about whatever causes are currently getting mainstream media attention might well attract more donations. But in the long run it’s important that donors be able to trust that EA evaluators will make their donation recommendations honestly and transparently, even when that trades off against marketing to new donors. Prioritizing transparent analysis (even when it leads to conclusions that some donors might find off-putting) over advertising & broad donor appeal is a big part of the difference between EA and traditional charities like Oxfam.
Note that the page says
> Our financial year runs from 1st July to 30th June, i.e. FY 2024 is 1st July 2023 to 30th June 2024.
so the “YTD 2024” numbers are for almost eight months, not two, and accordingly it looks like FY 2024 will have similar total revenue to FY 2023 (and substantially less than FY 2021 and FY 2022).
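To spell out the fiscal-year arithmetic, a minimal sketch (the exact date the “YTD 2024” figures were pulled is my assumption):

```python
from datetime import date

# FY 2024 per the page: 1 Jul 2023 through 30 Jun 2024.
fy_start = date(2023, 7, 1)
# Hypothetical pull date for the "YTD 2024" figures (an assumption).
as_of = date(2024, 2, 26)

months = (as_of.year - fy_start.year) * 12 + (as_of.month - fy_start.month)
print(months)  # 7 complete months: almost eight months into FY 2024,
               # versus the ~two a calendar-year reading would suggest.
```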
I mostly meant the fact that it’s currently restricted to Germany, though also to some extent the focus on interventions that fit into currently-popular anti-AfD narratives over other sorts of governance-improvement or policy-advocacy interventions (without clear justification as to why you believe the former will be more effective).
My objection is not primarily to what Effektiv-Spenden itself published but to the motivation that Sebastian Schienle articulated in the comment I was replying to. As I said there are potentially good reasons to publish such research, I just think “trying to appeal to people who don’t currently care about global effectiveness and hoping to redirect them later” is not one of them.
(I think ideally Effektiv-Spenden would do more to distinguish this from other cause areas, “beta” seems like an understatement, but I wouldn’t ordinarily criticize such web design decisions if there weren’t people here in the comments explicitly saying they were motivated by manipulative marketing considerations.)
As I noted in my first comment, I think this sort of “bait and switch”-like advertising approach risks undermining the key strengths of EA and should generally be avoided. EA’s comparative advantage is in being analytically correct, and so we should tell people what we believe and why, not flatter their prejudices in the hopes that “we can then guide to the place where money goes furthest”. I can see other potential benefits to Effektiv-Spenden or other EAs researching the effectiveness of pro-democracy interventions in Germany, but optimizing for that sort of “gateway drug” effect seems likely to be net harmful.
I think it’s important that EA analysis not start with its bottom line already written. In some situations the most effective altruistic interventions (with a given set of resources) will have partisan political valence and we need to remain open to those possibilities; they’re usually not particularly neglected or tractable but occasional high-leverage opportunities can arise. I’m very skeptical of Effektiv-Spenden’s new fund because it arbitrarily limits its possible conclusions to such a narrow space, but limiting one’s conclusions to exclude that space would be the same sort of mistake.
The focus on a particular country would make sense in the context of career or voting advice but seems very strange in the context of donations since money is mostly internationally fungible (and it’s unlikely that Germany is currently the place where money goes furthest towards the goal of defending democracy). The limited focus might make strategic sense if you thought of this as something like an advertising campaign trying to capitalize on current media attention and then eventually divert the additional donors to less arbitrarily circumscribed cause areas (as suggested by your third bullet point), but I think that sort of relating to donors as customers to advertise to rather than fellow participants in collaborative truth-seeking risks undermining confidence in Effektiv Spenden and the principles that make EA work.
How would you handle it if your analysis reached the conclusion that the most effective pro-democracy intervention were donating to a particular political party or other not-fully-tax-deductible group? I’m not familiar with the details of German charity law but I would worry that recommending such donations might jeopardize Effektiv Spenden’s own tax-deductible status, while excluding such groups (which seem more likely to be relevant here than for other cause areas) from consideration would further undermine the principle of transparently giving donors the advice that most effectively furthers their goals.
If you’re in charge of investing decisions for a pension fund or sovereign wealth fund or similar, you likely can’t personally derive any benefit from having the fund sell off its bonds and other long-term assets now. You might do this in your personal account but the impact will be small.
For government bonds in particular it also seems relevant that I think most are held by entities that are effectively required to hold them for some reason (e.g. bank capital requirements, pension fund regulations) or otherwise oddly insensitive to their low ROI compared to alternatives. See also the “equity premium puzzle”.
Beyond just taking vacation days, if you’re a bond trader who believes in a very high chance of xrisk in the next five years it might make sense to quit your job and fund your consumption out of your retirement savings. At which point you aren’t a bond trader anymore and your beliefs no longer have much impact on bond prices.
From an altruistic point of view, your money can probably do a lot more good in worlds with longer timelines. During an explosive growth period humanity will be so rich that they will likely be fine without our help, whereas if there’s a long AI winter there will be a lot of people who still need bednets, protection from biological xrisks, and other philanthropic support. Furthermore in the long-timeline worlds there’s a much better chance that your money can actually make a difference in solving AI alignment before AGI is eventually developed. So if anything I think the appropriate altruistic investment approach is the opposite of what this post suggests; even if you think that timelines will be short you should bet that they will be long.
From a personal point of view, it’s likewise true that marginal dollars are much more useful to you during an AI winter than during an explosive growth period (when everyone will be pretty rich anyway), so you should make trades that move money from short-timeline futures to long-timeline ones. But I do agree with the post that short timelines should increase your propensity to consume today. (The “borrow today” proposal is impractical since nobody will actually lend you significant amounts of money unsecured, but you might want to spend down savings faster than you otherwise would.) (Edit: Though the amount it makes sense to consumption-shift is smaller than you might expect.)
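To put rough numbers on how small the consumption shift is, here’s a toy two-period model under assumptions I’m adding for illustration (log utility, and money after a short-timelines transition adding ~nothing for you because everyone is rich anyway):

```python
# Split wealth W between consumption now (c1) and after ~5 years (c2).
# With probability p timelines are short and c2 adds ~no utility;
# otherwise both periods count. Maximizing ln(c1) + (1 - p) * ln(c2)
# subject to c1 + c2 = W gives the first-order condition c1 = W / (2 - p).
W = 100.0
for p in (0.0, 0.25, 0.5, 0.9):
    c1 = W / (2 - p)
    print(f"p={p:.2f}: consume {c1:.1f} now (baseline {W / 2:.1f})")
# Even p = 0.5 only moves near-term consumption from 50 to ~66.7,
# a much smaller shift than a 50% chance of transformation might suggest.
```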
I think a fair number of market participants may have something like a probability estimate for transformative AI within five years and maybe even ten. (For example back when SoftBank was throwing money at everything that looked like a tech company, they justified it with a thesis something like “transformative AI is coming soon”, and this would drive some other market participants to think about the truth of that thesis and its implications even if they wouldn’t otherwise.) But I think you are right that basically no market participants have a probability estimate for transformative AI (or almost anything else) 30 years out; they aren’t trying to make predictions that far out and don’t expect to do significantly better than noise if they did try.
A few years ago I asked around among finance and finance-adjacent friends about whether the interest rates on 30 or 50 year government bonds had implications about what the market or its participants believed regarding xrisk or transformative AI, but eventually became convinced that they do not.
As far as I can tell nobody is even particularly trying to predict 30+ years out. My impression is:
- A typical marginal 30-year bond investor is betting that interest rates will be even lower in 5-10 years, and then they can sell their 30-year bond for a profit since it will have a higher locked-in interest rate than anything being issued then (see the sketch after this list).
- Lots of market actors have a regulatory obligation (e.g. bank capital requirements) to buy government bonds, which drives the interest rate on such bonds down a lot, to the point that it can be significantly negative for long periods even when the market generally expects the economy to grow. Corporate bonds have less of this issue but are almost never issued for such long durations.
- It’s true that the market clearly doesn’t believe in extremely short timelines (like real GDP either doubling or going to zero in the next 5-10 years). But I think it mostly doesn’t have beliefs about 30+ years out, or if it does their impacts on prices are swamped by its beliefs about nearer-term stuff.
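Here’s a numbers-only sketch of the trade in the first bullet above; the 3% and 1.5% yields are invented for illustration:

```python
def bond_price(face, coupon_rate, market_yield, years):
    """Present value of an annual-coupon bond at a given market yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + market_yield) ** years

buy = bond_price(100, 0.03, 0.03, 30)    # issued at par: ~100
# Five years later rates have fallen to 1.5% and 25 years remain;
# the locked-in 3% coupon now trades at a premium.
sell = bond_price(100, 0.03, 0.015, 25)  # ~131
print(f"bought at ~{buy:.0f}, can sell at ~{sell:.0f}")
```

The point is just that a long-duration bond’s price is very sensitive to rate moves, so a trader can profit from a 5-10 year rate view without holding any opinion about year 30.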
Nobody will give you an unsecured loan to fund consumption or donations with most of the money not due for 15+ years; most people in our society who would borrow on such terms would default. (You can get close with some types of student loan, so if there’s education that you’d experience as intrinsically-valued consumption or be able to rapidly apply to philanthropic ends then this post suggests you should perhaps be more willing to borrow to fund it than you would be otherwise, but your personal upside there is pretty limited.)
Is there a link to what OpenPhil considers their existing cause areas? The Open Prompt asks for new cause areas so things that you already fund or intend to fund are presumably ineligible, but while the Cause Exploration Prize page gives some examples it doesn’t link to a clear list of what all of these are. In a few minutes looking around the Openphilanthropy.org site the lists I could find were either much more general than you’re looking for here (lists of thematic areas like “Science for Global Health”) or more specific (lists of individual grants awarded) but I may be missing something.
Maybe, though given the unilateralist’s curse and other issues of the sort discussed by 80k here I think it might not be good for many people currently on the fence about whether to found EA orgs/megaprojects to do so. There might be a shortage of “good” orgs but that’s not necessarily a problem you can solve by throwing founders at it.
It also often seems to me that orgs with the right focus already exist (and founding additional ones with the same focus would just duplicate effort) but are unable to scale up well, and so I suspect “management capacity” is a significant bottleneck for EA. But scaling up organizations is a fundamentally hard problem, and it’s entirely normal for companies doing so to see huge decreases in efficiency (which if they’re lucky are compensated for by economies of scale elsewhere).
> the primary constraint has shifted from money to people
This seems like an incorrect or at best misleading description of the situation. EA plausibly now has more money than it knows what to do with (at least if you want to do better than GiveDirectly) but it also has more people than it knows what to do with. Exactly what the primary constraint is now is hard to know confidently or summarise succinctly, but it’s pretty clearly neither of those. (80k discusses some of the issues with a “people-constrained” framing here.) In general large-scale problems that can be solved by just throwing money or throwing people at them are the exception and not the rule.
For some cause areas the constraint is plausibly direct workers with some particular set of capabilities. But even most people who want to dedicate their careers to EA could not become effective e.g. AI safety researchers no matter how hard they tried. Indeed merely trying may be negative-impact in the typical case, due to the opportunity cost of interviewers’ time etc. (even if EV-positive given the information the applicant has). One of the nice things about money is that it basically can’t hurt, and indeed arguments about the overhead of managing volunteer/unspecialised labour were part of how we wound up with the donation focus in the first place.
I think there is a large fraction of the population for whom donating remains the most good they can do, focusing on whatever problems are still constrained by money (GiveDirectly if nothing else), because the other problems are constrained by capabilities or resources which they don’t personally have or control. The shift from donation focus to direct work focus isn’t just increasing demandingness for these people, it’s telling them they can’t meaningfully contribute at all.

Of course inasmuch as it’s true that a particular direct work job is more impactful than a very large amount of donations, it’s important to be open and honest about this so those who actually do have the required capabilities can make the right decisions and tradeoffs. But this is fundamentally in tension with building a functioning and supportive community, because people need to feel like their community won’t abandon them if they turn out to be unable to get a direct work job (and this is especially true when a lot of the direct work in question is “hits-based” longshots where failure is the norm). I worry that even people who could potentially have extraordinarily high impact as direct workers might be put off by a community that doesn’t seem like it would continue to value them if their direct work plans didn’t pan out.
I really enjoyed this post, but have a few issues that make me less concerned about the problem than the conclusion would suggest:
- Your dismissal in section X of the “weight by simplicity” approach seems weak/wrong to me. You treat it as a point against such an approach that one would pay to “rearrange” people from more complex to simpler worlds, but that seems fine actually, since in that frame it’s moving people from less likely/common worlds to more likely/common ones.
- I lean towards conceptions of what makes a morally relevant agent (or experience) under which there are only countably many of them. It seems like two people with the exact same full life experience history are the same person, and the same seems plausible for two people whose full-life-experience-histories can’t be distinguished by any finite process, in which case each person can be specified by finitely much information and so there are at most countably many of them (see the sketch after this list). I think if you’re willing to put 100% credence on some pretty plausible physics you can maybe even get down to finitely many possible morally distinct people, since entropy and the speed of light may bound how large a person can be.
- My actual current preferred ethics is essentially “what would I prefer if I were going to be assigned at random to one of the morally-relevant lives ever eventually lived” (biting the resulting “sadistic conclusion”-flavoured bullets). For infinite populations this requires that I have some measure on the population, and if I have to choose the measure arbitrarily then I’m subject to most of the criticisms in this post. However I believe the infinite cosmology hypotheses referenced generally come along with fundamental measures? Indeed a measure over all the people one might be seems like it might be necessary for a hypothesis that purports to describe the universe in which we in fact find ourselves. If I have to dismiss hypotheticals that don’t provide me with a measure on the population as ill-formed and assign zero credence to universes without a fundamental measure that’s a point against my approach but I think not a fatal one.
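To spell out the counting step in the second bullet above (my formalization, assuming each full-life-experience-history amounts to a finite description over a countable alphabet $\Sigma$ of distinguishable states): persons then inject into the set of finite strings over $\Sigma$, and

$$\mathrm{Persons} \;\hookrightarrow\; \Sigma^{*} = \bigcup_{n=0}^{\infty} \Sigma^{n}, \qquad |\Sigma| \le \aleph_0 \implies |\Sigma^{*}| \le \aleph_0.$$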
It seems like this issue is basically moot now? Back in 2016-2018 when those OpenPhil and Karnofsky posts were written there was a pretty strong case that monetary policymakers overweighted the risks of inflation relative to the suffering and lost output caused by unemployment. Subsequently there was a political campaign to shift this (which OpenPhil played a part in). As a result, when the pandemic happened the monetary policy response was unprecedentedly accommodative. This was good and made the pandemic much less harmful than it would have been otherwise, at the cost of elevated but very far from catastrophic inflation this year (which seems well worth it given the likely alternative). And indeed Berger in that 80k interview brings the issue up primarily as a past “big win”, mission accomplished, and says it’s unclear whether they will take much further action in this space.
A major case where this is relevant is funding community-building, fundraising, and other “meta” projects. I agree that “just imagine there was a (crude) market in impact certificates, and take the actions you guess you’d take there” is a good strategy, but in that world where are organizations like CEA (or perhaps even Givewell) getting impact certificates to sell? Perhaps whenever someone starts a project they grant some of the impact equity to their local EA group (which in turn grants some of it to CEA), but if so the fraction granted would probably be small, whereas people arguing for meta often seem to be acting like it would be a majority stake.
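As a purely illustrative back-of-envelope (these percentages are mine, not anyone’s actual proposal):

```python
# Suppose a new project grants 10% of its impact equity to the local EA
# group that recruited its founders, and each local group in turn grants
# 20% of its own stake to CEA for enabling the group to exist.
project_to_group = 0.10
group_to_cea = 0.20
cea_stake = project_to_group * group_to_cea
print(f"CEA's stake in the project: {cea_stake:.0%}")  # 2%, far from a majority
```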
Note that those graphs of malaria cases and malaria deaths by year effectively have pretty wide error bars, with different sources disagreeing by a lot:

[chart comparing different sources’ estimates of annual malaria deaths omitted] (source)
Presumably measurement methodology has improved some since 2010 but the above still suggests that the underlying reality is difficult enough to measure that one should not be too confident in a “malaria deaths have flatlined since 2015” narrative. But of course this supports your overall point regarding how much uncertainty there is about everything in this sort of context.