What’s especially interesting is that the one article that kick-started her career was, by truth-orientated standards, quite poor. For example, she suggested that Amazon was able to charge unprofitably low prices by selling equity/debt to raise more cash—but you only have to look at Amazon’s accounts to see that they have been almost entirely self-financing for a long time. This is because Amazon has actually been cashflow positive, in contrast to the impression you would get from Khan’s piece. (More detail on this and other problems here.)
Depressingly, this suggests to me that a good strategy for gaining political power is to pick a growing, popular movement, become an extreme advocate of it, and trust that people will simply ignore the logical problems with the position.
Yup, I agree with that, and am typically happy to make such requested changes.
Thanks very much for doing this useful work! This seems like the sort of project that should definitely exist, but basically inexplicably fails to come about until some random person decides to do it.
I hate to give you more work after you have perhaps already put in more time on this than everyone else combined, but I can think of two things that might make this even more useful:
Conclusions about the types of grants that performed well or badly, e.g.
Did they tend to be larger organisations or individuals?
Were they more speculative or have a concrete roadmap?
Were they research based, skill acquiring or community organising?
Comparisons to other grantmakers.
A 30-40% success ratio isn’t that informative on its own: without a strong sense of what success means to you (readers can’t see what you’ve classified as a failure), we don’t know how good or bad this is for the LTFF. But if we could compare it to the other EA Funds, or to OpenPhil or GiveWell or the SFF, that could give useful context and help people decide which one to donate to.
I’d also be curious about whether evaluators generally should or shouldn’t give the people and organizations being evaluated the chance to respond before publication.
My experience is that it is generally good to share a draft, because organisations can be very touchy about irrelevant details that you don’t really care much about and are happy to correct. If you don’t give them this opportunity they will be annoyed and your credibility will be reduced when the truth comes out, even if it doesn’t have any real logical bearing on your conclusions. This doesn’t protect you against different people in the org having different views on the draft, and some objecting afterwards, but it should get you most of the way there.
On the other hand it is a little harder if you want to be anonymous, perhaps because you are afraid of retribution, and you’re definitely right that it adds a lot of time cost.
I don’t think there’s any obligation to print their response in the main text however. If you think their objections are valid, you should adjust your conclusions; if they are specious, let them duke it out in the comment section. You could include them inline, but I wouldn’t feel obliged to quote verbatim. Something like this would seem perfectly responsible to me:
Organisation X said they were going to research ways to prevent famines using new crop varieties, but seem to lack scientific expertise. In an email they disputed this, pointing to their head of research, Dr Wesley Stadtler, but all his publications are in low quality journals and unrelated fields.
This allows readers to see the other POV, assuming you summarise it fairly, without giving them excessive space on the page or the last word.
I agree that any organisation that is soliciting funds or similar from the public is fair game. It’s unclear to me to what extent this also applies to those which solicit money from a fund like the LTFF, which is itself primarily dependent on soliciting money from the public.
According to the PhilPapers survey, over half of philosophers favour a compatibilist approach to free will—i.e. that free will is compatible with determinism.
I also recommend the LessWrong writing on the subject.
One example might be Dominic Cummings, who is clearly very influenced by EA ideas, even if he has been somewhat (in my opinion unfairly) shunned by the EA community. It seems he was one of the major forces pushing the UK government towards more decisive action on things like lockdowns. See e.g. here or here.
I’m afraid I don’t quite understand why such an org would end up unfunded. Such an organisation is not longtermist or animal rights or global poverty specific, and hence seems to fall within the natural remit of the Meta/Infrastructure fund. Indeed according to the goal of the EAIF it seems like a natural fit:
While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities. [emphasis added]
Nor would this be disallowed by weeatquince’s policy, as no other fund is more appropriate than EAIF:
we aim for the funds to be mutually exclusive. If multiple funds would fund the same project we make the grant from whichever of the Funds seems most appropriate to the project in question.
I think based on my EA Funds experience so far, I’m less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between “EAIF managers think something is good to fund from a longtermist perspective” and “LTFF managers think something is good to fund from a longtermist perspective” (and vice versa for ‘meta’ grants) than you seem to expect.
This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they’re aligned on broad “EA principles” and other fundamental views. I have this view both because of some cases I’ve seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).
Thanks for writing up this detailed response. I agree with your intuition here that ‘review, refer, and review again’ could be quite time consuming.
However, I think it’s worth considering why this is the case. Do we think that the EAIF evaluators are similarly qualified to judge primarily-longtermist activities as the LTFF people, and the difference in views is basically noise? If so, it seems plausible to me that the EAIF evaluators should be able to unilaterally make disbursements from the LTFF money. In this setup, the specific fund you apply to is really about your choice of evaluator, not about your choice of donor, and the fund you donate to is about your choice of cause area, not your choice of evaluator-delegate.
In contrast, if the EAIF people are not as qualified to judge primarily-longtermist (or primarily animal rights, etc.) projects as the specialised funds’ evaluators, then they should probably refer the application early on in the process, prior to doing detailed due diligence etc.
I don’t suppose you would mind clarifying the logical structure here:
The EAIF makes grants towards longtermist projects if a) the grantseeker decided to apply to the EAIF (rather than the Long-Term Future Fund), b) the intervention is at a meta level or aims to build infrastructure in some sense, or c) the work spans multiple causes (whether the case for them is longtermist or not).
My intuitive reading of this (based on the commas, the ‘or’, and the absence of ‘and’) is:
a OR b OR c
i.e., satisfying any one of the three suffices. But I’m guessing that what you meant to write was
a AND (b OR c)
which would seem more sensible?
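The difference between the two readings can be made concrete with a small sketch (the specific truth values are just an illustrative case, not taken from the original policy):

```python
# Two possible readings of the EAIF grant condition, as boolean functions.
# a: applied to the EAIF (rather than the LTFF)
# b: intervention is meta-level / infrastructure-building
# c: work spans multiple causes

def reading_disjunctive(a, b, c):
    """The literal reading: a OR b OR c."""
    return a or b or c

def reading_conjunctive(a, b, c):
    """The presumably intended reading: a AND (b OR c)."""
    return a and (b or c)

# A case where the readings diverge: a meta-level project (b=True)
# whose grantseeker applied to the LTFF, not the EAIF (a=False).
case = dict(a=False, b=True, c=False)
print(reading_disjunctive(**case))  # True  — EAIF funds it despite the LTFF application
print(reading_conjunctive(**case))  # False — funding requires applying to the EAIF
```

Under the disjunctive reading, applying to the EAIF (condition a) would by itself be sufficient, regardless of the project's nature, which is why the conjunctive reading seems more sensible.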
However, I’m not following the suggested reasoning for why we should expect such a preference to be common.
I definitely have the intuition the funds should be essentially non-overlapping. In the past I’ve given to the LTFF, and would be disappointed if it funded something that fit better within one of the other funds that I chose not to donate to.
With non-overlapping funds, donors can choose their allocation between the different areas (any point within the convex hull). If the funds overlap, donors can no longer reach the extremal points. This is effectively a tax on donors who care about, say, EA Meta but not longtermist causes.
Consider the ice-cream case. Most ice-cream places offer Vanilla, Chocolate, Strawberry, Mint etc. as separate flavours. If instead they only offered different blends, someone who hated strawberry—or was allergic to chocolate—would have little recourse. Offering each flavour as a standalone option accommodates both purists and people who want a mixture; indeed, for most products it is possible to buy 100% of one thing if you so desire.
This approach is also common in finance; firms will offer e.g. a Tech Fund, a Healthcare Fund and so on, and let investors decide the relative ratio they want between them. This is also (part of) the reason for the decline of conglomerates—investors want to be able to make their own decisions about which business to invest in, not have it decided by managers.
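The convex-hull point can be sketched numerically. The fund compositions below are made up purely for illustration; the point is only that a donor's achievable allocation is a convex combination of the funds on offer:

```python
# Each fund is a vector of cause-area weights: (longtermist, meta, animal).
# A donor splitting money across funds can only reach convex combinations.

def reachable(funds, weights):
    """Allocation achieved by splitting a donation across `funds` by `weights`."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    n = len(funds[0])
    return tuple(sum(w * f[i] for w, f in zip(weights, funds)) for i in range(n))

# Non-overlapping ("pure") funds: every allocation, including extremes, is reachable.
pure_funds = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(reachable(pure_funds, [0.0, 1.0, 0.0]))  # → (0.0, 1.0, 0.0): 100% meta

# Overlapping ("blended") funds: every fund gives some weight to longtermism,
# so a donor who wants 0% longtermist simply cannot get there.
blended_funds = [(0.6, 0.4, 0.0), (0.3, 0.5, 0.2)]
print(reachable(blended_funds, [0.0, 1.0]))  # → (0.3, 0.5, 0.2): still 30% longtermist
```

With the blended funds, even putting everything into the least-longtermist fund leaves 30% going to longtermist work, which is the "tax" on donors with extreme preferences.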
Thanks for writing up this detailed account of your work; I’m glad the LTFF’s approach here seems to be catching on!
Thanks very much for sharing this! Very interesting. I’m sure I will refer back to this article in the future.
One quick question: when you have the two charts—“Separating out the technical safety researchers and the strategy researchers”—could you make explicit which is which? It’s possible to work it out based on the colour of the dots if you try, of course.
Is it just me or is the landmass on that globe not to scale?
One argument for it is that by being deployed and studied now, we can prove the general technique in time for the next pandemic.
Do you know if it is possible to give to an EA Fund from a DAF?
I was able to do this by disbursing to CEA and writing ‘LTFF’ in the comments.
It’s true that political action is a necessary step towards achieving meaningful, lasting change.
I realise that these articles are meant for a popular audience. However, I was surprised to see this extremely strong claim—the idea that meaningful and lasting change is literally impossible without political action—asserted without any evidence, as if it were self-evident. If anything it seems self-evidently false to me.
For a small-scale example, consider rescuing the archetypal child drowning in a pond. Saving the child is meaningful; doing so might be one of the best things you ever do in your entire life. And the child might easily live another 80 years, outlasting many policies and countries, not to mention the potential for the child to one day have their own children, so it seems like a lasting change. Just because it isn’t political doesn’t mean we should dismiss this.
For a much larger impact, consider Norman Borlaug. His scientific work on new crop varieties hugely increased our capacity to produce food; the common quote that he saved a billion people is probably an exaggeration, but this work clearly had an extraordinarily positive impact on many millions of people. It is true that he did work with governments, but the core of his achievement was scientific, not ‘political action’.
Closer to home, consider GWWC itself. I suspect you would agree that GWWC has had a very meaningful impact, and GWWC, the EA movement it spawned, and the downstream consequences, seem likely to persist for some time. Yet even though some GWWC members have tried to influence politics, the central impact and original motivation were about individual contributions, not political lobbying.
Indeed, this claim is directly contradicted later in the article:
Charity, however, can also play an important role in improving the world. It can help improve lives directly by, for example, providing support to low-income communities. There are some highly effective charities that alleviate suffering, reduce the burdens of disease, and help children receive an education. These are amazing giving opportunities, and many of them will have a lasting impact.
Overall I feel like this article bends over backward to be positive towards political action. Even the ‘Importance of Charity’ section spends around 2⁄3 of the time talking about why charity is good for politics, which seems very misleading. Yes, if we save a girl from malaria, maybe she will grow up to be a politician… but that’s not why we donate to AMF. We donate because malaria is bad and causes a lot of suffering directly. I would rather see this section focus on the core, true reason than on a tertiary one. Not only would this be more honest; I think it would also be more persuasive.
I think a more balanced approach would also include criticism of demagoguery, including perhaps some of the points mentioned here. Your aim should not be merely to persuade the reader that charity is acceptable, but that the types of charity we support are significantly better than most popular political causes. After all, political action often leads to meaningful and lasting negative change!
I think the answer is going to be ‘no’, but I was pleasantly surprised by the next line in the Quora article you linked, which I think represents the real happy outcome:
We are still married 17 years later and have two rowdy boys.
Congratulations on your success, and thanks for writing this up so clearly!
To be clear, I think this specific post was a reasonable fit for the forum, inasmuch as it is a proposal for a newsletter, for the reasons you outlined. I agree the forum should accept ideas that are merely plausibly promising, so they can be refined, create useful discussion, and give people the opportunity for growth. And indeed I did not downvote this post.
The issue I was responding to was the question of whether every instalment of the newsletter should be shared on the forum:
It would be helpful to know if people think I should post each issue on the Forum. I know other newsletters, like EA London, do this but I don’t want to clutter the Forum with posts every 2 weeks if people think it’s too off-topic!
Given that the post explicitly raised the question I think it is perfectly legitimate to answer it in the negative.
Sorry if this was not clear. It actually did not even occur to me that this individual post might be inappropriate, so I made no attempt in my comment to distinguish this from my view.
Have you looked into Street Votes? The idea is to allow individual streets to hold a vote on whether to change their zoning to allow e.g. everyone to add an extra floor. Many housing reform policies hurt existing homeowners, who are then motivated to oppose them; Street Votes aim to make deregulation a win-win-win by allowing homeowners to capture much of the value created.
Also, minor, but the link here does not mention anything about schools:
Socially, more multifamily housing increases diversity and leads to better schools and environments for children.