Thanks! Small correction: Animal Welfare YTD is labeled as $53M, when it looks like the underlying data point is $17M (source and 2023 full-year projections here)
TylerMaule
Both posts contain a more detailed breakdown of inputs, but in short:
80k seems to include every entry in the Open Phil grants database, whereas my sheet filters out items such as criminal justice reform that don’t map to the type of funding I’m attempting to track.
They also add a couple of ‘best guess’ terms to estimate unknown/undocumented funding sources; I do not.
If you expect to take in $3-6M by the end of this year, borrowing say $300k against that already seems totally reasonable.
Not sure if this is possible, but I for one would be happy to donate to LTFF today in exchange for a 120% regrant to the Animal Welfare Fund in December[1]
[1] This would seem to be an abuse of the Open Phil matching, but perhaps that chunk can be exempt
> the comparison-in-practice I’m imagining is (say) $100k real dollars that we’re aware of now vs $140k hypothetical dollars
That is very different from the question that Caleb was answering—I can totally understand your preference for real vs hypothetical dollars.
So these are all reasons that funding upfront is strictly better than in chunks, and I certainly agree. I’m just saying that as a donor, I would have a strong preference for funding 14 researchers in this suboptimal manner vs 10 of similar value paid upfront, and I’m surprised that LTFF doesn’t agree.
Perhaps there are some cases where funding in chunks would be untenable, but that doesn’t seem to be true for most on the list. Again, I’m not saying there is no cost to doing this, but if the space is really as funding-constrained as you say, giving up 40% of the value is an awful lot. Is there not every chance that your next batch of applicants will be just as good, and money will again be tight?
A quick scan of the marginal grants list tells me that many (most?)[1] of these take the form of a salary or stipend over the course of 6-12 months. I don’t understand how the time-value of money could be so out of whack in this case—surely you could grant say half of the requested amount, then do another round in three months once the large donors come around?[2]
IDK, 160% annualized sounds a bit implausible. Surely in that world someone would be acting differently (e.g. recurring donors would roll some budget forward or take out a loan)?
I would be curious to hear from someone on the recipient side who would genuinely prefer $10k in hand to $14k in three months’ time.
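For concreteness, here is a sketch of the arithmetic behind that 160% figure. The numbers are taken from the $10k-now-vs-$14k-in-three-months framing above; the 160% appears to be a simple (non-compounded) annualization of 40% per quarter, and compounding would imply something even more extreme:

```python
# Implied discount rate in "$10k in hand vs $14k in three months' time".
now, later, months = 10_000, 14_000, 3

quarterly = later / now - 1                              # 0.40, i.e. 40% per quarter
simple_annual = quarterly * (12 / months)                # 1.60, the 160% figure
compound_annual = (1 + quarterly) ** (12 / months) - 1   # ~2.84 if compounded

print(f"simple: {simple_annual:.0%}, compounded: {compound_annual:.0%}")
```

Either way, that is far above any plausible cost of borrowing, which is the point being made.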
Regarding the funding aspect:
As far as I can tell, Open Phil has always given the majority of their budget to non-longtermist focus areas.
This is also true of the EA portfolio more broadly.
GiveWell has made grants to less established orgs for several years, and that amount has increased dramatically of late.
IMHO it seems possible to be rigorous with imaginary money, as some are with prediction markets or fantasy football. Particularly so if the exercise feels critical to the success of the platform.
I think the site looks great btw, just pushing back on this :)
Could you not dogfood just as easily with $50 (or fake money in a dev account)?
You may find this spreadsheet useful for that type of information
Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
[3] Although Meta stock is back up since I first wrote this; I would appreciate it if someone could do an update on EA funding
Looking at this table, I expect the non-FTX total is about the same[1]—I’d wager that there is more funding committed now than during the first ~70% of the second wave period.[2]
I think most people have yet to grasp the extent to which markets have bounced back:
The S&P 500 Total Return Index is within 6% of its all-time high, and has only ever spent ~4 months above today’s value
META is −26% from its ATH, but is now at its highest since Jan ’22, and has only ever spent ~10 months above this price
[1] Dustin’s net worth looks to be down about $7Bn from the peak (per his preferred source). Meanwhile, GiveWell (2-3x), Founders Pledge (3x), and GWWC (1.5x) numbers all seem to be higher
[2] I still think that the waves framing is useful and captures the prevailing narrative, tbc
I second these suggestions. To get more specific re cause areas:
Each source uses a different naming convention (and some sources are just blank)
I’d suggest renaming that column ‘labels’ and instead mapping to just a few broadly defined buckets which add up to 100%—I’ve already done much of that mapping here
Borrowing money if short timelines seems reasonable but, as others have said, I’m not at all convinced that betting on long-term interest rates is the right move. In part for this reason, I don’t think we should read financial markets as asserting much at all about AI timelines. A couple of more specific points:
> Remember: if real interest rates are wrong, all financial assets are mispriced. If real interest rates “should” rise three percentage points or more, that is easily hundreds of billions of dollars worth of revaluations. It is unlikely that sharp market participants are leaving billions of dollars on the table.
(a) The trade you’re suggesting could take decades to pay off, and in the meantime might incur significant drawdown. It’s not at all clear that this would be a prudent use of capital for ‘sharp money’.
(b) Even if we suppose that sharps want to bet on this, that bet would be a fraction of their capital, which in turn is a fraction of the total capital in financial markets. If all of the world’s financial assets are mispriced, as you say, why should we expect this to make a dent?
> There are notable examples of markets seeming to be eerily good at forecasting hard-to-anticipate events:
Setting aside that the examples given are inapposite[1], surely there are plenty in both directions? To pick just one notable counterexample: The S&P 500 broke new all-time highs in mid-Feb 2020, only to crash 32% the following month, then rise 70% over the following year. So markets did a very poor job of forecasting COVID, as well as the subsequent response, on a time horizon of just a few months!
[1] Both of these were in rapid response to recent major events (albeit ahead of common wisdom), as opposed to an abstract prediction years in the future
I’m definitely not suggesting a 98% chance of zero, but I do expect the 98% rejected to fare much worse than the 2% accepted on average, yes. The data as well as your interpretation show steeply declining returns even within that top 2%.
I don’t think I implied anything in particular about the qualification level of the average EA. I’m just noting that, given the skewness of this data, there’s an important difference between just clearing the YC bar and being representative of that central estimate.
A couple of nitpicky things, which I don’t think change the bottom line, and have opposing sign in any case:
In most cases, quite a bit of work has gone in prior to starting the YC program (perhaps about a year on average?). This might reduce the yearly value by 10-20%
I think the 12% S&P 500 return cited is the arithmetic average of yearly returns. The geometric average, i.e. the realized rate of return, should be more like 10.4%
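The gap between the two averages is easy to see with a toy example. The yearly returns below are made up for illustration, but they show the general fact: the geometric (compound) average is always at or below the arithmetic one, and the gap widens with volatility:

```python
# Arithmetic vs geometric average of a volatile return series.
# Returns below are illustrative, not actual S&P 500 data.
returns = [0.30, -0.10, 0.25, -0.05, 0.20]

arithmetic = sum(returns) / len(returns)

growth = 1.0
for r in returns:
    growth *= 1 + r                      # cumulative growth of $1
geometric = growth ** (1 / len(returns)) - 1

print(f"arithmetic: {arithmetic:.2%}")   # simple average of yearly returns
print(f"geometric:  {geometric:.2%}")    # realized compound rate, always <=
```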
I worry that this presents the case for entrepreneurship as much stronger than it is[1]
The sample here is companies that went through Y-Combinator, which has a 2% acceptance rate[2]
As stated in the post, roughly all of the value comes from the top 8% of these companies
To take it one step further, 25% of the total valuation comes from the top 0.1%, i.e. the top 5 companies (incl. Stripe & Instacart)
So at best, if a founder is accepted into YC, and talented enough to have the same odds of success as a random prior YC founder, $4M/yr might be a reasonable estimate of the EV from that point. But I guess my model is more like: Stripe and Instacart had great product-market fit and talented founders, and this can make a marginal YC startup look much more valuable than it is.
Yeah, I think we’re on the same page; my point is just that it only takes a single-digit multiple to swamp that consideration, and my model is that charities aren’t usually that close. For example, GiveWell thinks its top charities are ~8x GiveDirectly, so taken at face value a match that displaces 1:1 from GiveDirectly would be ~88% as good as a ‘pure counterfactual’
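The displacement arithmetic behind that ~88% is just a ratio; a quick sketch in units of value-per-dollar, where GiveDirectly = 1 and GiveWell top charities ≈ 8 as stated above:

```python
# Value of a matched dollar under two assumptions about where it came from.
top = 8.0                         # GiveWell top charities ~8x GiveDirectly

pure_counterfactual = top         # matched $ would otherwise not be donated at all
displacing = top - 1.0            # matched $ pulled away from GiveDirectly (value 1)

print(displacing / pure_counterfactual)  # 0.875 -> ~88% as good
```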
Most matches are of the free-for-all variety, meaning the funds will definitely go to some charity; it’s just a question of who gets there first (e.g. Facebook & Every.org). While this might sound like a significant qualifier, it’s almost as good as a pure counterfactual unless you believe that all nonprofits are ~equally effective.
The ‘worst case’ is a matching pool restricted to one specific org, where presumably the funds will go there regardless; the match doesn’t really add anything to your donation.
Conversely, as Lizka noted, even the best counterfactual only makes sense in theory if the recipient org is at least half as effective as the best charity you know of.
I’m not sure I fully understand the last question. It sounds like you’re referring to a matching pool specific to one charity, in which case there’s no downside, but it could be quite different if the pool covers a wider array of nonprofits.
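Lizka’s half-as-effective threshold falls out of simple arithmetic for a fully counterfactual 1:1 match. A sketch, with made-up effectiveness numbers:

```python
# With a 1:1 match, donating to the matched org beats donating directly to
# your best-known charity iff the matched org is at least half as effective.
# Effectiveness numbers below are illustrative, not real cost-effectiveness data.
def value(donation, effectiveness, match_rate=1.0):
    """Total value generated: your donation plus the matched amount."""
    return donation * (1 + match_rate) * effectiveness

best = 10.0                       # value per $ at your best-known charity

print(value(100, 6.0) > 100 * best)   # True:  6.0 > best/2, matched giving wins
print(value(100, 4.0) > 100 * best)   # False: 4.0 < best/2, direct giving wins
```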
I continue to think that animal welfare—in particular the fight against factory farming—is severely under-funded, even compared to other worthy options such as GiveWell top charities.