(Addendum 2023-02-04: Given that Sam Bankman-Fried and Gary Wang appear to have committed serious fraud at FTX, and as a consequence have lost most of their fortunes, it would appear EA is at minus two billionaires since I wrote this. I haven’t thought much on how this changes the forecast, but here’s a rough guess. First, as I mention in a footnote, the model predicts the number of additional billionaires, meaning the net change in EA billionaires from mid-2022 to mid-2027. So the original prediction of 3.5 additional ones would now need 5.5 new billionaires to come true. That seems improbable. I now think ~2 additional billionaires (90% CI: −1 to 7) between now and mid-2027 seems about right, enough to make it ±0 since August 2022. That is because (1) I’m less optimistic about EAs’ money-making abilities, (2) I’m less optimistic about EA membership growth and (3) I think it’s more likely now than before that wealthy EA donors stay anonymous or dissociate from EA. But time will tell.)
Dwarkesh Patel argues that “there will be many more effective altruist billionaires”. He gives three reasons for thinking so:
People who seek glory will be drawn to ambitious and prestigious effective altruist projects. One such project is making a ton of money in order to donate it to effective causes.
Effective altruist wealth creation is a kind of default choice for “young, risk-neutral, ambitious, pro-social tech nerds”, i.e. people who are likelier than usual to become very wealthy. Effective altruists are more risk-tolerant by default, since you don’t get diminishing returns on larger donations the same way you do on increased personal consumption.
These early-stage businesses will be able to recruit talented effective altruists, who will be unusually aligned with the business’s objectives. That’s because if the business is successful, even if you as an employee don’t cash out personally, you’re still having an impact (either because the business’s profits are channelled to good causes, as with FTX, or because the business’s mission is itself good, as with Wave).
The post itself is kind of fuzzy on what “many” means or which time period it’s concerned with, but in a follow-up comment Patel mentions having made an even-odds bet to the effect that there’ll be ≥10 new effective altruist billionaires in the next five years. He also created a Manifold Markets question which puts the probability at 38% as I write this. (A similar question on whether there’ll be ≥1 new, non-crypto, non-inheritance effective altruist billionaire in 2031 is currently at 79%, which seems noticeably more pessimistic.) I commend Patel for putting his money where his mouth is!
Summary
With (I believe) moderate assumptions and a simple model, I predict 3.5 additional effective altruist billionaires in 2027. With more optimistic assumptions, I predict 6.0 additional billionaires. ≥10 additional effective altruist billionaires in the next five years seems improbable. I present these results and the assumptions that produced them and then speculate haphazardly.
Assumptions
If we want to predict how many effective altruist billionaires there will be in 2027, we should attend to base rates.
As far as I know, there are five or six effective altruist billionaires right now, depending on how you count. They are Jaan Tallinn (Skype), Dustin Moskovitz (Facebook), Sam Bankman-Fried (FTX), Gary Wang (FTX) and one unknown person earning to give. We could also count Cari Tuna (Dustin Moskovitz’s wife and cofounder of Open Philanthropy). It’s possible that someone else from FTX is also an effective altruist and a billionaire. Of these, as far as I know only Sam Bankman-Fried and Gary Wang were effective altruists prior to becoming billionaires (the others never had the chance, since effective altruism wasn’t a thing when they made their fortunes).
William MacAskill writes:
Effective altruism has done very well at raising potential funding for our top causes. This was true two years ago: GiveWell was moving hundreds of millions of dollars per year; Open Philanthropy had potential assets of $14 billion from Dustin Moskovitz and Cari Tuna. But the last two years have changed the situation considerably, even compared to that. The primary update comes from the success of FTX: Sam Bankman-Fried has an estimated net worth of $24 billion (though bear in mind the difficulty of valuing crypto assets, and their volatility), and intends to give essentially all of it away. The other EA-aligned FTX early employees add considerably to that total.
There are other prospective major donors, too. Jaan Tallinn, the cofounder of Skype, is an active EA donor. At least one person earning to give (and not related to FTX) has a net worth of over a billion; a number of others are on track to give hundreds of millions in their lifetime. Among Giving Pledge signatories, there are around ten who are at least somewhat sympathetic to either effective altruism or longtermism. And there are a number of other successful entrepreneurs who take EA or longtermism seriously, and who could increase the total aligned funding by a lot. So, while FTX’s rapid growth is obviously unusual, it doesn’t seem like a several-orders-of-magnitude sort of fluke to me, and I think it would be a mistake to think of it as a “black swan” sort of event, in terms of EA-aligned funding.
Let’s make the following assumptions:
Suppose right now there are about 9,500 engaged effective altruists; let’s say we’re 90% certain the number is somewhere between 6,500 and 14,000.[1]
Suppose we’re 90% certain that P(billionaire|effective altruist) is in the 0.0177% to 0.0526% range.[2]
0.0177% is the estimate I get for P(billionaire|Ivy League graduate) (ranging from 0.0082% for Brown University to 0.0535% for Harvard University). 0.0526% is the base rate you get if there are 5 effective altruist billionaires and 9,500 effective altruists (5 ÷ 9,500 = 0.0526%).
This means our model predicts ~2.9 billionaires right now, an underestimate, which makes perfect sense given that we used the 0.0526% figure as an upper bound. You can take this as evidence that the model is conservative, but I take it as evidence that we’re lucky to have five billionaires in our midst. (The model gives ≥5 billionaires in 2022 a probability of ~9%.)
Suppose we’re 90% certain that effective altruism will grow by somewhere between −5% and +40% each year.[3]
This involves updating upward slightly from the current rate (est. median of +14% → median of +18%) due to recent media exposure; I’m not sure if said exposure warrants a larger update.
This implies we’ll add, in expectation, roughly another 12,000 effective altruists from now until 2027, but with a ~31% probability of ≥19,000 (a tripling).
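For concreteness, here’s a small Squiggle snippet, using the same inputs as the model in the appendix, that should roughly reproduce the sanity-check numbers above (the variable names are mine, and the exact outputs will wobble a bit with the Monte Carlo draws; the ≈ values in the comments are just the figures quoted in the text):
numberOfEas2022 = 6.5k to 14k
p_BillionaireIfEa = 0.000177 to (5.0 / 9.5k)
annualEaGrowthRate = -0.05 to 0.4

// implied EA billionaires in 2022 (the "~2.9" and "~9%" figures above)
currentEaBillionaires = numberOfEas2022 * p_BillionaireIfEa
medianCurrentEaBillionaires = quantile(currentEaBillionaires, 0.5)      // ≈ 2.9
p_AtLeast5CurrentEaBillionaires = 1.0 - cdf(currentEaBillionaires, 5)   // ≈ 0.09

// implied additional EAs by 2027 (the "~12,000" and "~31%" figures above)
additionalEas2027 = numberOfEas2022 * ((annualEaGrowthRate + 1) ^ 5) - numberOfEas2022
medianAdditionalEas2027 = quantile(additionalEas2027, 0.5)              // ≈ 12,000
p_19kOrMoreAdditionalEas2027 = 1.0 - cdf(additionalEas2027, 19k)        // ≈ 0.31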
Implications
Given these (pretty generous, in my opinion) assumptions, the probability that we’ll have ≥10 additional effective altruist billionaires in 2027 is roughly 14% (with 37% for ≥5 and 76% for ≥1). That’s way less than Patel’s proposed 50% (or the market’s 38% as of my writing this). Instead, the model predicts we’ll get 3 or 4 additional effective altruist billionaires in that time (which is not bad).[4]
Even if we say with certainty that P(billionaire|effective altruist) is 0.0526%, ignoring the Ivy League base rate, the probability that we’ll have ≥10 additional billionaires in 2027 is roughly 31%. Either way, 50% looks improbable to me: in order to get there, you’d need to assume both P(billionaire|effective altruist) = 0.0526% and a ~25% annual growth rate of engaged effective altruists (meaning we expect the number of effective altruists to triple in five years).
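As a quick sanity check on that last claim, here’s the point-estimate arithmetic behind it (no distributions; the variable names are mine):
numberOfEas2022 = 9.5k
annualEaGrowthRate = 0.25
p_BillionaireIfEa = 5.0 / 9.5k  // 0.0526%
// ~19,500 additional EAs, i.e. roughly a tripling
additionalEas2027 = numberOfEas2022 * ((1 + annualEaGrowthRate) ^ 5) - numberOfEas2022
// ~10.3 expected new billionaires
expectedNewEaBillionaires2027 = additionalEas2027 * p_BillionaireIfEa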
Here are the different scenarios (with median numbers reported):
| P(billionaire if EA) | Number of EAs (2022) | Annual EA growth | P(≥10 EA billionaires in 2027) | New EA billionaires (2027) |
|---|---|---|---|---|
| 3.1‱ | 9,500 | 17.5% | 13.9% | 3.5 |
| 5.3‱ | 9,500 | 17.5% | 31.2% | 6.0 |
| 5.3‱ | 9,500 | 25.0% | >50% | >10 |
I could see the recent success of FTX influencing things in either direction:
Maybe it inspires people to try to do the same thing; this might cause P(billionaire|effective altruist) to increase in the near future.
Maybe it (and the sense that effective altruism now has a lot of money) moves people away from earning to give and towards direct work; this might cause P(billionaire|effective altruist) to decrease in the near future.
What about Patel’s three arguments? As far as I can tell, P(billionaire|effective altruist) is somewhat smaller than P(billionaire|Harvard graduate) in the wake of FTX, though Harvard graduates have some advantages over effective altruists, like being much older on average, better credentialed and perhaps more talented and well-connected. I think this is evidence that there really is something special about effective altruists. They (or I suppose I should say, we) seem to be unusually likely to be very wealthy, and perhaps also unusually likely to become very wealthy. But I also think it suggests that P(billionaire|effective altruist) will at some point regress towards the saner Ivy League mean.
I’ll end with two more speculative thoughts.
First, I’m not sure there’s more glory in effective altruist ventures than there is in traditional start-ups. When I try to think of various founders, the effective altruists are of course more famous within effective altruism and in adjacent spheres, but more broadly I don’t really see a difference: successful non-effective-altruist start-up founders get lots of prestige and attention. This matters because Patel’s first (and to a lesser extent second and third) argument is premised on effective altruism being unusually prestigious.
Second, I think the distinction between “effective altruist → billionaire” and “billionaire → effective altruist” is important. I’d guess that, if we do get ≥10 additional effective altruist billionaires in the next five years, some of them will have become effective altruists only after getting rich (at least one-third, let’s say). But that would not really vindicate Patel’s arguments, which are all reasons to think that effective altruists will become billionaires, not the other way around.
That said, the number of billionaires is steadily increasing, and that’s one reason to think there will be more effective altruist billionaires in the future. There will just be more billionaires period. In fact, if we extrapolate the trends a little more, we’ll all be billionaires one day.
Appendix: Squiggle Model
Here is the Squiggle model used to produce the predictions:
numberOfEas2022 = 6.5k to 14k

// total number of EAs t years after 2022, given an uncertain annual growth rate
numberOfEasAtTime(t) = {
  annualEaGrowthRate = -0.05 to 0.4
  numberOfEas2022 * ((annualEaGrowthRate + 1) ^ t)
}

// EAs added between 2022 and t years later
numberOfAdditionalEasAtTime(t) = numberOfEasAtTime(t) - numberOfEas2022

// base rates: ~231 billionaires / ~1.3M Ivy League graduates, and 5 EA billionaires / ~9,500 EAs
p_BillionaireIfIvyBaseRate = 0.000177
p_BillionaireIfEaBaseRate = 5.0 / 9.5k
p_BillionaireIfEa = p_BillionaireIfIvyBaseRate to p_BillionaireIfEaBaseRate

// additional EA billionaires t years after 2022
numberOfAdditionalEaBillionairesAtTime(t) =
  numberOfAdditionalEasAtTime(t) * p_BillionaireIfEa

{
  // additional billionaires
  numberOfAdditionalEaBillionairesAtTime: numberOfAdditionalEaBillionairesAtTime,
  numberOfAdditionalEaBillionaires2027: numberOfAdditionalEaBillionairesAtTime(5),
  medianAdditionalEaBillionaires2027:
    numberOfAdditionalEaBillionairesAtTime(5) |> quantile(.5),
  p_10AdditionalEaBillionairesIn2027:
    1.0 - numberOfAdditionalEaBillionairesAtTime(5) |> cdf(10),
  // additional EAs
  numberOfAdditionalEasAtTime: numberOfAdditionalEasAtTime,
  numberOfAdditionalEas2027: numberOfAdditionalEasAtTime(5),
  medianAdditionalEas2027: numberOfAdditionalEasAtTime(5) |> quantile(.5),
}
You can paste it directly into a Squiggle playground and play around with the parameters yourself. Note that you may see slightly different results as the precise numbers depend on the random Monte Carlo draws (I think).
[1] 8,455 people have taken the Giving What We Can pledge, but not all of those will be, or are still, effective altruists, and some effective altruists won’t have taken the pledge. There are 21,468 members of the Effective Altruism Facebook group, but I’m confident that many of them aren’t committed effective altruists. In 2019, Rethink Priorities estimated 4,700-10,000 active effective altruists, with a median of 6,500; with a 14% growth rate that’d bring us to 6,500 × 1.14 × 1.14 × 1.14 = 9,600 people at the end (?) of this year, kinda.
[2] If there are 5 effective altruist billionaires and 9,500 effective altruists, we get a base rate for P(billionaire|effective altruist) of 5 ÷ 9,500 = 0.0526%. (A couple of years ago it would’ve been more like 3 ÷ 6,500 = 0.0462% or so.)
But we can also look at Ivy League alumni. They are supposed to be smart, driven and well-connected, and probably stand about as good a chance as effective altruists at becoming billionaires? I get P(billionaire|Ivy League graduate) = 0.0177%. The average age of an Ivy League billionaire is 63.3 years (very close to the overall billionaire average of 64.3 years); effective altruists are generally much younger.
The P(billionaire|Ivy League graduate) estimate went like this. I could easily find Forbes data of all 2021 billionaires and their alma maters. But estimating the number of living graduates proved more difficult. In the end, I found an article saying Yale University had roughly 130,000 living alumni in 2010. I also knew how many active (graduate and undergraduate) students there are right now for each university. I then assumed that the 130,000 number’s still correct, and that the number of alumni scales linearly with the number of current students. That way I estimated the number of living graduates for each university and summed them all together, getting a total of 231 billionaires and 1,307,317 graduates. P(billionaire|Ivy League graduate) = 231 ÷ 1,307,317 = 0.0177%.
[3] Looking at the growth rate of the Facebook group, it seems to be in the +12-20% range for the past few years. Benjamin Todd writes: “I think that if you track [...] ‘total number of people willing to change career (or take other significant steps)’, it’s still growing reasonably (perhaps ~20% per year?)”; elsewhere, he’s estimated a growth rate of 0-30% (median 14%) for committed effective altruists. That sounds reasonable to me, though given the recent media coverage I bump it up to a median of ~18%.
[4] The observant reader may have noticed that the model allows the number of additional billionaires in 2027 to be negative. That makes sense in that we may lose some of the billionaires we currently have (they may no longer be billionaires, or no longer effective altruists), but I don’t know whether Patel is predicting the number of new billionaires, or the difference between how many there are then and how many there are now. E.g. if we get 10 new billionaires but lose one old one, my model would say we have 9 additional ones, but I suspect Patel’s bet would resolve positively (because there are 10 new ones).
Here’s an argument for an EA billionaire advantage.
Suppose you have a $500 million fortune and an opportunity presents itself for a gamble where you have a 1% chance of taking that to $60 billion but a 99% chance of ending up with nothing. Just in dollars this is positive expected value because $600 million > $500 million. But I think most people would reject that bet due to risk aversion and the declining marginal utility of money.
To a highly motivated EA, though, it looks like a better deal so you’re more likely to go for it.
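To make that concrete with rough numbers (all amounts in millions of dollars; the 0.01, i.e. $10k, left over in the losing branch is just a placeholder of mine so the logarithm is defined):
sureWealth = 500
jackpot = 60000
pWin = 0.01
fallbackWealth = 0.01
// expected dollars: ~600 > 500, so a risk-neutral donor takes the gamble
expectedDollars = pWin * jackpot + (1 - pWin) * fallbackWealth
// expected log utility of the gamble is far below log(sureWealth),
// so a log-utility consumer refuses it
expectedLogUtilityGamble = pWin * log(jackpot) + (1 - pWin) * log(fallbackWealth)
logUtilitySureThing = log(sureWealth)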
There’s actually a surplus of high-risk, high-reward people in the world, to the point where people would sacrifice the $500 million for a 1% chance of getting $40 billion. They’re not just paying a premium for the possibility of becoming a billionaire and lording over everyone else; they’re also paying a further premium to compete against all the other people competing for that slot, due to the sheer number of people who psychologically want to become a billionaire and lord over everyone else.
In other words, it becomes a lottery.
Even worse than a lottery, in fact, because the real world has information asymmetry and is rigged to scam you in more complicated ways than lotteries are, such as data poisoning and perimeterless security.
Upvoted because I don’t think this tension is discussed enough, even if only to refute it.
It strikes me that the median non-EA is more risk-averse than EAs should be, so when moving from non-EA to EA you should probably drop some of your risk aversion. But it does also seem true that the top-performing people in your field might disproportionately be people who took negative-EV bets and got lucky, so we don’t necessarily want to be less risk-averse than they are.
^This is a really important point and I completely missed it. It’s similar to how the winner of an auction tends to be the type of person who mistakenly spends more than the item was worth to them (or to anyone). The most visible EAs (billionaires) could be the winners in a game with a massive net loss overall. Crypto is exactly that kind of thing.
Ah yes, for what it’s worth, I do allude to this (as does Patel, who I’m paraphrasing): “Effective altruists are more risk-tolerant by default, since you don’t get diminishing returns on larger donations the same way you do on increased personal consumption.”
I feel like this should be accounted for in the EA base rate, but maybe the effect has gotten or will get more pronounced now as Sam Bankman-Fried is vocal about having this mindset.
I think you’re missing a few billionaires in your 5-6 number.
Jed McCaleb (founder of Ripple) is a funder of SFF.
There are many wealthy crypto people that have either donated to EA causes or are heavily involved that have an illiquid or highly fluctuating net worth due to the crypto markets. I would guess there are 5-10 that were billionaires at some point but likely have high 9 figure net worths now.
Also, do you count people who sympathize with EA ideas as EAs? Fred Ehrsam and Brian Armstrong have both written positively about EA in the past. I have seen a handful of crypto hedge fund managers with 9-10 figure net worths talk about Less Wrong on Twitter, and a few talk about EA.
You can’t really use the S&P 500 as a way to predict these guys’ net worths either.
If there is another crypto bull market and Bitcoin hits $200k, I remember seeing a BOTEC that half of all the new billionaires in the world will be due to crypto.
I’d also add Vitalik Buterin to the list.
Thanks!
I interpret it more strictly than that. One of the markets I mention refers to people “who identify as effective altruists”, and the other as “either a) public self-identification as EA, b) signing the Giving What We Can pledge or c) taking the EA survey and being a 4 or 5 on the engagement axis”.
I suspect this would exclude some/most of the people you mention?
Fwiw, here are the model outputs under some other assumptions about the current number of EA billionaires:
7 current EA billionaires (as upper bound): 4.0 expected new billionaires, 18% chance of ≥10.
7 current EA billionaires (ignoring Ivy League base rate): 8.8 expected new billionaires, 41% chance of ≥10.
10 current EA billionaires (as upper bound): 4.4 expected new billionaires, 27% chance of ≥10.
10 current EA billionaires (ignoring Ivy League base rate): 12.2 expected new billionaires, 55% chance of ≥10.
15 current EA billionaires (as upper bound): 5.3 expected new billionaires, 32% chance of ≥10.
15 current EA billionaires (ignoring Ivy League base rate): 17.3 expected new billionaires, 67% chance of ≥10.
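Concretely, these correspond to tweaking the base-rate lines of the appendix model, roughly like this for the 7-billionaire case (a sketch, not the exact code I ran):
p_BillionaireIfIvyBaseRate = 0.000177
p_BillionaireIfEaBaseRate = 7.0 / 9.5k
// "as upper bound":
p_BillionaireIfEa = p_BillionaireIfIvyBaseRate to p_BillionaireIfEaBaseRate
// "ignoring Ivy League base rate" (a point estimate instead of a range):
// p_BillionaireIfEa = p_BillionaireIfEaBaseRate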
Yeah, true, crypto seems like an interesting wild card which could make the current base rate conservative.
We should start a suite of prediction markets for the future size and funding of EA.
It looks like the implication here is that spending large amounts of money on altcoins, and then earning to give if you get lucky, is one of the best things an EA can do.
This is not true; becoming a crypto billionaire is a losing strategy. Every time crypto billionaires are created, it’s usually because the crypto market cap doubled. Each time it doubles, that is one less time that it can double before it reaches the cap of replacing fiat currency and becoming all money. It will probably stop doubling long before then, because major governments are much more willing to violently defend their fiat currency than most people seem to be aware of.
Also, each time it doubles, media attention also roughly doubles. So the more prevalent crypto is in your mind, the less room for growth remains.
There are even stronger reasons why crypto is obviously a very bad strategy for increasing money, but I am not willing to talk about them in a public forum. However, “trying to become a crypto billionaire” is a much worse idea than it sounds. If watching Silicon Valley turned you off to becoming a genius tech billionaire, then you should know that the crypto approach is so much worse in every way.
“It will probably stop doubling long before then, because major governments are much more willing to violently defend their fiat currency than most people seem to be aware of.”
I think this is wrong as a matter of logic, and very few countries are in that position. Most countries borrow in foreign currency anyway, so they don’t really care that much. The few that do not (the US, the EU) are very unlikely to attack a business sector that is increasingly politically engaged and supported, much less ignore the rule of law in order to address an economic issue that isn’t directly threatening anyone.
On the other hand, yes, I think that the near term market cap is effectively bounded at a level far below that of “all money”—though I’ll caution that my track record predicting crypto is far worse than my other predictions, so take it with a huge grain of salt.
I obviously know little of the billionaire shadow cabal which runs the world, but I’ll comment on this:
Establishing a prominent, highly visible, highly scalable flagship project could greatly increase publicity and the receptiveness of high net worth individuals.
I’ve founded a few advocacy orgs before. People seem much more enthusiastic about supporting a big, ambitious project than a disparate bunch of good ideas. Currently, there are multiple fairly small established organisations, but nothing like “the Gates Foundation seeks to eradicate malaria”. In practical terms, there’s no reason why multiple highly effective initiatives are less worth funding, but it’s a psychology thing IMO.
Do correct me if you think this is wrong, haha.
Downvoting this due to the first sentence, which alleges a cabal of billionaires running the world with effectively zero evidence.
I believe that was a joke (66%).
Makes sense to me!
I updated this post with the following addendum:
Given that Sam Bankman-Fried and Gary Wang appear to have committed serious fraud at FTX, and as a consequence have lost most of their fortunes, it would appear EA is at minus two billionaires since I wrote this. I haven’t thought much on how this changes the forecast, but here’s a rough guess.
First, as I mention in a footnote, the model predicts the number of additional billionaires, meaning the net change in EA billionaires from mid-2022 to mid-2027. So the original prediction of 3.5 additional ones would now need 5.5 new billionaires to come true. That seems improbable. I now think ~2 additional billionaires (90% CI: −1 to 7) between now and mid-2027 seems about right, enough to make it ±0 since August 2022. That is because (1) I’m less optimistic about EAs’ money-making abilities, (2) I’m less optimistic about EA membership growth and (3) I think it’s more likely now than before that wealthy EA donors stay anonymous or dissociate from EA. But time will tell.
Possibly a [fancy term I’m forgetting] upwards bias here. If there had been fewer EA billionaires, it would be less likely that we would be discussing this and that you would be writing this post.
A few very quick points:
1. Neat to see this being modeled! It’s an important set of variables and could use more interesting attempts.
2. Note that you can click “copy share link” in Squiggle to have a link that opens that direct model. https://www.squiggle-language.com/playground/#code=eNqtVE1vgkAQ%2FSsTTmArLkZrStImmpjGU5PaI6kZy6IbYdFlqTHW%2F96hWkMVEJty3Hlf%2B2bD1kjm8XqcRhGqjeEGGCb89vts6AsdK8PVKqUTIYUWGI5XqZjNQj7WSsiZ4RoyjaZcPQdDTNqs3YYHuLO7C9AxOJ2FJz3ZaoEWEQchYcNRJYCB5goysCd%2F2H2fzEQsMSSdvn4lgqktEtt6EuhDQmazJxWv9fwFNadZk9msmzkxu7OHnYZpgGkWUG%2FAseANtOXJXY0MFwDNU9vs1svJQIRhBhaKj4LRx2aACT8Ep9yMOb3eGWyIOVTXZtCCe2qzAEfzCgvqpFT5TzvJaeWr%2Bd16ST%2BN8yxZhsNmKQoeeTDN%2BZSLn6dx9%2BD6hNt66tRI72pts2sd5CPuC5T%2FLQ6fj7BKUWoRctM%2Bmi0nDivjjuSJmUOPq3md5bsfmA7L3Io2xzG5%2BBrcqmHFRvZNVb6zisprsQs6pV%2BDsfsCPtfA6w%3D%3D
3. Some of the key variables can go negative, which worries me. You can use “truncateLeft()” to remove the negative values, or “SampleSet.map({|x| x < 0 ? 0 : x})” to set them to 0, which is likely more appropriate here; see the sketch after this list.
4. I know Nuno did some very similar modeling here, but hasn’t written about it much yet.
https://github.com/quantified-uncertainty/squiggle-models/blob/master/bill-gates-wealth/gates.squiggle
5. Be sure to apply to the upcoming Squiggle competition!
https://forum.effectivealtruism.org/posts/ZrWuy2oAxa6Yh3eAw/usd1-000-squiggle-experimentation-challenge
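For point 3, here’s a rough, untested sketch of what I mean, assuming the definitions from the appendix model are in scope (exact syntax may differ between Squiggle versions):
newEaBillionaires2027 = numberOfAdditionalEaBillionairesAtTime(5)
// drop the negative part of the distribution entirely:
newEaBillionaires2027Truncated = truncateLeft(newEaBillionaires2027, 0)
// or map negative draws to 0 instead, which keeps their probability mass at 0:
// newEaBillionaires2027Clamped = SampleSet.map(SampleSet.fromDist(newEaBillionaires2027), {|x| x < 0 ? 0 : x})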
Thanks!
3 -- I think I mention this in a footnote:
So congrats, you are officially an observant reader. ;) (Edit: Though I realise that I’m muddling things by using “new” when I actually mean the difference between then and now.)
4 -- Nice, looks like he’s modelling future capital (not merely # of billionaires) but seems similar enough. I’m not sure if it’s in a finished state, but I see Nuno’s getting a
chanceOfNewBillionnairePerYearOptimistic
of ~18%, which seems significantly more pessimistic than me; that’s interesting given that some other people here seem to be more optimistic than me.
5 -- Oh, will do!
Some perhaps naïve observations/questions:
Is the P(billionaire|effective altruist) number fairly calculated? It seems most of your model assumes EA billionaires are people who first become an EA, then a billionaire. You do mention this being important but it did not seem like it was integrated into the model. Perhaps it would make sense to instead make two categories and predict each one separately:
One category is billionaires who become EAs later (like Moskovitz). Maybe here a base rate could be EA billionaires / all billionaires in the world. Then you can calculate how many more billionaires there will be by 2027 and get an estimate of how many of these will later decide to donate to EA.
Another refinement here could be to get a sense of how many current billionaires have heard of EA; maybe outreach here is poor, and it might be that billionaires who don’t currently donate to EA start donating in the future (like Musk, although he has heard about EA).
Another is EAs who become billionaires. Not sure who that currently is, but it would have been SBF if that hadn’t gone so badly. I would perhaps even try to use the number of E2G EAs, and not the overall number of EAs, to calculate the base rate here.
Related to point 1, a, ii, above: I feel like using the Ivy rate introduces bias. At least at UPenn, where I went, a seemingly very high proportion of graduates tried to get rich; I feel like this proportion might be in the 20%-70% range. I think this differs from EA, where I feel like the proportion of people really trying to get rich is probably closer to 5%-20% (maybe e.g. the EA survey has ways to find out). Also, if the proportion of Ivy League graduates who become billionaires is significantly higher than that of graduates from all universities, perhaps this is not applicable to EA? Not sure what the makeup in terms of academic credentials is in the 9,500 EA number you use, but it might be reasonable to be less optimistic about EAs’ ability to become billionaires (one anecdotal reason from my experience is that the network one builds at an Ivy League school is a major factor in billionaire success).
A last, and I think minor, point is the timeline. For the number of future EA billionaires you expect to come from future EAs becoming billionaires, you might want to take into account the time it takes from becoming an EA, through deciding to E2G, to finally starting to execute on a plan to become rich. That could take several years, meaning your number might be a bit optimistic for 2027. But then it might be a good estimate for ~2035. One way to get info on this parameter could be to look at CrunchBase for the time from incorporation to billion-dollar valuation, and then add a few years on top of that for the time before incorporation.
That said, I find your analysis super helpful. I am using it to get a sense of the likelihood of financing a bioweapons shelter/refuge in the next few years, and after reading your analysis I became much more enthusiastic about this being possible (I previously used the expected value of Founder’s Pledge and was quite pessimistic about significant funding being available 5-10 years from now). So thanks a ton for doing this analysis and posting it!
Thanks, I’m glad you found it useful!
I’m not sure what you mean by fair exactly, but you’re right that I don’t distinguish the billionaire → EA pipeline from the EA → billionaire pipeline in the model (only mentioning it in text). It seems possible that your proposal of splitting these is good, though that may make it harder to calculate reasonable base rates (are there even any examples of EA → billionaire people left post-FTX?). Numbers on earning-to-give EAs could definitely be useful here if you have them.
Well, you’re probably right that there are reasons why we might expect the Ivy League base rate to be skewed high when using it for EA (the average age of Ivy League alumni being much higher than that of EAs is the most obvious one, IMO). But the Ivy League base rate is actually substantially lower than the EA base rate (at least it was pre-FTX), so including it reduces the overall estimate, and excluding it would increase the overall estimate, which seems like the opposite of what you’re arguing for? It’s definitely possible that EAs are less likely to become and/or be billionaires even than the average Ivy League alumnus/a, but in that case it’s pretty suspicious that there have been several EA billionaires already.
Yeah this is true—there’s definitely some sort of diffusion/momentum thing going on that my model isn’t fully accounting for. (Then again, I think there’s something to be said for “simple model + qualitative reasoning around it” over “complicated model that accounts for everything”!) I guess when I calculate base rates I do only include actual billionaires, not about-to-become billionaires, so it shouldn’t be that biased.
Thanks for responding. Take only what you think is useful from my comments; you have thought much more deeply about this than I have and seem on top of the issues I have raised. Just a couple of responses in case they might be helpful (otherwise please disregard them):
Sorry, I have not seen such numbers. Just thought perhaps there might be some numbers lying around somewhere, e.g. results from surveys. I actually think perhaps the best number would be E2G EAs that pursue for-profit entrepreneurship—not sure if even the quant traders have a high probability of becoming billionaire donors. But this number might be even harder to come by.
I think I would not exclude the Ivy League base rate. Instead some possibilities could be (and please disregard this if it does not seem promising—I have not thought deeply about it!):
Perhaps one path could be to actually discard the EA base rate. My intuition here is that the number of EAs who later become billionaires is so low that the base rate calculated from it does not carry much weight (not sure if statistical significance is the right term here, but something close to it). Instead one could use an adjusted Ivy League base rate, adjusting it based on some assumptions about “strength of talent”, the fraction of the population that pursues becoming rich, and maybe some other factors, which would lower the final estimate.
Alternatively keep both base rates but still adjust the Ivy League base rate downwards due to the observations I made. That should also lower the final estimate.
Your point of having a simple model is a good one—I am not sure how much more accurate the forecast would be by making a more complex model. And I think you point out well in the post that one should not lean too heavily on the model but take into consideration other sources of evidence.
You might already know this, but you can link directly to the Squiggle Model, without the need for copy-pasting.
Thanks! I guess I vaguely sort of might’ve guessed but didn’t really think about it when I wrote that.
Perhaps it is linked here in one place or another (if so, sorry, I was not able to find it after a couple of scans, or my memory is failing me!), but Metaculus also has what I believe is a relevant prediction, at the time of writing putting the chance of another donor on the scale of Effective Ventures in 2026 at 50%.