Private equity investor (E2G)
Co-Treasurer @ EA UK
Trustee @ EA for Christians
Trustee @ ICM UK
Director @ EA Good Governance Project
MBA @ INSEAD
Surely it's not a case of either-or. EA exists because we all found that the existing charity sector was not up to scratch, hence we do want EA to take different approaches. However, I think it's also important to have people from outside EA (but with good value alignment) to provide diversity of thought and make sure there are no blind spots.
Doesn't CEA know any?
Do you get 1 karma just for posting this comment?
Estonia doesn't surprise me. It's very tech-heavy, and EA skews heavily towards tech people.
What were the conditions of the grant? What follow-up was there after the grant was made? Was there a staged payment schedule based on intermediate outputs? If this grant went to a for-profit and no output was produced, can the money be clawed back?
Thanks for the great analysis!
The Leaders Forum's lack of interest in GHD is often communicated as if it means GHD should be deprioritised, but I think a fair amount of the causation runs the other way: historically, people promoting GHD have not been invited to the Leaders Forum.
I think it's similar with engagement. Highly engaged EAs are less likely to support GHD, but that ignores the fact that engagement is defined primarily in terms of direct work, not E2G or careers outside EA; hence people interested in GHD are naturally classified as less engaged even if they are just as committed.
Sure, the claim hides a lot of uncertainties. At a high level the article says "A implies X, Y and Z", but you can't possibly derive all of that information from the single number A. What the article should really say is "X, Y and Z are consistent with the value of A", which is a very different claim.
I don't specifically disagree with X, Y and Z.
I do think you should hedge more given the tower of assumptions underneath.
The title of the post is simultaneously very confident ("the market implies" and "but not more"), but also somewhat imprecise ("trillions" and "value"). It was not clear to me that the point you were trying to make was that the number was high.
Your use of "but not more" implies you were also trying to assert that it was not that high, but I agree with your point above that the market could be even bigger. If you believe it could be much bigger, that seems inconsistent with the title.
I also think "value" and "revenue" are not equivalent, for two reasons (see the toy example after this list):
Value should factor in the consumer surplus
Even if you only look at the producer surplus, you should look at profit, not revenue
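To make the distinction concrete, here's a toy calculation (all numbers invented purely for illustration):

```python
# Toy example: 100 units sold at $10, costing $7/unit to make, with buyers
# willing to pay $13 on average. Revenue, profit, and value all differ.
units = 100
price = 10.0
unit_cost = 7.0
avg_willingness_to_pay = 13.0

revenue = units * price                                      # $1,000
profit = units * (price - unit_cost)                         # $300 (producer surplus)
consumer_surplus = units * (avg_willingness_to_pay - price)  # $300
total_value = profit + consumer_surplus                      # $600

print(f"revenue=${revenue:,.0f}, profit=${profit:,.0f}, "
      f"consumer surplus=${consumer_surplus:,.0f}, value=${total_value:,.0f}")
```

Here revenue ($1,000) overstates producer surplus ($300) and bears no fixed relationship to the total value created ($600).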
Your claim that "the market implies X" is very strong; I think what you mean is that "the share price is consistent with X".
There are a lot of assumptions stacked up (see the sketch after this list):
The share price represents the point at which the marginal buyer and marginal seller transact. If you assume both are rational and trading on fundamentals, then it represents the NPV of future cash flows for the marginal buyer/seller. Note this is not the same as the median/mean expectation.
You can use other market expectations (discount rates etc.) to translate that into a possible cash flow forecast. If you are of the view that AI will fundamentally change the market economy, this assumption seems flawed.
The market does not tell you anything about the profile of those cash flows (i.e. all in the short term vs. spread out over the long term), so you need to make your own assumptions on growth and maturity to get to a cash flow forecast.
You can use assumptions around financing, taxes, capex, etc. to convert from cash flows into pre-tax profit.
Then you need a margin assumption to convert from pre-tax profit to revenue. This seems very difficult to forecast; arguably, margin is at least as important as revenue in determining profit.
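As a minimal sketch of how many free parameters sit in that stack, here's a toy "reverse" valuation. Every input below is an assumption I've invented for illustration; none of them can be read off the share price itself, which is the point:

```python
# Toy reverse valuation: back out the revenue "implied" by a share price.
# Every input below is an invented assumption, not a market observable.
market_cap = 1_000e9    # equity value implied by the share price ($1tn)
discount_rate = 0.08    # assumed discount rate
terminal_growth = 0.02  # assumed long-run growth (the cash flow profile)
tax_rate = 0.21         # assumed taxes/financing (crudely treating FCF ~ net income)
pre_tax_margin = 0.20   # assumed margin

# Gordon growth perpetuity: market_cap = FCF / (r - g)
steady_state_fcf = market_cap * (discount_rate - terminal_growth)  # $60bn
pre_tax_profit = steady_state_fcf / (1 - tax_rate)                 # ~$76bn
implied_revenue = pre_tax_profit / pre_tax_margin                  # ~$380bn

print(f"'implied' revenue: ${implied_revenue / 1e9:.0f}bn")
```

Halve the margin assumption and the "implied" revenue doubles; the share price alone pins down none of these numbers.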
You cannot derive revenue, or the shape of revenue growth, from a stock price. I think what you mean is the consensus forecasts that support the current share price. The title of the article is provably incorrect.
Thanks for sharing. It's a start, but it's certainly not a proven Theory of Change. For example, Tetlock himself said that nebulous long-term forecasts are hard to do because there's no feedback loop. Hence, a prediction market on an existential risk will be inherently flawed.
Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta-topic that supports other goals.
Thanks for sharing, but nobody on that thread seems to be able to explain it! Most people there, like here, seem very sceptical.
You might be right, but just to add a data point: I was featured in an article in 2016. I don't regret it, but I was careful about (1) the journalist and (2) what I said on the record.
I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I've yet to see a robust case for it being an effective use of charitable funds (if there is one, please point me to it). I'm worried we are not being objective enough, and are trying to find the facts that support the conclusion rather than the other way round.
Insolvency happens on an entity-by-entity basis. I don't know which FTX entity gave money to EA orgs (if anyone knows, please say), or whether it went first via the founders personally. I would have thought it's possible that FTX fully repays its creditors, so there is value in the shares, but then FTX's investors go after the founders personally and they are declared bankrupt.
I'm hugely in favour of principles-first as I think it builds a healthier community. However, my concern is that if you try too hard to be cause-neutral, you end up artificially constrained. For example, Global Health and Wellbeing is often a good entry point to the concept of effectiveness. Then, once people are focused on maximisation, it's easier to introduce Animal Welfare and X-Risk.
When you are a start-up non-profit, it can be hard to find competent people outside your social circle, which is why I created the EA Good Governance Project to make life easier for people.
I think all the points still stand, although the numbers in the example look dated now! Anything you think should be changed?