Yes, maybe we should model it as 10bn Meta and 10bn other stuff, now worth 2.5bn and 7bn.
Something like that seems right.
Though I don’t believe the Forbes figure for Dustin – it seems to assume that most of his wealth comes from his Meta stake, and he’s said on Twitter that he’d sold a lot of it (and hopefully invested in stuff that’s gone up). Last spring, Open Phil also said their assets were down 40% when Meta was down 60%, which could suggest Meta was about half of their assets at that point. So I expect the Forbes figure is too low.
Also seems like there might be some new donors in the last year.
The original rumour was that Alameda would have net negative assets if FTT coin collapsed. Though there’s a chance it’s actually OK.
Thank you for writing—seems like a good summary of what I’ve seen.
Also maybe of interest, I think the current EA portfolio is actually allocated pretty well in line with what this heuristic would imply:
I think the bigger issue might be that it’s currently demoralising not to work on AI or meta. So I appreciate this post as an exploration of ways to make it more intuitive that everyone shouldn’t work on AI.
Upvoted, though I was struck by this part of the appendix:
Appendix: Other reasons to diverge from argmax
In order of how much we endorse them:
Value of information is usually incredibly high
You don’t know the whole option set
Moral uncertainty
Concave altruism (i.e. Jensen’s inequality!)
The optimiser’s curse
Worldview diversification
Principled risk aversion, as at GiveWell
Strategic skulduggery
Decrease variance of your portfolio for more impact compounding(?)
While I totally agree with the conclusion of the post (the community should have a portfolio of causes, and not invest everything in the top cause), I feel very unsure that a lot of these reasons are good ones for spreading out from the most promising cause.
Or if they do imply spreading out, they don’t obviously justify the standard EA alternatives to AI Risk.
I noticed I felt like I was disagreeing with your reasons for not doing argmax throughout the post, and this list helped to explain why.
1. Starting with VOI, that assumes that you can get significant information about how good a cause is by having people work on it. In practice, a ton of uncertainty is about scale and neglectedness, and having people work on the cause doesn’t tell you much about that. Global priorities research usually seems more useful.
VOI would also imply working on causes that might be top, but that we’re very uncertain about. So, for example, that probably wouldn’t imply that longtermist-interested people should work on global health or factory farming, but rather spread out over lots of weirder small causes, like those listed here: https://80000hours.org/problem-profiles/#less-developed-areas
2. “You don’t know the whole option set” sounds like a similar issue to VOI. It would imply trying to go and explore totally new areas, rather than working on familiar EA priorities.
3. Many approaches to moral uncertainty suggest that you factor in uncertainty in your choice of values, but then you just choose the best option with respect to those values. It doesn’t obviously suggest supporting multiple causes.
4. Concave altruism. Personally I think there are increasing returns on the level of orgs, but I don’t think there are significant increasing returns at the level of cause areas. (And that post is more about exploring the implications of concave altruism rather than making the case it actually applies to EA cause selection.)
5. Optimizer’s curse. This seems like a reason to think your best guess isn’t as good as you think, rather than to support multiple causes.
6. Worldview diversification. This isn’t really an independent reason to spread out – it’s just the name of Open Phil’s approach to spreading out (which they believe for other reasons).
7. Risk aversion. I don’t think we should be risk averse about utility, so agree with your low ranking of it.
8. Strategic skulduggery. This actually seems like one of the clearest reasons to spread out.
9. Decreased variance. I agree with you this is probably not a big factor.
You didn’t add diminishing returns to your list, though I think you’d rank it near the top. I’d also agree it’s a factor, though I also think it’s often oversold. E.g. if there are short-term bottlenecks in AI that create diminishing returns, it’s likely the best response is to invest in career capital and wait for the bottlenecks to disappear, rather than to switch into a totally different cause. You also need big increases in resources to get enough diminishing returns to change the cause ranking, e.g. if you think AI safety is 10x as effective as pandemics at the margin, the AI safety community might need to grow to roughly 10x the size of the biosecurity community before the marginal returns equalise.
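As a quick sanity check on that last claim, here’s a toy model in Python. It assumes log returns (marginal impact proportional to effectiveness divided by resources invested), which is just an illustrative assumption, not a claim about the actual returns curves:

```python
# Toy model: with log returns, marginal impact ∝ effectiveness / resources.
# (Illustrative assumption only; the real returns curves are unknown.)
def marginal_impact(effectiveness, resources):
    return effectiveness / resources

# A cause that's 10x as effective at equal funding (AI safety in the example)
# vs. a baseline cause (biosecurity) with a tenth of the resources:
ai = marginal_impact(10, 100)
bio = marginal_impact(1, 10)
# ai == bio: the margins only equalise once the 10x-effective cause
# has roughly 10x the resources.
```

So under this (strong) assumption, the cause ranking at the margin only flips after a roughly tenfold difference in community size.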
I tried to summarise what I think the good reasons for spreading out are here.
For a longtermist, I think those considerations would suggest a picture like:
50% into the top 1-3 issues
20% into the next couple of issues
20% into exploring a wide range of issues that might be top
10% into other popular issues
If I had to list a single biggest driver, it would be personal fit / idiosyncratic opportunities, which can easily produce orders of magnitude differences in what different people should focus on.
The question of how to factor in neartermism (or other alternatives to AI-focused longtermism) seems harder. It could easily imply still betting everything on AI, though putting some % of resources into neartermism in proportion to your credence in it also seems sensible.
Some more here about how worldview diversification can imply a wide range of allocations depending on how you apply it: https://twitter.com/ben_j_todd/status/1528409711170699264
Also see Brian Christian briefly suggesting a cause allocation rule a bit like this towards the end of 80k’s interview with him.
We were discussing solutions to the explore-exploit problem, and one is that you allocate resources in proportion to your credence the option is best.
Isn’t there a similar argument to covid – the best case scenario is bounded at zero hours lost, while the bound on the worst case is very high (losing tens of thousands of hours), so increasing uncertainty will tend to drag up the mean?
The current forecasts try to account for a bunch of uncertainty, but we should also add in model uncertainty – and model uncertainty seems like it could be really high (for the reasons in Dan’s comment). So this would suggest we should round up rather than down.
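One way to see the “bounded below, unbounded above” point is a quick simulation with toy numbers (not the actual forecasts): hours lost can’t go below zero, so widening the distribution truncates the left tail and drags the mean up.

```python
import random
import statistics

random.seed(0)

# Toy simulation: losses are bounded below at zero but unbounded above,
# so more uncertainty (a wider distribution) raises the expected loss.
low_uncertainty = [max(0, random.gauss(100, 20)) for _ in range(100_000)]
high_uncertainty = [max(0, random.gauss(100, 500)) for _ in range(100_000)]

statistics.mean(low_uncertainty)   # stays close to 100
statistics.mean(high_uncertainty)  # substantially higher than 100
```

The asymmetry only bites because of the zero bound; a symmetric unbounded distribution would keep the same mean as it widened.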
Does anyone have comments on how the huge degree of uncertainty should change our actions?
My intuition is that high uncertainty is an argument in favour of leaving town, since it seems like it’s worse to underestimate the risks (death) than overestimate them (some inconvenience).
Or another idea might be that if the risk turns out to be lower than the best guess, you can just return to town. Whereas if it was higher, then you’re dead. So leaving town is a more robust strategy.
But I could also imagine this is totally the wrong way of thinking about it. E.g. maybe if we’re thinking about hours of EA work (instead of personal hours), we should be pretty risk neutral about them, and just go with expected hours lost vs. gained.
Thanks, this is helpful!
Just a heads up, my latest estimate is here in footnote 15:
I went for 300 technical researchers, though I say the estimate seems more likely to be too high than too low, so it seems like we’re pretty close.
(My old Twitter thread was off the top of my head, and missing the last year of growth.)
Glad to see more thorough work on this question :)
I think of Shapley values as just one way of assigning credit in a way to optimise incentives, but from what I’ve seen, it’s not obvious it’s the best one. (In general, I haven’t seen any principled way of assigning credit that always seems best.)
Good point that CFT is a more science-grounded alternative to IFS. Tim LeBon is a therapist in the UK who has seen community members, does remote sessions, and offers CFT.
This is a cool post. Though I wonder if there’s some switching between longtermism as a theory of what matters vs. the idea that you should try to act over long timescales (as with a 200yr foundation).
You could be a longtermist in terms of what you think is of moral value, but believe the best way to benefit the future (instrumentally) is to ‘make it to the next rung’. Indeed this seems like what Toby, Will etc. basically think.
Maybe then the relevant reference class is more something like ‘people motivated to help future generations, but who did that by solving certain problems of the day’, which seems a very broad and maybe successful reference class – e.g. encompassing many scientists, activists etc.
PS shouldn’t the environmentalism, climate change and anti-nuclear movements be part of your reference class?
I agree the basic version of this objection doesn’t work, but my understanding is there’s a more sophisticated version here:
Where he talks about how the case for an individual being longtermist rests on a tiny probability of shifting the entire future.
I think the response to this might be that if we aggregate together the longtermist community, then collectively it’s no longer pascalian. But this feels a bit arbitrary.
Anyway, I partly wanted to post this paper here for further reading, and I’m partly interested in responses.
Short update on the situation: https://twitter.com/ben_j_todd/status/1561100678654672896
where you can dilute the philosophy more and more, and as you do so, EA becomes “contentless” in that it becomes closer to just “fund cool stuff no one else is really doing.”
Makes sense. It just seems to me that the diluted version still implies interesting & important things.
Or from the other direction, I think it’s possible to move in the direction of taking utilitarianism more seriously, without having to accept all of the most wacky implications.
So you just keep going, performing the arbitrage. In other moral theories, which aren’t based on arbitrage, but perhaps rights, or duties (just to throw out an example), they don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions
I agree something like trying to maximise might be at the core of the issue (where utilitarianism is just one ethical theory that’s into maximising).
However, I don’t think it’s easy to avoid by switching to a rights- or duties-based theory. Philosophers focused on rights still think that if you can save 10 lives with little cost to yourself, that’s a good thing to do. And that if you can save 100 lives with the same cost, that’s an even better thing to do. A theory that said all that matters ethically is not violating rights would be really weird.
Or another example is that all theories of population ethics seem to have unpleasant conclusions, even the non-totalising ones.
If one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
I don’t see why it implies nihilism. I think it shows that moral philosophy is hard, so we should moderate our views, and consider a variety of perspectives, rather than bet everything on a single theory like utilitarianism.
I think once you take account of diminishing returns and the non-robustness of the x-risk estimates, there’s a good chance you’d end up estimating that GiveWell’s cost per present life saved is lower than that of donating to xrisk. So the claim ‘neartermists should donate to xrisk’ seems likely wrong.
I agree with Carl the US govt should spend more on x-risk, even just to protect their own citizens.
I think the typical person is not a neartermist, so might well end up thinking x-risk is more cost-effective than GiveWell if they thought it through. Though it would depend a lot on what considerations you include or not.
From a pure messaging pov, I agree we should default to opening with “there might be an xrisk soon” rather than “there might be trillions of future generations”, since it’s the most important message and is more likely to be well-received. I see that as the strategy of the Precipice, or of pieces directly pitching AI xrisk. But I think it’s also important to promote longtermism independently, and/or mention it as an additional reason to prioritise xrisk a few steps after opening with it.
Thanks, I made some edits!
This seems plausible to me but not obvious, in particular for AI risk the field seems pre-paradigmatic such that there aren’t necessarily “low-hanging fruit” to be plucked; and it’s unclear whether previous efforts besides field-building have even been net positive in total.
Agree, though my best guess is something like diminishing log returns the whole way down. (Or maybe even a bit of increasing returns within the first $100m / 100 people.)
I just wanted to leave a very quick comment (sorry I’m not able to engage more deeply).
I think yours is an interesting line of criticism, since it tries to get to the heart of what EA actually is.
My understanding of your criticism is that EA attempts to find an interesting middle ground between full utilitarianism and regular sensible do-gooding, whereas you claim there isn’t one. In particular, we can impose limits on utilitarianism, but they’re arbitrary and make EA contentless. Does this seem like a reasonable summary?
I think the best argument that an interesting middle ground exists is the fact that EAs in practice have come up with ways of doing good that aren’t standard (e.g. only a couple of percent of US philanthropy is spent on evidence-backed global health at best, and << 1% on ending factory farming + AI safety + ending pandemics).
More theoretically, I see EA as being about something like “maximising global wellbeing while respecting other values”. This is different from regular sensible do-gooding in being more impartial, more wellbeing focused and more focused on finding the very best ways to contribute (rather than the merely good). I think another way EA is different is being more skeptical, open to weird ideas and trying harder to take a bayesian, science-aligned approach to finding better ways to help. (Cf the key values of EA.)
However, it’s also different from utilitarianism since you can practice these values without saying maximising hedonic utility is the only thing that matters, or a moral obligation.
(Another way to understand EA is the claim that we should pay more attention to consequences, given the current state of the world, but not that only consequences matter.)
You could respond that there’s arbitrariness in how to adjudicate conflicts between maximising wellbeing and other values. I basically agree.
But I think all moral theories imply crazy things (“poison”) if taken to extremes (e.g. not lying to the axe murderer as a deontologist; deep ecologists who think we should end humanity to preserve the environment; people who hold the person-affecting view in population ethics who say there’s nothing bad about creating a being whose life is only suffering).
So imposing some level of arbitrary cut-offs on your moral views is unavoidable. The best we can do is think hard about the tradeoffs between different useful moral positions, and try to come up with an overall course of action that’s non-terrible on the balance of them.
I agree thinking xrisk reduction is the top priority likely depends on caring significantly about future people (e.g. thinking the value of future generations is at least 10-100x the present).
A key issue I don’t see discussed very much is diminishing returns to x-risk reduction. The first $1bn spent on xrisk reduction is (I’d guess) very cost-effective, but over the next few decades, it’s likely that at least tens of billions will be spent on it, maybe hundreds. Additional donations only add at that margin, where the returns are probably 10-100x lower than the first billion. So a strict neartermist could easily think AMF is more cost-effective.
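To make that concrete, here’s a back-of-the-envelope sketch, assuming log returns (marginal cost-effectiveness inversely proportional to cumulative spending); the numbers are purely illustrative:

```python
# Illustrative only: assume marginal cost-effectiveness ∝ 1 / cumulative spend.
def marginal_cost_effectiveness(cumulative_spend_bn):
    return 1 / cumulative_spend_bn

at_first_billion = marginal_cost_effectiveness(1)
at_margin = marginal_cost_effectiveness(30)   # after ~$30bn has been spent

at_first_billion / at_margin  # 30.0: the margin is ~30x less cost-effective
```

Under that assumption, tens of billions of cumulative spending is enough to push marginal returns into the 10-100x-lower range described above.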
That said, I think it’s fair to say it doesn’t depend on something like “strong longtermism”. Common sense ethics cares about future generations, and I think suggests we should do far more about xrisk and GCR reduction than we do today.
I wrote about this in an 80k newsletter last autumn:
Carl Shulman on the common-sense case for existential risk work and its practical implications (#112)
Here’s the basic argument:
Reducing existential risk by 1 percentage point would save the lives of 3.3 million Americans in expectation.
The US government is typically willing to spend over $5 million to save a life.
So, if the reduction can be achieved for under $16.5 trillion, it would pass a government cost-benefit analysis.
If you can reduce existential risk by 1 percentage point for under $165 billion, the cost-benefit ratio would be over 100 — no longtermism or cosmopolitanism needed.
Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not.
Toby Ord, author of The Precipice, thinks there’s a 16% chance of existential risk before 2100. Could we get that down to 15%, if we invested $234 billion?
I think yes. Less than $300 million is spent on the top priorities for reducing risk each year, so $200 billion would be a massive expansion.
The issue is marginal returns, and where the margin will end up. While it might be possible to reduce existential risk by 1 percentage point now for $10 billion — saving lives 20 times more cheaply than GiveWell’s top charities — reducing it by another percentage point might take $100 billion+, which would be under 2x as cost-effective as GiveWell’s top charities.
I don’t know how much is going to be spent on existential risk reduction over the coming decades, or how quickly returns will diminish. [Edit: But it seems plausible to me it’ll be over $100bn and it’ll be more expensive to reduce x-risk than these estimates.]
Overall, I think reducing existential risk is a competitor for the top issue even just considering the cost of saving the life of someone in the present generation, though it’s not clear it’s the top issue.
My bottom line is that you only need to put moderate weight on longtermism to make reducing existential risk seem like the top priority.
(Note: I made some edits to the above in response to Eli’s comment.)
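The first steps of the arithmetic in that argument can be checked in a couple of lines (figures taken directly from the text):

```python
# Figures from the argument above.
us_lives_saved = 3.3e6   # lives saved per 1pp of x-risk reduction (US only)
value_per_life = 5e6     # typical US government willingness to pay per life

# Break-even cost for a government cost-benefit analysis:
breakeven = us_lives_saved * value_per_life   # $16.5 trillion

# Cost at which the benefit-cost ratio exceeds 100:
ratio_100_cost = breakeven / 100              # $165 billion
```

(The $234 billion GiveWell-comparison figure depends on global lives saved and GiveWell’s cost per life, which aren’t given in the excerpt, so it isn’t reproduced here.)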