As one practical upshot of this, I helped 80k make a round of updates to their online articles in light of FTX.
More on what moderation might mean in practice here:
https://80000hours.org/2023/05/moderation-in-doing-good/
I really liked this post. I find the undergrad degree metaphor useful in some ways (focus on succeeding in your studies over 4 years, but give a bit of thought to how it sets you up for the next stage), but since the end game is only 3 years (rather than a normal 40 year career), overall it seems like your pacing and attitude could end up pretty different.
Maybe the analogy could be an undergrad where their only goal is to get the “best” graduate degree possible. Then high school = early game, undergrad = midgame, graduate degree = end game. Maybe you could think of “best” as “produce the most novel & useful result in their thesis” or “get the highest possible score”.
Another analogy could be an 18 year old undergrad who knows they need to retire at age 25, but I expect that throws the analogy off a lot, since if impact is the goal, I wouldn’t spend 4 years in college in that scenario.
Yes, I agree that could be a good outcome to emerge from this – a very salient example of this kind of thinking going wrong is one of the most helpful things for convincing people to stop doing it.
The 80k team are still discussing it internally and hope to say more at a later date.
Speaking personally, Holden’s comments (e.g. in Vox) resonated with me. I wish I’d done more to investigate what happened at Alameda.
If you’re tracking the annual change in wealth between two points in time, you should try to make sure the start and end points are either both market peaks or both market lows.
e.g. from 2017 to 2021, or 2019 to Nov 2022 would be valid periods for tracking crypto.
If you instead track from e.g. 2019 to 2021, then you’re probably going to overestimate the underlying growth.
Another option would be to average over periods significantly longer than a typical market cycle (e.g. 10yr).
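To make the endpoint issue concrete, here’s a minimal sketch – the values and dates are made up purely for illustration, not real market data:

```python
# Illustrative only: how the choice of endpoints changes an annualised growth estimate.

def annualised_growth(start_value, end_value, years):
    """Compound annual growth rate between two points in time."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical portfolio values (invented numbers):
peak_2017, trough_2019, peak_2021 = 100, 60, 300

# Peak-to-peak spans a full cycle, so it better reflects the underlying trend:
print(annualised_growth(peak_2017, peak_2021, 4))    # ~0.32, i.e. ~32%/yr

# Trough-to-peak mixes the cyclical recovery into the trend, so it overestimates:
print(annualised_growth(trough_2019, peak_2021, 2))  # ~1.24, i.e. ~124%/yr
```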
Thanks!
My social life pretty much only involves people who aren’t in the EA community at this point.
Small comment on this:
actively fighting against EA communities to become silos and for EA enterprises to have workers outside EA communities would be of great value
It depends on the org, but for smaller orgs that are focused on EA community building, I still think it could make sense for them to pretty much only hire people who are very interested in EA. I wouldn’t say the same about e.g. most biorisk orgs though.
Yes, I’d basically agree – he didn’t influence the thinking that much but he did impact what you could get paid to do (and that could also have long term impacts on the structure of the community).
Though, given income inequality, the latter problem seems very hard to solve.
That’s useful—my ‘naive optimizing’ thing isn’t supposed to be the same thing as naive utilitarianism, but I do find it hard to pin down the exact trait that’s the issue here, and those are interesting points about confidence maybe not being the key thing.
Just a small clarification, I’m not saying we should abandon the practical project, but it could make sense to (relatively speaking) focus on more tractable areas / dial down ambitions / tilt more towards the intellectual project until we’ve established more operational competence.
I also agree dissociating has significant costs that need to be weighed against the other reasons.
I’d agree a high degree of confidence + strong willingness to act combined with many other ideologies leads to bad stuff.
Though I still think some ideologies encourage maximisation more than others.
Utilitarianism is much more explicit in its maximisation than most ideologies, plus it (at least superficially) actively undermines the normal safeguards against dangerous maximisation (virtues, the law, and moral rules) by pointing out these can be overridden for the greater good.
Like yes there are extreme environmentalists and that’s bad, but normally when someone takes on an ideology like environmentalism, they don’t also explicitly & automatically say that the environment is all that matters and that it’s in principle permissible to cheat & lie in order to benefit the environment.
nor do I think it would be a good counterargument if it was.
Definitely not saying it has any bearing on the truth of utilitarianism (in general I don’t think recent events have much bearing on the truth of anything). My original point was about who EA should try to attract, as a practical matter.
I basically agree and try to emphasize personality much more than ideology in the post.
That said, it doesn’t seem like a big leap to think that confidence in an ideology that says you need to maximise a single value to the exclusion of all else could lead to dangerously optimizing behaviour...
Having more concern for the wellbeing of others is not the problematic part. But utilitarianism is more than that.
Moreover it could still be true that confidence in utilitarianism is in practice correlated with these dangerous traits.
I expect it’s the negative component in the two factor model that’s the problem, rather than the positive component you highlight. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5900580/
Yes, something like that: he of course had an influence on what you could get paid for (which seems hard to avoid given some people have more money than others), but I don’t think he had a big influence on people’s thinking about cause prioritisation.
Hmm that does seem worse than I expected.
I wonder if it’s because GWWC has cut back outreach or is getting less promotion by other groups (whereas 80k continued its marketing as before, plus a lot of 80k’s reach is passive), or whether it points to outreach actually being harder now.
I had you in mind as a good utilitarian when writing :)
Good point that just saying ‘naively optimizing’ utilitarians is probably clearest most of the time. I was looking for other words that would denote high confidence and a willingness to act without qualms.
Thank you!
Yes, I think if you make update A due to a single data point, and then realise you shouldn’t have updated on a single data point, you should undo update A, since your original reasoning was wrong.
That aside, in the general case I think it can sometimes be justified to update a lot on a single data point. E.g. if you think an event was very unlikely, and then that event happens, your new probability estimate for the event will normally go up a lot.
In other cases, if you already have lots of relevant data points, then adding a single extra one won’t have much impact.
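As a toy illustration of the ‘very unlikely event’ case (all numbers invented):

```python
# Why one observation of a supposedly rare event can justify a big update (Bayes' rule).

prior_rare   = 0.9    # credence in model A: the event happens ~0.1% of the time
prior_common = 0.1    # credence in model B: the event happens ~10% of the time

likelihood_rare, likelihood_common = 0.001, 0.10

# After seeing the event happen once:
posterior_common = (prior_common * likelihood_common) / (
    prior_common * likelihood_common + prior_rare * likelihood_rare
)
print(round(posterior_common, 2))  # ~0.92: one data point moves credence in model B from 10% to ~92%
```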
One extra point is that I think people have focused too much on SBF. The other founders also said they supported EA. So if we’re just counting up people, it’s more than one.
Some of the ones I’ve seen:
80k’s metrics seem unaffected so far, and it’s one of the biggest drivers of community growth.
I’ve also heard that EAG(x) applications didn’t seem affected.
GWWC pledgers were down, though a lot of that is due to them not doing a pledge drive in Dec. My guess is that if they do a pledge drive next Dec similar to previous ones, the results will be similar. The baseline of monthly pledges seems ~similar.
Good point that there are reasons why work could get more valuable the closer you are – I should have mentioned that.
Also interesting points about option value.
I agree with many of the points, especially that personal fit is a big deal, that doing a PhD is also in part useful research (rather than pure career capital), and that what matters is time until the x-risk rather than arbitrary definitions of AGI. But I’m worried this bit understates the reasons for urgency quite a bit:
you might then conclude that delaying your career by 6 years would cause it to have 41⁄91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it.
This is on a model in which work becomes moot after a transition point. But it’s assuming work before the transition is equally valuable no matter the year.
However, the AI safety community is probably growing at 40%+ per year, and (if timelines are short) it’ll probably still be growing at 10-20%+ when the potential existential risk arrives. This roughly means that moving a year of labour invested in AI safety community building one year earlier makes it 10-20% more valuable. This would mean an extra year of labour now is worth 3-10x as much as one in 10 years, all else equal.
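As a rough sanity check on that multiplier (a sketch under the stated growth assumptions, not a forecast):

```python
# If a year of labour is worth 10-20% more for each year earlier it happens,
# labour now vs. labour in 10 years differs by roughly (1 + r)**10.

for r in (0.10, 0.15, 0.20):
    print(f"{r:.0%}/yr -> {(1 + r) ** 10:.1f}x")
# 10%/yr -> 2.6x, 15%/yr -> 4.0x, 20%/yr -> 6.2x
# Factoring in the faster (~40%) growth over the next few years pushes the
# overall factor higher, towards the 3-10x range mentioned above.
```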
Or to turn to direct work: there are serial dependencies, i.e. 100 people working for 1 year won’t achieve anywhere near as much as 10 people working for 10 years. This again could make extra labour on alignment now many times more valuable than work in 10 years.
Another argument is that since the community can have more impact in worlds with short timelines, people should act as if timelines are shorter than their best guess.
This could mean, for instance, that if your best guess is a 33% chance of timelines under 10 years, 33% on medium timelines and 33% on longer timelines, it might be optimal for people to allocate effort something like 70% / 15% / 15% across those scenarios. Yes, in this world some people would still focus on long-term career capital, but less than normal.
Estimating the size of these effects is hard – my main point is that they can be very large, especially as timelines get short. (Many of these effects feel like they go up non-linearly as timelines shorten.)
So, while I agree that if someone’s median timeline estimate changes from, say, 25 years to 20 years, that’s not going to have much effect on the question, I think how much to focus on career capital could be pretty sensitive to, say, your probability of <10-year timelines.
This is great. I’m so glad this analysis has finally been done!
One quick idea: should ‘speed-ups’ be renamed ‘accelerations’? I think I’d find that clearer personally, and would help to disambiguate it from earlier uses of ‘speed-up’ (e.g. in Nick’s thesis).