Just making sure you saw Eli Nathan’s comment saying that this year plus next year they didn’t/won’t hit venue capacity so you’re not taking anybody’s spot
No worries!
tl;dr I wouldn’t put too much weight on my tweet saying I think I probably wouldn’t be working on x-risk if I knew the world would end in 1,000 years, and I don’t think my (wild) guess at the tractability of x-risk mitigation is particularly pessimistic.
***
Nice post. I agree with the overall message of the post as well as much of Ben’s comment on it. In particular, I think emphasizing the significance of future generations, and not just reducing x-risk, might end up as a crux for how much you care about: a) how much an intervention reduces x-risk v. GCRs that are unlikely to (directly?) lead to existential catastrophe; b) whether civilization just manages to avoid x-risk v. ends up on track to flourish as much as possible and last a lot longer than (e.g.) the typical mammalian species.
***
That said, I mostly came here to quickly caution against putting too much weight on this:
Alyssa Vance’s tweet about whether the longtermism debate is academic
Howie’s response is interesting to me, as it implies a fairly pessimistic assessment of tractability of x-risks given that 1,000 years would shift the calculations presented here by over an OOM (>10 generations).
That’s mostly for the general reason that I put approximately one Reply Tweet’s worth of effort into it. But here are some specific reasons not to put too much weight on it, and also why I don’t think it implies a particularly pessimistic assessment of the tractability of x-risk.[1]
I’m not sure I endorse the tweet on reflection, mostly because of the next point.
I’m not sure if my tweet was accounting for the (expected) size of future generations. A claim I’d feel better about would be “I probably wouldn’t be working on x-risk reduction if I knew there would only be ~10X more beings in the future than are alive today or if I thought the value of future generations was only ~10X more than the present.” My views on the importance of the next 1,000 years depend a lot on whether generations in the coming century are order(s) of magnitude bigger than the current generation (which seems possible if there are lots of morally relevant digital minds). [2]
I haven’t thought hard about this but I think my estimates of the cost-effectiveness of the top non-longtermist opportunities are probably higher than implied by your table.
I think I put more weight on the badness of being in a factory farm and (probably?) the significance of chickens than implied by Thomas’s estimate.
I think the very best global health interventions are probably more leveraged than giving to GiveWell.
I find animal welfare and global poverty more intuitively motivating than working on x-risk, so the case for working on x-risk had to be pretty strong to get me to spend my career on it. (Partly for reasons I endorse, partly for reasons I don’t.)
I think the experience I had at the time I switched the focus of my career was probably more relevant to global health and animal welfare than x-risk reduction.
My claim was about what I would in fact be doing, not about what I ought to be doing.
[1] Actual view: wildly uncertain and it’s been a while since I last thought about this but something like the numbers from Ben’s newsletter or what’s implied by the 0.01% fund seem within the realm of plausibility to me. Note that, as Ben says, this is my guess for the marginal dollar. I’d guess the cost effectiveness of the average dollar is higher and I might say something different if you caught me on a different day.
[2] Otoh, conditional on the world ending in 1,000 years maybe it’s a lot less likely that we ended up with lots of digital minds?
I agree with Caleb that theoretical AIS, infinite ethics, and rationality techniques don’t currently seem to be overprioritized. I don’t think there are all that many people working full-time on theoretical AIS (I would have guessed less than 20). I’d guess less than 1 FTE on infinite ethics. And not a ton on rationality, either.
Maybe your point is more about academic or theoretical research in general? I think FHI and MIRI have both gotten smaller over the last couple of years and CSER’s work seems less theoretical. But you might still think there’s too much overall?
My impression is that there’s much more of a supply of empirical AI safety research and, maybe, theoretical AI safety research written by part-time researchers on LessWrong. My impression is that this isn’t the kind of thing you’re talking about though.
There’s a nearby claim I agree with, which is that object level work on specific cause areas seems undervalued relative to “meta” work.
Academic-like research into interesting areas of AI risk is far easier to get funded by many funders than direct research into, say, vaccine production pipelines.
My guess is that this has less to do with valuing theory or interestingness over practical work, and more to do with funders prioritizing AI over bio. Curious if you disagree.
Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt strong affinity for EA. Including people who you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs.
Also, I think even people like this who haven’t gone through the disillusionment pipeline are often a lot more uncertain about many (though not all) things than most newcomers would guess.
Thanks for writing this post. I think it improved my understanding of this phenomenon and I’ve recommended reading it to others.
Hopefully this doesn’t feel nitpicky but if you’d be up for sharing, I’d be pretty interested in roughly how many people you’re thinking of:
“I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, they throw themselves into EA, invest years of their life and tons of their energy into the movement, but gradually become disillusioned and then fade away without having the energy or motivation to articulate why.”
I’m just wondering whether I should update toward this being much more prevalent than I already thought it was.
“My best guess is that I don’t think we would have a strong connection to Hanson without Eliezer”
Fwiw, I found Eliezer through Robin Hanson.
111% of deaths averted?
Agree they have a bunch of very obnoxious business practices. Just fyi you can change a setting so nobody can see whose pages you look at.
I think Open Philanthropy has done some of this. For example:
The Open Philanthropy technical reports I’ve relied on have had significant external expert review. Machine learning researchers reviewed Bio Anchors; neuroscientists reviewed Brain Computation; economists reviewed Explosive Growth; academics focused on relevant topics in uncertainty and/or probability reviewed Semi-informative Priors. (Some of these reviews had significant points of disagreement, but none of these points seemed to be cases where the reports contradicted a clear consensus of experts or literature.)
Was this in the deleted tweet? The tweet I see is just him tagging someone with an exclamation point. I don’t really think it would be accurate to characterise that as “Torres supports the ‘voluntary human extinction’ movement”.
Yeah that does sell me a bit more on delegating choice.
I think that’s an improvement though “delegating” sounds a bit formal and it’s usually the authority doing the delegating. Would “deferring on views” vs “deferring on decisions” get what you want?
Thanks for writing this post. I think it’s really useful to distinguish the two types of deference and push the conversation toward the question of when to defer as opposed to how good it is in general.
But I think “deferring to authority” is bad branding (as you worry about below) and I’m not sure your definition totally captures what you mean. I think it’s probably worth changing even though I haven’t come up with great alternatives.
Branding. To my ear, deferring to authority has a very negative connotation. It suggests deferring to a preexisting authority because they have power over you, not deferring to a person/norm/institution/process because you’re bought into the value of coordination. Relatedly, it doesn’t seem like the most natural phrase to capture a lot of your central examples.
Substantive definition. I don’t think “adopting someone else’s view because of a social contract to do so” is exactly what you mean. It suggests that if someone were not to defer in one of these cases, they’d be violating a social contract (or at least a norm or expectation), whereas I think you want to include lots of instances where that’s not the case (e.g. you might defer as a solution to the unilateralist’s curse even if you were under no implicit contract to do so). Most of your examples also seem to be more about acting based on someone else’s view or a norm/rule/process/institution and not really about adopting their view.[1] This seems important since I think you’re trying to create space for people to coordinate by acting against their own view while continuing to hold that view.
I actually think the epistemics v. action distinction is cleaner, so I might base your categories just on whether you’re changing your views v. your actions (though I suspect you considered this and decided against it).
***
Brainstorm of other names for non-epistemic deferring (none are great). Pragmatic deferring. Action deferring. Praxeological deferring (eww). Deferring for coordination.
(I actually suspect that you might just want to call this something other than deferring).
[1] Technically, you could say you’re adopting the view that you should take some action but that seems confusing.
He also talked to Rob Wiblin. https://80000hours.org/podcast/episodes/russ-roberts-effective-altruism-empirical-research-utilitarianism/
Glad you had a great experience, though I wish it could have been even better! I think it’s pretty counterintuitive that most of the value from many conferences comes from 1:1s, so it totally makes sense that it took you by surprise.
I wouldn’t expect people to have found these in advance but, for next time, there’s a bunch of good “how to do EAG(X)” and “how to do 1:1s” posts on the forum. Some non-comprehensive examples:
Generally the EAG and EA conferences tags seem good for finding this stuff.
https://forum.effectivealtruism.org/tag/effective-altruism-conferences
https://forum.effectivealtruism.org/tag/effective-altruism-global
I know the conference organizers have a ton of considerations when deciding how much content to blast at attendees (and it’s easy for things to sink to the bottom of everybody’s inbox) but some of these might be cool for them to send to future attendees.
I think going to conferences where you don’t know a bunch of people already is pretty scary so I’m impressed that you went for it anyway!
+1. Fwiw, I was going to subscribe and then didn’t when I saw how long it was.
Fwiw, I did some light research (hours not days) a few years ago on the differences between US and European think tanks and the (perhaps out of date) conventional wisdom seemed to be that they play a relatively outsized role in the U.S. (there are various hypotheses for why). So that may be one reason for the US/UK difference (though funders being in the US and many other issues could also be playing a role).
I also donated $5,800 (though not due to this post).
[Unfortunately didn’t have time to read this whole post but thought it was worth chiming in with a narrow point.]
I like Manager Tools and have recommended it but my impression is that some of their advice is better optimized for big, somewhat corporate organizations than for small startups and small nonprofits with an unusual amount of trust among staff. I’d usually recommend somebody pair MT with a source of advice targeted at startups (e.g. CEO Within, though the topics only partially overlap) so you know when the advice differs and can pick between them.