I think ASB’s recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.
Exactly my plan! Of course, this was 100% on purpose!
[April fool’s post] Proposal to assign careers by birthdate
Super helpful, thanks for your answer!
Very glad to have helped!
Great post, thanks for writing it! This framing comes up a lot in my own thinking, and it's great to see it written up. I think it's probably healthy to be afraid of missing a big multiplier.
I’d like to slightly push back on this assumption:
If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours
First, I agree with other commenters and yourself that it's important not to overwork, and to look after your own happiness and wellbeing.
Having said that, I do think working harder can often have superlinear returns, especially if done right (otherwise it can have sublinear or even negative returns). One way to think about this is that the last year of one's career is often the most impactful in expectation, since by then one has built up seniority and experience. Working harder is effectively a way of "pulling that last year forward a bit" and adding another, even higher-impact year after it, i.e. a year that is much more impactful than your average year; hence the superlinearity.
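To make the compounding intuition concrete, here's a minimal toy sketch (every number in it is made up purely for illustration): if impact per year grows with accumulated experience, and experience compounds faster the harder you work, then 60% effort yields well under 60% of full-effort career impact.

```python
# Toy model: each year's output is proportional to effort and to
# accumulated experience, which itself compounds with effort.
# All numbers are invented purely for illustration.

def career_impact(effort: float, years: int = 40, growth: float = 0.05) -> float:
    experience = 1.0
    total = 0.0
    for _ in range(years):
        total += effort * experience
        experience *= 1 + growth * effort  # harder work -> faster compounding
    return total

full = career_impact(effort=1.0)
partial = career_impact(effort=0.6)
print(f"60% effort yields {partial / full:.0%} of full-effort impact")
# Under these assumptions the ratio comes out around 37%, not 60%.
```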
Another way to think about this is intuitively. If Sam Bankman-Fried had only worked 20% as hard, would he have made $4 billion instead of $20 billion? No. He would probably have made much much less. Speed is rewarded in the economy and working hard is one way to be fast.
This makes the multiplier from working harder bigger than you would intuitively expect and possibly more important relative to judgment than you suggest.
(I’m not saying everyone reading this should work harder. Some should, some shouldn’t.)
Edited shortly after posting to add: There’s also a more straightforward reason that the claim “judgment is more important than dedication” is technically true but potentially misleading: one way to get better judgment is investing time into researching thorny issues. That seems to be what Holden Karnofsky has been doing for a decent fraction of his career.
This is great, thanks!
(I accidentally asked multiple versions of this question at once.
This was because I got the following error message when submitting:
“Cannot read properties of undefined (reading ‘currentUser’)”
So I wrongly assumed the submission didn’t work.
@moderators)
[Question] What’s the best machine learning newsletter? How do you keep up to date?
Make the best case against: "Some non-trivial fraction of highly talented EAs should be part- or full-time community builders." The argument in favor points to the multiplier effect. Assume you could attract the equivalent of one person as good as yourself to EA within one year of full-time community building. If this person is young and we assume a career lasts 40 years, then you have invested 1 year and gotten 40 years in return. By the most naive estimate, then, a roughly 1/40 chance of attracting one you-equivalent would be break-even. Arguably that's too optimistic and the true break-even probability is somewhat higher than 1/40, maybe 1/10. But that seems prima facie very doable in a full-time year. Hence, a non-trivial fraction of highly talented EAs should do community building.
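Spelled out as arithmetic, under the assumptions above (a 40-year career for the recruit and one year of your time invested):

```python
# Break-even arithmetic for the naive estimate above.
# Assumptions: 1 year of full-time community building; a recruit
# as impactful as you with 40 working years ahead of them.

career_years = 40       # assumed remaining career of a recruit
years_invested = 1      # your time cost
p_recruit = 1 / 40      # chance of attracting one you-equivalent

expected_gain = p_recruit * career_years  # expected you-equivalent years
print(expected_gain, ">=", years_invested)  # 1.0 >= 1: exactly break-even
# At p = 1/10 instead, the expected return is 4 you-equivalent
# years per year invested.
```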
(I have a few arguments against the above reasoning in mind, but I believe listing them here would be against the spirit of this question. I still would be genuinely interested to see this question be red-teamed.)
EA Hotel / CEEALAR except at EA Hubs
Effective Altruism
CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of whom there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), an EA Hotel there seems justified on the same grounds. (E.g. intercontinental flights can sometimes cost more than a month's rent in those cities.)
Studying stimulants’ and anti-depressants’ long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)
Economic Growth, Effective Altruism
Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days when you're not taking it)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance and short-term productivity gains? What are the long-term health effects? Does it affect longevity?
Some people think that taking stimulants regularly provides a large net boost to productivity. If true, that would mean we could relatively cheaply increase the productivity of the world and thereby increase economic growth. In particular, it could also increase the productivity of the EA community (which might be unusually willing to act on such information), including AI and biorisk researchers.

My very superficial impression is that many academics avoid researching the use of drugs in healthy people and that there is a bias against taking medications unless "needed".
So I'd be interested to see a large-scale, long-term RCT (randomized controlled trial) investigating these issues. I'm unsure exactly how to do this. One straightforward design would be two randomized groups: give the substance to one of them for X months or years, and see whether that group has higher earnings after that period. Ideally, the study participants would perform office jobs rather than manual labor (since that is where most of the value would come from), perhaps even especially cognitively demanding tasks such as research or trading. In the case of research, metrics such as the number of published articles or citations would likely make more sense than earnings.
One could also check health outcomes, probably including mental health. Multiple substances or dosing regimens could be tested at once by adding study arms.
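For concreteness, here is a minimal sketch of the core comparison such a study might run (all data is simulated, and the small effect on log-earnings is invented purely for illustration):

```python
# Sketch of the core comparison in such an RCT.
# All data below is simulated; the assumed 3% shift in
# log-earnings is invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # participants per arm

# Hypothetical log-earnings after X years in each arm.
control = rng.normal(loc=11.00, scale=0.5, size=n)
treatment = rng.normal(loc=11.03, scale=0.5, size=n)

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f}")
# Health and mental-health outcomes could be compared the same way,
# and further substances or dosing regimens tested as extra arms.
```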
Notes:
- One of the reasons I would most care about this might be improving the effectiveness of people working to prevent X-risks, but I’m not sure whether that fits neatly into any of your categories (and whether that’s intentional).
- I’m not at all sure whether this is a good idea, but tried to err on the side of over-including since that seems productive while brainstorming; I haven’t thought about this much.
- It may be that such studies exist and I just don’t know about them (pointers?).
- It may be impossible to get this approved by ethics boards, though hopefully in some country somewhere it could happen?
Thanks for this! I think it’s good for people to suggest new pitches in general. And this one would certainly allow me to give a much cleaner pitch to non-EA friends than rambling about a handful of premises and what they lead to and why (I should work on my pitching in general!). I think I’ll try this.
I think I would personally have found this pitch slightly less convincing than current EA pitches, though. But one problem is that I and almost everyone reading this were selected for liking the standard pitch (though, to be fair, whatever selection mechanism EA currently has seems pretty good at attracting smart people and might be worth preserving). It would be interesting to see some experimentation; perhaps some EA group could try this?
I like “(very or most) dedicated EA”. Works well for (2) and maybe (4).
From the perspective of a grant-maker, thinking about reduction in absolute basis points makes sense, of course. But for comparing numbers between people, relative risk reduction might be more useful?
E.g. if one person thinks total AI risk is 50% and another thinks it's 10%, it seems to me the most natural way for them to talk about funding opportunities is to say that an opportunity reduces total AI risk by X% in relative terms.
Talking about absolute risk reduction compresses these two numbers into one, which is more compact, but makes it harder to see where disagreements come from.
It's a minor point, but with estimates of total existential risk sometimes more than an order of magnitude apart, it actually gets quite important, I think.
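To make that concrete with assumed numbers: suppose some grant removes 0.5 percentage points of total AI risk. The same absolute number looks very different to the two people above in relative terms.

```python
# How one absolute risk reduction looks to two people with different
# estimates of total AI risk (all numbers assumed for illustration).
absolute_reduction = 0.005  # 0.5 percentage points from some grant

for name, total_risk in [("Person A", 0.50), ("Person B", 0.10)]:
    relative = absolute_reduction / total_risk
    print(f"{name}: {relative:.0%} of their total AI risk estimate")
# Person A: 1%; Person B: 5%. The relative numbers make it visible
# where the disagreement about the grant's value comes from.
```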
Also, given astronomical waste arguments etc., I'd expect most longtermists would not switch away from longtermism even if the absolute risk reduction per dollar were an order of magnitude smaller.
Edited to add: Having said that, I'm really glad this question was asked! I agree that it's in some sense the key metric to aim for, and it makes sense to discuss it!
What about individual Earning To Givers?
Is there some central place where all the people doing Earning To Give are listed, ideally with some minimal info about their maximum grant size and the type of things they're happy to fund?
If not, how do ETGers usually find non-standard funding opportunities? Just personal networks?
Hey Sean, thanks so much for letting me know this! Best of luck whatever you do!
I assume those estimates are for current margins? So if I were considering whether to do earning to give, I should use lower estimates for how much risk reduction my money could buy, given that EA has billions to be spent already and due to diminishing returns your estimates would look much worse after those had been spent?
Great question! Guarding Against Pandemics does advocacy for pandemic prevention and, for legal reasons, needs many small donors for some of its work. Here's an excerpt from their post on the EA Forum:
While GAP’s lobbying work (e.g. talking to members of Congress) is already well-funded by Sam Bankman-Fried and others, another important part of GAP’s work is supporting elected officials from both parties who will advocate for biosecurity and pandemic preparedness. U.S. campaign contribution limits require that this work be supported by many small-to-medium-dollar donors.
I haven’t donated yet myself, in part because I did my yearly donations before learning about them. But I also only know very little about the organisation, so this is not an endorsement — it just felt like a very good example of something where small donors could plausibly beat large ones.
https://forum.effectivealtruism.org/posts/Btm562wDNEuWXj9Gk/guarding-against-pandemics
I agree that superlinearity is way more pronounced in some cases than in others.
However, I still think there can be some superlinear terms for things that aren't inherently about speed, e.g. climbing seniority levels or building a good reputation with ever-larger groups of people.