(LW Developer here: there’s a code update ready-to-ship that updates the /reviewVoting page to show the outcome. It’s been a bit delayed in merging roughly because JP and I are in different timezones)
I definitely still stand by the overall thrust of this post, which I’d summarize as:
“The default Recommended EA Action should include saving up runway. It’s more important to be able to easily switch jobs, or pivot into a new career, or absorb shocks while you try risky endeavors, than to donate 10%, especially early in your career. This seems true to me regardless of whether you’re primarily earning to give, or hoping to do direct work, or aren’t sure.”

I’m not particularly attached to my numbers here. I think people need more runway than they think, and I think 6 months of runway isn’t enough for most people. But I’m not sure if it’s more like 12 months or 36.
...
The world is shaped a bit differently than in 2018 though. There are more crypto-rich people around. This has some impact on the strategic landscape, but I’m not sure exactly how it shakes out.
I think it mostly points towards earning to save being more important. We are bottlenecked more on agency, and good ideas, than we are on money. There’s even more money now, so the main value of your money is in giving you flexibility to pursue really high value career paths.
(This might depend somewhat on how longtermist you are. Longtermism is sort of defined as ‘you think the most important things are the things with the worst feedback loops’, which are most bottlenecked on knowledge.
...
One question is whether, if you got to pick one article to summarize this argument, you should go with my article here, or 80k’s similar article. It looks like they’ve updated their post to say “save enough for 6–24 months of runway.” (The comments on this post suggest Ben Todd originally wrote “6–12”. I think 6–12 is clearly too little, but 6–24 seems plausible.)
I haven’t read the 80k article in detail, but suspect it is more thorough than my post here. I do also suspect it could use a better headline/catchphrase to distill the advice down.
I couldn’t easily find that post on the EA Forum and am not sure how to crosspost it for the Decade Review, but it seems worth considering.
I wrote a fairly detailed self-review of this post on the LessWrong 2019 Review last year. Here are some highlights:
I’ve since changed the title to “You have about Five Words” on LessWrong. I just changed it here to keep it consistent.
I didn’t really argue for why “about 5”. My actual guess for the number of words you have is “between 2 and 7.” Concepts will, in the limit, end up getting compressed into a form that one person can easily/clumsily pass on to another person who’s only kinda paying attention or only reads the headline. It’ll hit some eventual limit, and I think that limit is determined by people’s working memory capacity (about 4-7 chunks).
If you don’t provide a deliberate way to compress the message down, it’ll get compressed for you by memetic selection, and might end up distorting your message.
I don’t actually have strong beliefs about when the 2-7-word limit kicks in. But I observed the EA movement running into problems where nuanced articles got condensed into slogans that EAs misinterpreted (e.g. “EA is Talent Constrained”), so I think it already applies at the scale of organization EA had reached by 2018.
See the rest of the review for more nuanced details.
Oh man, this is pretty cool. I actually like the fact that it’s sort of jagged and crazy.
This was among the most important things I read recently, thanks! (Mostly via reminding me “geez holy hell it’s really hard to know things.”)
That is helpful, thanks. I’ve been sitting on this post for years and published it yesterday while thinking generally about “okay, but what do we do about the mentorship bottleneck? how much free energy is there?”, and “make sure that starting-mentorship is frictionless” seems like an obvious mechanism to improve things.
Mentorship, Management, and Mysterious Old Wizards
In another comment you mention:
(One example would be the high levels of self-censorship required.)
I’m curious what the mechanism underlying the “required-ness” is, i.e. which of the following (or others) are most at play:
you’d get voted out of office
you’d lose support from your political allies that you need to accomplish anything
there are costs imposed directly on you/people-close-to-you (e.g. stress)
A related thing I’m wondering is whether you considered anything like “going out with a bang”, where you tried… just not self-censoring, and… probably losing the next election and some supporters in the process, but also heaving some rocks through the Overton window on your way out.
(I can think of a few reasons that might not actually make sense, for either political or personal reasons, but am suddenly curious why more politicians don’t just say “Screw it, I’m saying what I really think” shortly before retiring)
The issue isn’t just the conflation, but missing a gear about how the two relate.
The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.
Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it’s also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.
In particular, I was concretely assuming “torturing people to death is generally worse than lying.” But that’s specifically comparing within alike circles. It is now quite plausible to me that lying (or even mild dishonesty) among the groups of people I actually have to coordinate with might actually be worse than allowing the torture-killing of others who I don’t have the ability to coordinate with. (Or it might not – it depends a lot on the weightings. But it is not the straightforward question I assumed at first.)
Morality as “Coordination” vs “Altruism”
Just wanted to throw up my previous exploration of a similar topic. (I think I had a fairly different motivation than you – namely I want young EAs to mostly focus on financial runway so they can do risky career moves once they’re better oriented).
tl;dr – I think the actual Default Action for young EAs should not be giving 10%, but giving 1% (for self-signalling), and saving 10%.
I recently chatted with someone who said they’ve been part of ~5 communities over their life, and that all but one of them were more “real community”-like than the rationalists. So maybe there’s plenty of good stuff out there and I’ve just somehow filtered it out of my life.
Alas, I started writing it and then was like “geez, I should really do any research at all before just writing up a pet armchair theory about human motivation.”
I wrote this Question Post to try to get a sense of the landscape of research. It didn’t really work out, and since then I… just didn’t get around to it.
Currently, there are only so many people who are looking to make friends, or hire at organizations, or start small-scrappy-projects together.
I think most EA orgs started out as a small scrappy project that initially hired people they knew well. (I think early-stage Givewell, 80k, CEA, AI Impacts, MIRI, CFAR and others almost all started out that way – some of them still mostly hire people they know well within the network, some may have standardized hiring practices by now)
I personally moved to the Bay about 2 years ago and shortly thereafter joined the LessWrong team, which at the time was just two people, and is now five. I can speak more to this example. At the time, it mattered that Oliver Habryka and Ben Pace already knew me well and had a decent sense of my capabilities. I joined while it was still more like “a couple guys building something in a garage” than an official organization. By now it has some official structure.
LessWrong has hired roughly one person a year for the past 3 years.
I think “median EA” might be a bit of a misnomer. In the case of LessWrong, we’re filtering a bit more on “rationalists” than on EAs (the distinction is a bit blurry in the Bay). “Median” might be selling us a bit short. LW team members might be somewhere between the 60th and 90th percentile. (Heh, I notice I feel uncomfortable pinning it down more quantitatively than that.) But it’s not like we’re 99th or 99.9th percentile when it comes to overall competence.
I think most of what separates LW team members (and, I predict, many other people who joined early-stage orgs when they first formed), was a) some baseline competence as working adults, and b) a lot of context about EA, rationality and how to think about the surrounding ecosystem. This involved lots of reading and discussion, but depended a lot on being able to talk to people in the network who had more experience.
Why is it rate limited?
As I said, LessWrong only hires maybe 1-2 people per year. There are only so many orgs, hiring at various rates.
There are also only so many people who are starting up new projects that seem reasonably promising. (Off the top of my head, maybe 5-30 existing EA orgs hiring 5-100 people a year).
One way to increase surface area is for newcomers to start new projects together, without relying on more experienced members. This can help them learn valuable life skills without relying on existing network-surface-area. But, a) there are only so many project ideas that are plausibly relevant, b) newcomers with less context are likely to make mistakes because they don’t understand some important background information, and eventually they’ll need to get some mentorship from more experienced EAs. Experienced EAs only have so much time to offer.
I expect to want to link this periodically. One thing I could use is clearer survey data about how often volunteering is useful, and when it is useful almost-entirely-for-PR reasons. People are often quite reluctant to think volunteering isn’t useful, and will say “My [favorite org] says they like volunteers!”. (My background assumption is that their favorite org probably likes volunteers and needs to say so publicly, but primarily because of long-term-keeping-people-engaged reasons. But, I haven’t actually seen reliable data here)
Congrats!
I just donated to the first lottery, but FYI I found it surprisingly hard to navigate back to it, or link others to it. It doesn’t look like the lottery is linked from anywhere on the site and I had to search for this post to find the link again.
The book The Culture Map explores these sorts of problems, comparing many cultures’ norms and advising on how to bridge the differences.
In Senegal, people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I’ve had a few colleagues whom I would ask yes-or-no questions, and they would answer “Yes” followed by an explanation of why the answer is no.)
Some advice it gives for this particular example (at least in several ‘strong hierarchy’ cultures) is that, instead of a higher-ranking person asking direct questions of lower-ranking people, the boss can ask a team of lower-ranked people to work together to submit a proposal, where “who exactly criticized which thing” is a bit obfuscated.
I agree with this, and think maybe this should just be a top-level post.