I quit trying to have direct impact and took a zero-impact tech job instead.
I expected to have a hard time with this transition, but I found a really good fit position and I’m having a lot of fun.
I’m not sure yet where to donate extra money. Probably MIRI/LTFF/OpenPhil/RethinkPriorities.
I also find myself considering using money to try fixing things in Israel. Or maybe to run away first and take care of things and people that are close to me. I admit, focusing on taking care of myself for a month was (is) nice, and I do feel like I can make a difference with E2G.
Congrats Yonatan! Good luck deciding where to donate! Seems like there are a lot of good options now.
Yeah, ETG seems really strong to me at the moment! What do you think is a good threshold for the average EA in terms of annual USD donations that they can make at which they should seriously consider ETG?
TL;DR: The orgs know best if they’d rather hire you or get the amount you’d donate. You can ask them.
I’d apply sometimes, and ask if they prefer me or the next best candidate plus however much I’d donate. They have skin in the game and an incentive to answer honestly. I don’t think it’s a good idea to try guessing this alone.
I wrote more about this here, some orgs also replied (but note this was some time ago)
(If you’re asking for yourself and not theoretically—then I’d ask you if you applied to all (or some?) of the positions that you think are really high impact. Because if not—then I think once you know which ones would accept you, and once you can ask the hiring managers things like this, your dilemma will become much easier, almost trivial)
Thanks! Yeah, I’ve included that in the application form in one or two cases in the hope it’ll save time (well, not only time – I find interview processes super stressful, so if I’m going to get rejected or decline, I’d like (emotionally) for that to happen as early as possible) but I suppose that’s too early. I’ll ask about it later like you do. I haven’t gotten so far yet with any impact-focused org.
Seems to me from your questions that your bottleneck is specifically finding the interview process stressful.
I think there’s stuff to do about that, and it would potentially help with lots of other tradeoffs (for example, you’d happily interview in more places, get more offers, know what your alternatives are, ..)
That makes a lot of sense! I’ve been working on that, and maybe my therapist can help me too. It’s gotten better over the years, but I used to feel intense shame over mistakes I made or might’ve made for years after such situations, so that I’m still afraid of my inner critic. Plus I feel rather sick on interview days, which is probably the stress.
I have thoughts on how to deal with this. My priors are this won’t work if I communicate it through text (but I have no idea why). Still, seems like the friendly thing would be to write it down
My recommendation on how to read this:
If this advice fits you, it should read as “ah obviously, how didn’t I think of that?”. If it reads as “this is annoying, I guess I’ll do it, okay....”—then something doesn’t fit you well, I missed some preference of yours. Please don’t make me a source of annoying social pressure
Again, for some reason this works better when speaking than in writing. So, eh, … idk.. imagine me speaking?? get a friend to read this to you?
(whatever you chose, consider telling me how it went? this part is a mystery to me)
The goal of interviews is not to pass them (that’s the wrong goal, I claim). The goals I recommend are:
Reducing uncertainty regarding what places will accept you. (so you should get many rejections, it’s by-design, otherwise you’re not searching well)
Practicing interviews. Interviews are different from actual work, and there’s skill to build there. So after interviews, I’ll review stuff I didn’t know, and I’ll ask for feedback about my blind spots. I have some embarrassing stories about blind spots I had in interviews and would never have noticed without asking for feedback. Like, eh, taking off my shoes and walking around the room (including around the interviewer). These are actual blind spots I had, which are absolutely unrelated to my profession of software development.
Something about the framing of “people who interview a lot beat others in getting better jobs”—and motivation to be one of those
Get yourself ice cream or so after interviewing
Important sub point: Positive reinforcement should be for “doing good moves” (like scheduling an interview, or like reviewing what you could do better), and NOT for passing interviews (which imply to your brain that not-passing is negative, and so if your brain has uncertainty about this—it will want to avoid interviewing)
Asking a close friend / partner / roommate what they think could work for you. They might say something like “play beat saber, that always makes you feel good” which I couldn’t guess
Sometimes people spend a lot of time on things like writing cover letters (or other things that I think are a wrong use of time and frustrating; in my model of people, some part of them knows this isn’t a good idea and it manifests as stress/avoidance, though I’m no therapist). I’d just stop doing those things; few things are (imo) worth the tradeoff of having more stress from interviews. It’s a tradeoff, not a game of “do interviews perfectly and sacrifice everything else”.
1.a. and b.: Reframing it like that sounds nice! :-D Seems like you solved your problem by getting shoes that are so cool, you never want to take them off! (I so wouldn’t have expected someone to have a problem with that though…) I usually ask for feedback, and often it’s something like “Idk, the vibe seemed off somehow. I can’t really explain it.” Do you know what that could be?
2. I’m super noncompetitive… When it comes to EA jobs, I find it reassuring that I’m probably not good at making a good first impression because it reduces the risk that I replace someone better than me. But in non-EA jobs I’m also afraid that I might not live up to some expectations in the first several weeks when I’m still new to everything.
3. Haha! Excellent! I should do that more. ^.^
4. You mean as positive reinforcement? I could meet with a friend or go climbing. :-3
5. Aw, yes, spot on. I spent a significant fraction of my time over the course of 3–4 months practicing for Google interviews, and then never dared to apply anyway (well, one recruiter stood me up and I didn’t try again with another). Some of the riddles in Cracking the Coding Interview were so hard for me that I could never solve them in 30 minutes, and that scared me even more. Maybe I should practice minimally next time to avoid that.
Thank you so much for all the tips! I think written communication works perfectly for me. I don’t actually remember your voice well enough to imagine you speaking the text, but I think you’ve gotten everything across perfectly? :-D
I’ll only pounce on amazing opportunities for now and continue GoodX full-time, but in the median future I’ll double down on the interviewing later in 2024 when our funds run out fully. Then I’ll let you know how it went! (Or I hope I’ll remember to!) For now I have a bunch more entrepreneurial ideas that I want to have at least tried. :-3
Congrats Yonatan! Good luck with your work and I hope you stay safe out there!
Thanks for sharing! I occasionally worry that I’d struggle emotionally to go back to E2G/most of my impact being via donations, so this is a helpful anecdatum.
Same… Anna Riedl recommended working for something that is at least clearly net positive, a product that solves some important problem like scaling Ethereum or whatever. Emotionally, the exact order of magnitude of the impact probably doesn’t make a proportional difference, so the motivation will be there, and the actual impact can flow from the donations. Haven’t tried it yet, but I will if I go back to ETG.
I might disagree with this. I know, this is controversial, but hear me out (and only then disagree-vote :P )
Some jobs are 1000x+ more effective than the “typical” job. Like charities
So picking one of the super-impactful ones matters, compared to the rest. Like charities
But picking something that is 1x or 3x or 9x doesn’t really matter, compared to the 1000x option. (like charities)
Sometimes people go for a 9x job, and they sacrifice things like “having fun” or “making money” or “learning” (or something else that is very important to them). This is the main thing I’m against, so if you can avoid this, great. For example, if you’re also excited to work on Ethereum, and they have a great dev community that mentors you and so on
I do think it’s important to work on something that you enjoy
So I do think you should have a bar of “do enough good to have a good time”, but this is a super subjective bar, and I wouldn’t lose track of the ball that is “your motivation” (super underrated btw)
I’ll also note that (imo) most (though not all) companies are net positive. So having a bar of “net positive”, if it works for you emotionally, won’t reduce many options and I think it’s great
(and I recommend sometimes checking if there’s a high impact job that could use your skillset and applying)
(I’m also not against doing high-risk high-reward things, or projects that aren’t “recognized” by EA orgs. Such as open source stuff)
I do personally think I have a bar of not taking harmful jobs, not ruining coordination, things like that.
Oh, and: While you’re working on something fun, learning and making money, I do think (in the typical case) you could see yourself as “preparing” for a potential very high impact job you might have in the future, and I think our community would be better off if people would take this path happily and without guilt. Just don’t forget to check for the high impact jobs sometimes.
I have many many thoughts about this topic and I could go on forever, so I’ll arbitrarily stop here but feel free to ask followup questions or tell me I’m wrong
Haha! Where exactly do you disagree with me? My mind autocompleted that you’d proffer this objection:
If you work for a 9x job, chances are that you’re in an environment where most employees are there for altruistic reasons but prioritize differently so that they believe that the job is one of the best things you can do. Then you’ll be constantly exposed to social pressure to accept a lower salary, less time off, more overtime, etc., which will cut into the donations, risks burnout, and reduces opportunities to learn new skills.
What do you think?
I’m a bit worried about this too and would avoid 9x jobs where I suspect this could happen. But having a bunch of altruistic colleagues sounds great otherwise. :-D
I think I will need to aim for something a bit above background economic growth levels of good to pacify my S1 in the long run. ^.^
Yeah, I think maybe seeing a post like this would have helped me transition earlier too, now that you say so
What were the main reasons for this decision? Was this motivated by how much you could earn in a typical zero-impact tech job? I mean—would you still “quit trying to have direct impact” if your zero-impact tech job wouldn’t leave you with much extra money to donate?
The main reason for this decision is that I failed to have (enough) direct impact.
Also, I was working on vague projects (like attempting AI Safety research), almost alone (I’m very social), with unclear progress, during covid; this was bad for my mental health.
Also, a friend invited me to join working with him, I asked if I could do a two week trial period first, everyone said yes, it was really great, and the rest is (last month’s) history
Hey Yonatan, glad to see you doing this! Just wanted to drop a quick note saying that we’d really appreciate your support at Rethink Priorities! We wrote a post outlining our funding needs and I’d be happy to answer any questions you have.
I want an easy/polite/non-offensive way to say “sorry, this reply is way too long, so I’m not going to read it, so I prefer that you know explicitly that I won’t reply rather than feeling like I’m maybe ignoring you and maybe will get to it, and also this doesn’t mean that you’re wrong, or that a long-reply is the wrong choice, it mainly means I’m trying to prioritize my own life tasks and will be dropping some balls and this is one thing that I think would be healthy for me to drop, and I wish this didn’t have negative social implications for our relationship or for your feeling about yourself, because I was also raised in a culture where a reply like this would be rude and make me feel bad about myself, and I really wish it wasn’t like that [and I have no idea what to do about it, so I thought I’d raise the meta-problem explicitly in a forum-quick-take. [I wonder if anyone will notice me or if this, too, will be too long]]”
“Thank you for the comment. There’s a lot here. Could you highlight what you think the main takeaway is? I don’t have time to dig into this at present, so any condensing would be appreciated. Thanks again for the time and effort.” ??
To push back a bit: unless a reply is reeeeeeeeeeeeeally long, I think it’s good practice to make the effort to read it and respond. Part of putting a post up, I think, implicitly means we should make the effort to engage with people who engage with us (within reason of course). To answer the question though, I think a short reply which thanks someone for the comment and perhaps mentions one of their points without comprehensively responding is also completely fine!
I think a short reply which thanks someone for the comment and perhaps mentions one of their points without comprehensively responding is also completely fine!
I wouldn’t want someone to do that to me (I can try to explain why if it’s unintuitive)
(I’m not thinking about replies to posts that I write, but rather—me joining some other conversation and commenting something)
So like you’re on Twitter or Facebook, you comment on someone’s post, and in response you get a wall of text?
I think that’s normally considered a faux pas on the part of the person posting a long response, and the polite thing for you to do is ignore it
I mean on the EA forum / lesswrong
TL;DR but I appreciate you 🙂
Depending on how long it takes you to read things normally, and how long the texts you’re thinking about actually are, it might be reasonably efficient to ask GPT-4 or something to summarize for you.
I don’t think it would work for the case I’m thinking about, but useful idea, and I better also try it on some cases that I intuitively wouldn’t expect to work
“I generally reply with about a paragraph/tweet, but you’ve written too much for me to do that, is there a specific point I can respond to that’s most important to you? Thanks for your time”
Thank them for the comment, and then link to this thread?
You, sir, get meta points
Btw, I’d generally recommend always at least skimreading a thing before you put it down, IME it leads to much better outcomes than just not reading it at all.
With all due respect, TL;DR
(“but not obviously bad”)
1. The amount I work out is not constrained by willpower anymore; it is constrained by how much my body can handle and how much free time I have
2. The best workout game I found is “thrill of the fight”, I have some tips before you try it. Also, not everyone will like it
3. Trying a game for ~10 minutes isn’t enough to “get it”. Most games in VR aren’t polished enough and don’t have a good tutorial; it will take more time to decide if you like them
4. I wish someone would have told me this sooner
5. Still unclear: Can I build muscles using VR? So far seems promising, but I’m less certain about this part
6. I only have it for 2 weeks, so maybe you’ll think I’m going to grow out of it, but I don’t think so myself. It’s literally playing games

AMA
Yesterday I was feeling a bit bad (headache) but still tried to play as much as I could.
I recommend not downloading games that are not workouts. I downloaded such a game and got addicted to it instead, spent a long time on it (it didn’t make me tired, so I could play for a long time), and I didn’t get the health benefits from it.
(Why doesn’t anyone give this warning?)
So I did end up getting a Quest 2 based on this advice.
The game, “Thrill of the Fight”, as you recommended, is exactly as you say! It is fantastic for fitness, and many other VR games seem promising.
Not exactly scientific, but my sense is that this is a general sentiment (maybe there’s some subgroup that doesn’t benefit and this is hard to see).
It’s not obvious you could get such fitness benefits from VR.
I think the information here has an incredible signal-to-noise ratio, and it’s really generous and transparent of you to share it.
Off topic, but Ian Fitz, the dev who built “Thrill of the Fight”, seems like a great guy.
He is focused on a great product and seems to embody a lot of virtues (transparency, technical detail, being present in his community).
Here are some tidbits from him:
Controller physics and limitations (interesting technical detail)
Game scaling mechanics
The “official guide”
At least in my limited experience, this is a huge amount of engagement and attention to detail.
Can you give a little more information on the games/apps you found problematic?
As you know, some games such as “Thrill of the Fight” are great exercise. Other games in this “healthy” class might include “Beat Saber” or “Pistol Whip”. These might provide great exercise too, but I’m unsure. Was the above one of the problematic games, or was it something else?
Also, have you tried “Supernatural” or other fitness apps?
This one is unusual: its price is a subscription ($10/month, with a free trial).
I like this because it gives the developers a strong incentive to keep me addicted for a long time.
I played it a few times (the FitXR boxing mode specifically) and I like it.
I’d rate the workout as “medium to hard”, and I wouldn’t be surprised if it becomes “as hard as I can take”
These were not fitness games, just some other non-fitness game that got me addicted. I assume that telling you its name would be a small info hazard because you’ll be curious to try it, but if you message me, I’ll give you the name
Didn’t try it
Interesting! How often and for how many hours per week do you work out now in VR (and how many hrs not in VR per week now)? And how often and for how many hours did you work out per week before?
Trying to answer your question: I estimate I do 30-60 minutes per day in VR, plus about 5 minutes without VR (doing TRX or pullups once in a while)
Before VR: There is a ton of variance. At good weeks I’d do 2-3 rollerblade trips per week (each is several hours) plus ~10 minutes per day of something like pullups (which is also something I can say more about) [I haven’t been in a “good week” for at least 3 months]
This question seems a bit “wrong” because:
The intensity of the workout can be extremely high
2-3 minutes in this specific VR game were enough to get my brother extremely exhausted for about half a day
My activity tracker (Oura ring [note: it is optimized for sleep tracking, not workout tracking]) rates many VR sessions as “exceeds the height of our chart”; it is literally off the charts. Note this is higher than the hardest parts of rollerblade trips that I do with groups that are better than me and really like going up hills, followed by going up even more hills.
My subjective experience matches this. I recognize when my body goes into extremely high pulse levels, and I can say more about sore muscles and so on
I am already able to do 3-5 rounds of this crazy game (3 minutes each), which would be unimaginable a few weeks ago
This means (A) I’m getting more fit, but also (B) the workouts are really short. I can currently tire myself out completely in about 15 minutes, which makes “how much time I spend working out” the wrong metric, I think.
[Note you don’t have to play this crazy game, or you can play it more calmly, if all of this sounds too extreme. But to me it is exciting]
The only reason I’m not working out right now is in order to be productive. I literally need willpower in order to not-work-out, this is crazy. I’m saying this in case you’re trying to figure out if I’m ABLE to get enough sports out of VR
Thanks for the info! Yeah intensities of workouts matter too.
The amount I work out is not constrained by willpower anymore...I literally need willpower in order to not-work-out, this is crazy.
The best workout game I found is “thrill of the fight”
This is really compelling.
I’m pretty sold, more than from any product ad I can think of! (also thinking $FB might be undervalued?)
Can you elaborate a bit more (in a few sentences) on the setup, e.g:
What headset or specific electronic gear do you use or recommend?
How much computing power is needed?
What physical space or equipment do you need (can you do it in an empty room or do you need a treadmill or something)?
My setup is “oculus quest 2” (amazon link) (I have the 128GB version though it doesn’t seem like an important decision) with only the gear that comes out of the box, no need to even connect it to a computer.
I think I might get some extra items like a fancy head strap, but for now I’m using the default one and I’m pretty happy with it.
Physical space: you need a room with space.
How much space: the game requiring the most space that I’ve seen so far is “thrill of the fight”, which strongly recommends at least 1.5 x 2 meters (and I’d add a bit more in all directions to make sure you’re not too close to punching a wall by mistake).
Treadmill or something: I don’t use anything like that, but also remember that there is an element of personal fit here, some people like treadmills, I don’t.
I’d like to add a few more things:
If I were you, I’d consider this “a promising direction to explore” and not “problem solved”, because
I’ve only had this for 2 weeks; I’m not an “expert”. (But I’d hope that my friends would recommend this to me without waiting to be “experts”, so here I am)
Lots of VR games are not polished, you might have to look for some that you like
This is an important mindset to keep in mind. As I like to say, “keep your expectations low and you’ll be positively surprised”
Does boxing in VR (“thrill of the fight”) sound like something you could potentially enjoy? This game is an outlier in how much of a good workout it is, as far as I can tell.
You could also try playing at a friend’s place for a few hours before buying (or if none have it, then you’ll find yourself showing it to them. :) )
You’ll have to spend some more money on games. Each game costs about $8 to $40
For reviews, my friend recommends: https://uploadvr.com/
A Telegram bot to tell you when game prices drop: https://t.me/questStoreWatch
I recommend buying a few games at full price of course
I’m pretty happy this is useful to someone, feel free to ask things. :)
Thanks! This is really informative.
5. Still unclear: Can I build muscles using VR? So far seems promising, but I’m less certain about this part
Stronger By Science is my go-to for the technical and academic questions on muscle-building. I think they would probably say no to building muscles with VR. They would still think working out using VR is a good idea, though, if: 1) it helped you build an exercise habit; 2) you develop proficiency with specific movements; and 3) you become more body-aware—all without getting injured. That is always a huge problem in the gym, especially for new people—there’s a lot of learning the hard way, and the resulting injuries set people back sometimes by months or years.
Here’s their summary on strength-training: https://www.strongerbyscience.com/complete-strength-training-guide/
Thank you very much! Even just having a go-to for this topic is helpful for me
2. The best workout game I found is “thrill of the fight”, I have some tips before you try it. Also, not everyone will like it
What are your tips?
Configure the game (the “guardian”) to keep some space from things like walls, so you won’t punch them by accident
Don’t straighten your elbow completely when you punch (keep it a bit bent), otherwise you might damage it in real life
For the same reason, don’t do strange bad things with your posture, such as bending your spine sideways
Generally bad pain is bad, you know.
I can say more about this if it would help
Consider doing the first round for only 20 seconds or so to avoid becoming overly exhausted without noticing (this has happened to one person I saw).
How to stop?
You can just take your headset off. If you’re like me, you’re totally going to forget this, but you can.
You can also “hug” your opponent for ~5 seconds to end the fight.
Consider skipping this if you want to investigate the game’s mechanics completely by yourself, hpmor style, but here goes:
I’d start by fighting the “dummy”. You can see your stats on the right of the dummy, including how much damage your last punch did. Then go to “fight”
The game cares a lot about how strong you punch
Many punches you do will be too weak and do 0 damage
To see how much damage you did, check the color of the punch when it hits (you’ll see)
blue = zero damage
yellow = nice damage
red = a ton of damage
You can dodge, including by ducking (great workout if you ask me)
You can block. If your opponent’s punch hits your glove before it hits you, it will do zero damage to you
Professional boxers on YouTube say that this game is reasonably realistic (even if not perfect); I’d take that as a prior for most uncertainties that I have (mainly around what technique to use)
Consider starting at “easy”
There are even more tips in YouTube tutorials, but I would personally prefer to only be told the ones I wrote here before starting to play
My brain was all like “omg this person is coming to hit us! nooo!!”
I fixed this by doing one round where I let the opponent hit me as much as he wanted, and my brain was indeed surprised that nothing bad happened in real life, but then it let me play
This is the most polished everyone-likes-it game as far as I can tell.
The harder difficulties are not only cognitively harder, they are also better workouts.
I recommend starting from the tutorial (and if you show the game to anyone: Don’t explain it to them, just show them the Tutorial)
Intensity: I rate it as “medium workout”
Similar to Beat Saber, but with guns (and less overall polished).
Like most games, it doesn’t have a reasonably good tutorial (though I’d still do the one it has (nobody understands the “armor”, don’t worry)), and if a friend started playing it, I’d give them a few pointers. Tell me if you want those
Like this one:
I haven’t tried almost any of them
Upvoting Is an Act of Community Building
It probably helps people feel welcome to the community.
I’ve been mostly a lurker around international EA activities for about 5 years, feeling that all the orgs have some wow factor that I could never touch. I think this mostly changed because (A) I met some people in EAG (they were actually real people, which really surprised my brain), and (B) I got brave and posted something, and it got 70+ upvotes pretty quickly.
I know, this is stupid, I’m supposed to pretend not to care about upvotes, whatever. Looking back, I think this might have been pivotal for past-Yonatan’s sense of being accepted into a community, of having someone in the important EA community care at all for.. I don’t know, my attempts at helping? About me? And it led me to, well, behave differently.
Looking at myself now, I am posting and commenting a lot, I have two more drafts almost ready to go (one for CEA! They asked me for something! Unimaginable if you’d ask me 6 months ago. I tried acting cool and said I’d be happy to help, if you’re curious. Hey Ben if you’re reading this! Ok I’m off topic).
Anyway if you’re reading my shortform, you now know my “dark secret” of caring a ton about upvotes, and I hope that my “coming out” will remind you that it’s probably true for many others too.
For myself, I try, when I remember, to upvote stuff a lot, only if I actually like it of course, but I try to be somewhat lighter on the trigger, especially with community members who are not yet so involved
Epistemic status: I’ve been to 2 EAGs, both were pretty life changing, and I think my preparation was a big factor in this.
Take ~5 minutes to try to imagine positive (maybe impossible) outcomes. Consider brainstorming with someone.
For example “org X hires me” or “I find a co-founder” or “I get funded” (these sound ambitious to me, but pick whatever’s ambitious to you).
Bad visions in my opinion: “meet people”, “make friends”, “talk to interesting people”. Be more specific, for example, if you’d meet your new best friend at EAG, what exactly would you do together? What exactly would you talk about? Better would be “Looking for someone to co-work in VR 3 times a week”, if that’s what friendship means to you.
Is anyone having trouble with the vision section? Let me know, I’ll try to help
“how can I help people” + “how can people help me”.
“I want to hire senior backend developers” is specific.
“I want to meet people” is pretty bad.
This is where your vision goes.
If you write your wish here, someone might make it come true! (aka Playa Provides)
Mark based on how you want people to find you when they search for who to network with.
I think that if you can learn something from a forum post and/or YouTube, like “what cause areas exist” or “what’s the cutting edge in animal welfare”, it’s a shame to waste potential 1-on-1 meeting time on those things (except in special cases, like if you already went over all the posts).
Be specific with your request (same link).
If you’re having trouble being specific, and your only agenda is to “talk”, consider going back to the “vision” section.
I did it both times and linked to my swapcard profile, and many people who care about the same things I care about knew that I’d be really happy to talk to them
Just like you wouldn’t schedule a meeting to ask someone the names of the U.S. states (because you can check Wikipedia), I’m against scheduling meetings to ask something you can check on the EA Forum (or LessWrong).
For example, if you’re curious what’s new in global health and wellbeing, check the “global health and wellbeing” tag, and sort by “new”.
(Maybe after checking the tag you’ll still want a meeting for some reason, but I’d at least check the tag first).
I wouldn’t do: Ask an org if they’re hiring without checking the 80k job board (and/or the org’s website, and/or the org’s tag).
Seems ok: Reaching out to someone from an org, saying “I see you’re hiring, I’m considering studying for 3 months and then applying, do you think it would be better for me to apply now and if I don’t pass then study and apply again in 3 months?”
Seems great: Asking this in Swapcard, maybe they can just reply in 5 seconds with “yeah sure apply, no problem to repeat after 3 months”? Or maybe they’ll say it’s better to meet.
My story from my first EAG:
Amazing! Things I would add for newcomers:
(If correlated with your goals) Reach out to speakers, or filter for 10+ years of experience, because that’s usually a good filter for the people you can learn the most from
Ask friends/EA staff who have already been to conferences if they can recommend who to talk to
Booking a motel close to the conference is usually very comfortable (use these filters on Booking: less than 1 mile + over 8 rating + best price-to-rating ratio)
Staying with other members of your community is fun for feeling comfortable at the conference, and for having someone from ‘back home’ to share your experiences with (however, this might make it a comfort zone that keeps you from getting to know other people)
Usually there are small events 1 (2?) days before, and bigger events up to ~3 days after.
But that’s just my vague rule of thumb. It’s better to try to find the groups where these events are organized
More (smaller/technical stuff):
Swapcard data may be available in Google Sheets, which is much more convenient to filter (ask the EAG team)
You can manage your availability in Swapcard, for people scheduling with you
You can narrow down your availability to your preferred slots and only then, if they get taken (or if someone doesn’t find a good slot), open more slots.
How to manage your availability: “My Schedule” --> “My meetings” --> “MANAGE AVAILABILITY”
Schedule your first meetings earlier in the event, because you might discover you want to meet more people (or people will discover you), and it’s better if you still have availability then
Swapcard will let you schedule 1-on-1s at the same time that you have events (boo)
Don’t go partying in the evening if there’s EAG the next day—these events are surprisingly exhausting, and sleeping well seems to be key, at least for me
Also, sleep really well before the event, and get flight tickets that will let you do that (if you’re anything like me regarding sleep)
After EAG, there will usually be an after party, plan your schedule accordingly
Busy EAs and their preferences for small talk if you have a question for them:
This talk is also pretty good!
We just set up a tiny production system that helps coordinate buses for refugees from Ukraine using WhatsApp, with a UI in Google Sheets.
We built it on Tuesday, and already on Wednesday it was used to coordinate several buses.
At least one person (from the overqualified team I worked with) would probably pick up an EA software project if they knew which one.
This is one of the reasons I asked people to pitch ideas to EA CTOs (a post that I wish got a lot more attention)
A leading career option for me is joining them, and among other things rebuilding their tech (which is originally from 1991).
Thoughts? (consider forwarding this question to people involved in meta-science, that would help me!)
I specifically think:
My professional interests and skillset are very well suited to this kind of thing.
I have some vision for improving arXiv, for example “let people tag an article as your-code-doesn’t-run” (and upvote existing tags). I hope to disincentivize people from publishing nonsense (knowing that others will see whatever tag becomes most upvoted on their article).
The dream would be to create a mature karma system like Stack Overflow’s, where people could earn reputation for things that help the community, and not only for publishing. Of course this is a very complicated thing to do, but arXiv is in a perfect position to do it.
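To make the tagging idea concrete, here is a minimal sketch of the tag-and-upvote mechanic described above. All names (`ArticleTags`, `add_or_upvote`, `top_tag`) are hypothetical; adding a new tag counts as its first upvote, and the most-upvoted tag is what readers would see first on the article.

```python
from collections import Counter

class ArticleTags:
    """Community tags on a single article, with upvote counts."""

    def __init__(self):
        self.votes = Counter()  # tag -> upvote count

    def add_or_upvote(self, tag: str) -> None:
        # A brand-new tag starts at 1; an existing tag gains a vote.
        self.votes[tag] += 1

    def top_tag(self):
        # The tag shown most prominently is the one with the most upvotes.
        if not self.votes:
            return None
        return self.votes.most_common(1)[0][0]
```

A real version would obviously need moderation and one-vote-per-user logic, but the core incentive (authors knowing the top tag is publicly visible) only needs this much.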
I’m not an academic, maybe I have no idea what I’m talking about
Having EA run arXiv sounds potentially useful, maybe?
Is advancing science bad because it will help get AGI sooner?
Made this into a regular post, especially because it’s becoming more real
I’m imagining someone “with a profession”
(a mathematician / developer / product manager / researcher / something else) who’s been following AI Safety through Scott Alexander or LW or so, and wants to do something more serious now
To be clear, I am absolutely unqualified to give any advice here, and everyone is invited to point out disagreements.
I did this for ~3 months.
This is not my normal “software career” advice (of which I’m much more confident)
I’d rather be opinionated and wrong than pile on so many disclaimers that my words mean nothing. You’ll get my opinion
There is no clear “this is the way to solve AI Safety, just learn X and then do Y”.
Similar to how, maybe, there’s no “this is the way to solve cancer, just learn X and then do Y”, just much worse. With cancer we have, I guess, 1000 ideas or so, and many of them have already cured/detected/reduced cancer; at least with cancer there are clear things we want to learn (I think?). With AI Safety we have about 5–20 serious (whatever that means) ideas, and I can’t personally say about any of them “omg, that would totally solve the problem if we could get it to work”. Still, they each have some kind of upside (which I can make a cancer-research metaphor for), so for some definition of progress, they would make progress
Even worse, some solutions seem (to me and to many others) to cause more harm than good.
Historically, people who cared about AI Safety have pushed AI Capabilities a LOT (which I think is bad)
Even worse, there is no consensus.
Really smart people are discussing this online but not coming to clear (to me) conclusions, and they have (in my opinion) maybe the best online platform for healthy discourse in the world (lesswrong)
And so, looking at this situation, it seems to me like you have a choice.
You can either try to figure out something better than everyone else has (which includes also “figure out who of all these people is correct, without knowing who to trust”),
or you can choose some project to join without understanding why they’re doing whatever they’re doing (beyond hearing the pitch and nodding, or seeing they got a lot of upvotes somewhere)
If your path is going to be “just go do something”, then
my biggest piece of advice (or more realistically, my request), is “make sure you don’t cause damage”, and specifically make sure you know what the “Unilateralist’s Curse” is. It basically means that if we have 1000 people and they can all take some potentially-dangerous action, then the most happy-to-take-risks person (or maybe, the least-aware-of-risks person) is the one who’s going to take that action.
This includes suggestions like “cause an AI accident so people will be afraid” (which I hope nobody will do and I’m afraid that as people join the movement one of them will do that).
On a meta level, if you invite lots of risk-taking people to learn a bit about AI Safety and then they go do something bad, that is just as dangerous and I have no idea how as a community we’re going to avoid that long term. But in the bottom line, please don’t make the situation worse, that is really important in my opinion, and also, not-making-things-worse turns out to be surprisingly hard
If you’re looking for someone to trust about whether to join some Project X and you care about my opinion, I’d point you to the opinions of Yudkowsky and Nate (update: maybe also Zvi): did they write something about Project X? In other words, I’m pointing at them as trustworthy-in-my-(current)-opinion. Some people would say it is bad for me to do that rather than telling you to study AI Safety for 3 months and form your own opinions. I agree (if you’re interested in studying), but if not—well, I think me writing this is making the situation better and not worse (and I also think my message here won’t brainwash anyone)
I also endorse taking a job that 80k recommends (consider getting coaching with them, signing up to their longtermist-census, and/or looking at the “top recommended” jobs in their job board (update: they might have removed that option 🙀)). This is somewhat the best we have as a community, and if you talk to 80k you’re unlikely to miss a “big (job) win” that you’d find otherwise, I think.
If you choose the path of “figure out something smart that nobody else has”… (this is my choice)
First of all, the field is flooded with people trying to do that. Almost nobody manages (in my opinion), so the priors are bad. I think this is important to acknowledge just like “most startups fail” is important to acknowledge. So, given these low priors, how would you approach the problem?
I’d encourage that you do something “high variance”, where if you’d do well, you might be better than everyone else in the field. [disclaimer about don’t-do-damage]
For example, if something seems obvious to you but nobody else is doing it—consider doing that.
For example, if something seems super cool and fun to you—consider doing that.
Prompt: “Is there something you like doing for fun that others consider work?” (I think Alex from 80k came up with this, I really like it)
For example, if you have a strange unusual talent—can you use it?
Examples from myself for places that I think I might have a big advantage over others:
It seems obvious to me that “getting feedback quickly and often” is super important and nobody is doing that. Everyone’s just reading books and stuff, this seems like an obvious mistake to me, so I’m doing it differently (not that I’m not reading at all).
Note this might go really well or I might fall on my face, but worst case, I do 0 impact but also 0 damage
Infosec around AI Safety seems inspiringly terrible, and I notice this because I did infosec for the Israeli military & government, which are orgs that actually care about their infosec (as opposed to most of the industry which seems to me to want to “look like they care about security” or something like that) [since then, EA started taking AIS Infosec more seriously, unclear what the situation is today. Specifically there’s an infosec reading group and an 80k article]
It seems to me like everyone’s dropping the ball on “having productive conversations about AI Safety over video”, and specifically—people (like me) who don’t live in a hub full of AI Safety people don’t have enough people to talk to, so I’m attempting to set something like that up. Consider joining! See #hangout for more [update: I sort of gave up on this]
I have priors about how to learn software development (or product management), and I’m using them to learn AI Safety. This is mainly around “feedback”
Seems like everyone’s forgetting to ask “if this research agenda would work amazingly, would it solve the problem?”, and are instead pursuing agendas that seem interesting/tractable. [I am probably totally wrong about this, don’t actually trust me please, I’m just trying to point out ways that I notice I think differently from most, and my attempt to embrace those (without causing damage) as opposed to taking the same path as everyone]
Apparently new people keep coming up with the same bad ideas again and again, there’s at least one post about that. The meme is that if you ask “why don’t we just do X” then you’re missing something. I still think it’s worth asking (for example, in #no-dumb-questions which won’t spam anyone), but expect the answer to teach you something and make you smarter, don’t expect a “you solved it, we’ll call the president now!” moment
Look after your mental health somehow, and/or quit before your mental health gets too bad
I play VR games, I wonder if people would like to join sometimes [update: no more. and also, not doing a great job with my own mental health]
You can apply to LTFF even with 0 experience, I think. Worst case you can apply again later, I think. [update: Nonlinear has a form to apply to tons of grants at once. but still, I think most people are stuck on “I’m not worthy of applying” or so, which I think is best addressed by trying, and if you’re rejected, then try again later]
Jobs pay money, consider applying to a job
Not having enough money is bad. I don’t think it’s good to encourage people to work without being paid. You have my support to go take a normal job and get paid.
Beware the meta trap
For example, “we will solve ai alignment by accelerating ai alignment research” (me: but.. which research? the research that accelerates alignment research that accelerates alignment research? at some point you’ve got to accelerate something object-level)
For example, “I will help by helping others help” (who will you help? are they doing something important? are they maybe working on helping others helping others help others recruit people who can help more people?)
For example, “I will investigate timelines so that we can redistribute the funding in a way that will make sense given the time we have left” (but.. are there any concrete research projects that you think are maybe getting too-much or too-little funding given something you might maybe discover about timelines? maybe, for example, everything good is already getting funding? or maybe only bad things are? what is even good?)
This is different if a funder (OpenPhil?) explicitly asks for help with timelines, and if you are in the path of “trust others” and you trust that if OpenPhil asks for something explicitly then they know what they’re talking about.
I am not saying that going meta is always bad and I do think some of these projects make sense, but it’s really hard to figure out which ones, and it seems to me like some people who want to help with alignment and go directly meta are trying to solve very wrong problems.
I am not against doing high quality product management, including user research, including figuring out who the important users are, and solving those people’s problems, startup-style. (but I’d only recommend this path to few people)
Beware the “reading forever” trap
If your algorithm is something like “while there is more to read that seems important, read that thing”, then you will never exit this loop (I think). Pick your solution to this problem, but don’t ignore it
If you start reading about how to solve the “reading forever trap”…
(I’m tired; maybe I’ll continue this later. Remember that I have no idea what I’m talking about, and that other, more qualified people have written about this too)
I’ve originally posted this in here, in the AI Alignment Slack. If you’re interested, I put a lot of my journey (my questions, my solution ideas, and so on) in the same channel.
Since then I’ve become more pessimistic and am leaning away from trying to solve AI Alignment myself. Maybe I’ll write about that too.
If I’d point you to one more resource, it would be AGI safety career advice by Richard Ngo.
Submit it totally anonymously
Because they don’t know. “Why don’t people apply?”—they ask. But this is basically a blind spot: If nobody gives them feedback, they won’t know.
You’re better fit for E2G?
The teams are too small?
There’s no local team in your area?
This is valuable information.
If enough people share this, it will save someone a user research project. Please be one of the people who shares; help them understand what’s going on!
From a software engineering point of view there are a couple of things that would potentially put me off applying to an EA-org:
Lack of mentorship (this is somewhat covered by your small-teams point, but it’s the specific part I think of). I’m sure this isn’t true for all EA orgs, but the appeal of e.g. FAANG is that I am very confident I’ll be able to get mentored by engineers at the top of the field, who likely have a lot of experience mentoring, good structures for mentors, and are generally empowered to be great at that.
Small scope/scale for projects, particularly for frontend work. In SWE, a big part of your career capital comes from being able to say you’ve worked on projects that are really big and/or really fast. There are plenty of fullstack jobs at EA orgs around at the moment, but a lot of them are basically “look after a website” or “build an app that will serve a niche community”.
I think there has been discussion before about SWEs feeling like EA orgs don’t offer them enough career capital, but I can’t remember where and it doesn’t appear to have updated me much in favour of the EA orgs.
Scott Alexander had a really hard time evaluating donation causes
EA has a ton of articles about how to evaluate charities
What’s going on?
Should we stop writing these guides?
Do we need better guides?
Do we need some measure like “would this guide make Scott Alexander’s work easier”?
Applicants to ACX grants were almost by definition not working on problems with well-established solutions (in EA or otherwise), eg nobody was applying for an ACX grant to distribute bednets. That made the grants more difficult to evaluate than many popular EA causes, and also made it hard to rely on previous work.
The concern I’m raising is something like “our articles only help for [something like] well established solutions”. Or in other words, there is no situation where [someone is able to vet an org and this was only true because of reading the article]
The other example I have in mind is trying to help people in Israel find an impactful job, especially in tech. We can offer them 100 pages of theory on how to vet companies, but almost no concrete companies to recommend
If only someone was working on how to evaluate hard to evaluate projects
Ref for others:
From my limited experience, it really helps to get recommendations.
If you think I am useful to EA, or if you have something similar to say that I may share with grantmakers, please comment here, or email firstname.lastname@example.org, or DM, or something.
Thx ♥️ 🐈
Having worked with Yonatan on various community-building efforts, and discussing many technical and nontechnical projects with him, I’m very optimistic about the value he can give if he has the resources and freedom to do so. Happy to serve as a reference.
He is very aligned and happy to sacrifice his time, money, and credit to do more good.
Very helpful to EA community members and organizations, and makes sure to be very pleasant and accessible.
Thinks more clearly than most about ways of doing good, and acts based on the resulting logical conclusions, even if controversial.
Evidently, very open, honest, and direct.
Great quick-and-dirty approach to starting new projects.
Very independent, and knows how to solicit design requirements and quick feedback.
He may not be the most ridiculous EA in Israel, but he is close 🐱🚀⛸🥽
To work with him effectively long-term, one should have very open communication and give him the freedom to pursue the goals and directions he believes in.
(Yonatan, I’m curious as to whether/how much you agree with these 😊)
Grantmakers are welcome to ask me for a reference. Yonatan is aligned and very dedicated, and is both knowledgeable about and helpful to many software engineers (see reviews here). He’s also been directly helpful to us with recruiting, and I’ve referred him to multiple EA orgs who are trying to hire software engineers.
Meta: This feels like something emotional where, if somebody looked at my plan from the outside, they’d have obvious and good feedback, but my own social circle is not worried or knowledgeable about AGI, and so I hope someone will read this.
It would be my best personal fit, running one or multiple software projects that require product work such as understanding what the users actually want.
My bottleneck: Talking to actual users with pain points (researchers? meta orgs with software problems? funders? I don’t know)
I think I have potential to grow into a role where I explain complicated things in a simple way, without annoying people. Advocacy seems scary, but I think my experience strongly suggests I should try.
Usually when I look closely at a field, I have new stuff to contribute. I do have impostor syndrome around AGI Safety research, but again, probably people like me should try (?) [I am not a mathematician at all. Am I just wrong here?]
What model specifically: If you’d erase all information I heard about experts speculating “when will we have AGI” and “what’s the chance it will kill us all?”, could I reinvent it? Could I figure out which expert is right? This seems like the first layer, and an important one
My actionable items:
Talk to friends about AGI. They ask questions, like “can’t the AGI simply ADVISE us on what to do?”, and I answer.
We both improve our model (specifically, if what I say doesn’t seem convincing, then maybe it’s wrong?)
I slowly exit my comfort zone of “being the weird person talking about AGI”
Write my own model, post it for comments
Maybe my agreements/disagreements with this?
Seems hard and tiring
Give me the obvious stuff
It sounds like you’re a fairly senior software engineer, so my first thought is to look at engineering roles at AI safety orgs. There are a bunch of them! You’ve probably already seen this post, but just in case: AI Safety Needs Great Engineers.
It sounds to me like you’re concerned about a gap between the type of engineering work you’re good at, and the type of engineering work that AI safety orgs need. This is something I’ve also been thinking about a lot recently. I’m a full stack developer for a consumer product, which means I spend a lot of time discussing plans with product managers, writing React code, and sometimes working on backend APIs. Whereas it seems like AI safety orgs mostly need great backend engineers who are very comfortable setting up infrastructure and working with distributed systems, and/or machine learning engineers.
This suggests 2 options to me, if you want to stay focused on software engineering rather than research or something else:
Find a way that you can help using your existing skills. This sounds like your option A above, but to me option A reads like you want to work independently as a contractor or something? Idk, it sounds like you’re not too sure what it would look like in practice. But there are AI safety orgs that have job postings for full-stack or frontend/UX engineers. If this lines up with your skillset and personal fit, this could be a really good option. One example is Ought. They’re unusual in the AI safety space in that they’re building a user-facing product, so all of the frontend skills that apply at any other startup would apply here. I know other AI safety orgs have frontend roles too, but I think they’re more focused on building internal tooling.
Build up your backend/infrastructure/ML skills enough that you could fill one of the more common AI safety engineering roles, like this one. I don’t know how easy it is for a great frontend engineer to become a great backend/infra engineer. I expect it’s MUCH faster to make that leap than it is for a complete novice to become a great backend engineer. But how quickly you can do it depends on a lot of things like your existing experience, and how great a learning environment you’re able to put yourself in for learning the new stuff.
I’m personally trying to decide between these options right now. The first thing to check is whether you feel excited at all about option 2. If ramping up in those new areas sounds super unpleasant, then I think you can rule that option out right away. But if you feel excited about both options and think you could be successful at either (which is the situation I’m in), then it’s a tougher question. I’m planning to talk to a bunch of AI safety folks at EAG in a few weeks to help figure out how to maximize my impact, and I hope to have more clarity on the matter then. I’ll update this comment afterwards if I have anything new to add.
What I’m good at:
I think my experience is probably sufficient to apply to Anthropic or Redwood or any other place that doesn’t need an ML background. Including my background in backend/infra. I did many “tech lead” roles where I was basically in charge of everything, so I’m up for that.
What I enjoy:
The thing I would be missing, I imagine, is the social interaction or something like that.
I don’t think I’d enjoy sitting on a hard problem for weeks/months alone, I imagine I’d be sad.
I don’t want to relocate (at least not a full-time relocation), so Anthropic is off the table
Why do you think that Anthropic or Redwood etc would be missing social interaction? I wouldn’t have assumed that… on the Anthropic post I linked they mention that they love pair programming.
Anthropic and Redwood will hire you with zero ML experience so please don’t spend time learning ML before applying
[I think this deserves its own comment]
Yes, good point, I shouldn’t have included ML in the list of things to learn in option 2.
> Give me the obvious stuff
I expect that people that read shortforms on the EA forum are not those that would give useful advice, and I think there are a lot of people that would be happy to give advice to someone with your skills
Related, “my own social circle is not worried or knowledgeable about AGI”: might it make sense to spend time networking with people working on AI Safety and getting a feel for needs and opportunities in the area, e.g. joining discussion groups?
Still, random questions on plan A as someone not knowledgeable but worried about AI
Why product work only for meta orgs? Random examples that I know you know about: Senior Software Engineer at Anthropic, and they were looking for someone to help with some dev tooling. They seem to require product skills / understanding what the users actually need. (Not asking about Anthropic in particular, but non-meta in general)
What would make it easier to clear the bottleneck of talking to actual users with pain points?
What happened to the idea of internal prediction markets for EA orgs? I think it has potential and an MVP could be simple enough, e.g. I received this proposal for a freelance project a few days ago from a longtermist (non AI safety) EA org that made me update positively towards the general idea
we want an app that lets people bet “[edited] bucks” via slack, and then when the bet expires a moderator says whether they won or lost, and this adjusts their balance. If this data was fed into airtable, I could then build some visualisations etc
This would involve a slack bot/ app hosted in the cloud and an airtable integration
Let me know what you think! I’m super excited about this for helping us hone our decision making over time i.e. getting everyone in the habit of betting on outcomes, which apparently is a great way to get around things like the planning fallacy :D Also the /bet Slack interface seems very low friction & would be very easy for people to interact with
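The proposal above (place bets in “bucks” via Slack, a moderator resolves them, balances adjust, and flat rows get pushed to Airtable for visualisations) is mostly ledger bookkeeping. Here is a minimal sketch of just that core, under my own assumptions: all names (`BetLedger`, `place_bet`, `resolve`) are hypothetical, I assume winners get their stake back doubled, and the Slack `/bet` handler and Airtable sync would wrap around this.

```python
from dataclasses import dataclass, field

@dataclass
class Bet:
    user: str
    amount: int
    claim: str
    resolved: bool = False

@dataclass
class BetLedger:
    starting_balance: int = 100
    balances: dict = field(default_factory=dict)
    bets: list = field(default_factory=list)

    def place_bet(self, user: str, amount: int, claim: str) -> Bet:
        balance = self.balances.setdefault(user, self.starting_balance)
        if amount > balance:
            raise ValueError("not enough bucks")
        self.balances[user] -= amount  # escrow the stake until resolution
        bet = Bet(user, amount, claim)
        self.bets.append(bet)
        return bet

    def resolve(self, bet: Bet, won: bool) -> None:
        # A moderator decides the outcome; winners get their stake back doubled.
        if bet.resolved:
            raise ValueError("already resolved")
        bet.resolved = True
        if won:
            self.balances[bet.user] += 2 * bet.amount

    def rows_for_airtable(self):
        # Flat rows, ready to push to an Airtable table for visualisations.
        return [(b.user, b.amount, b.claim, b.resolved) for b in self.bets]
```

The MVP really can be this simple; the friction-sensitive part is the Slack slash-command UX, not the bookkeeping.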
Not sure if any of this helps, but I am really excited to see whatever you will end up choosing!
Related, “my own social circle is not worried or knowledgeable about AGI”
I don’t think it will help with the social aspect which I’m trying to point at
and getting a feel for needs and opportunities in the area [...] What would make it easier to clear the bottleneck of talking to actual users with pain points?
I think it’s best if one person does the user research, instead of each person like me bothering the AGI researchers (?)
I’m happy to talk to any such person who’ll talk to me and summarize whatever there is for others to follow, if I don’t pick it up myself
e.g. joining discussion groups?
Could be nice actually
Why product work only for meta orgs?
I mean “figure out what AGI researchers need” [which is a “product” task] and help do that [which helps the community, rather than helping the research directly]
Internal prediction markets
I’m in touch with them and basically said “yes”, but they want someone full-time, and by default I don’t think I’ll be available; still, I’m looking into it
Especially given the critical mass of people who have high quality discussions around here.
Are there important missing features that would make you transition your social network activity here?
I am talking about the situation where, for example, EA-1 will talk to EA-2 (for 30 minutes or so) with no goal other than “being able” to ask EA-2 for help in the future.
Nobody is acknowledging the cost here, to the entire community, of having lots of people going around doing this kind of networking and/or suggesting that others do it.
What I suggest instead: If you are an EA-1 and want help from an EA-2, directly ask the EA-2 for the specific help you need. If for some reason this kind of outreach didn’t work for you—I invite you to message me and maybe I can help with phrasing or so.
The example I hear the most: “In order to get a job, I’ve got to network with people”. I am so against this. In almost all situations, it’s better for everyone if you just apply (!). Usually the person doing the networking hates networking anyway. And usually the person working at the org would prefer that not all candidates have a 30 minute off topic conversation with them before applying.
I don’t know how to rant about this, this is the best I have.
Hopefully your takeaway will be “I read something messy in shortform that didn’t make sense, but it had a point that networking has downsides and that there might be better alternatives”
+1 karma but disagree.
As I see it, the purpose of networking is to tell someone, “Hey, you seem cool. It looks like we share a non-zero amount of goals / values. No promises, but maybe I’ll find out about a cool opportunity later that I’ll share with you—although I don’t have one at the moment.”
Supposedly, you’re more likely to get introduced to a career opportunity by a casual acquaintance—maybe someone you had a college class with and are now friends with on LinkedIn—than a close friend. (Although of course this is weighting all of your acquaintances against just a handful of friends, but the implication is still that more acquaintances = more opportunities.)
Making sure we’re on the same page:
I’m talking about, for example, a student who is actively networking with senior people, hoping that one of the senior people will offer the student a job or something, without applying to these jobs. Do you agree this situation is negative?
Someone asked me “you already know the EA community, no? how come do you still get value from EAG?”
Well—I live in Israel. Contacting people from the international EA community is really hard. I need to discover they exist, email them, hope they reply, and at best—set up a 30 minute call or so. This is such high friction.
At EAG, I can run my project plans by… everyone. Easily. I even had productive Uber rides.
That’s the value of EAG for me.
Hiring managers are probably not reading through all profiles, they are probably running searches. If someone wants a backend dev, they’re probably running a search for “developer”, “software”, “python”, “backend”, or whatever.
If you don’t have the buzzwords that [your target employer is going to search for], add them!
If you want to do something that you have no experience in—that’s ok! But if you don’t write it anywhere, probably nobody will contact you about it.
I find text in this format less fun to read. Am I missing something?
I mean, I do like this format for code, but not for free text meant for humans. Maybe people are copying the example from the post body?
Remember the illusion of transparency. Whatever is bothering you might not be as obvious to others as it is to you.
You can still downvote, just remember it has emotional consequences
I’m sorry for your experience, I tried to compensate for it a bit:
I think your comment is modest and conscientious about it.
So my guess about what happened is that people didn’t like this statement:
It created an unofficial list of EA ideas that is likely to contain all the high quality ideas that weren’t funded yet.
First off, I guess one reason people disagreed with this statement, is that in some views, it’s very unlikely to be true. Myself, I have an aesthetic that the best things in the best instantiation of EA are really great and hard to see. So defining the frontier of EA by any one list doesn’t make sense.
Secondly, there are principled longtime EAs who have focuses that differ from the underlying priorities/worldview that drives interest in the FTX contest. So for these people, canonizing the list as the frontier of EA projects is objectionable. This objection is heightened by what they might see as the indirect way of going about it (note that the FTX leaders are careful not to do this). At the same time, these very views makes it hard to comment. My guess is that this sentiment drove your downvote, but I don’t really know.
AMA about Israel here: https://www.lesswrong.com/posts/zJCKn4TSXcCXzc6fi/i-m-a-former-israeli-officer-ama
TL;DR Philosophy: Adding mandatory fields means [saving time in calls you have with applicants] at the expense of [reducing the number of applicants]. Is this a tradeoff you are interested in?
TL;DR recommendation: Make all the fields optional except for (1) CV/linkedin, and (2) email. Then, in the first call, ask whatever’s missing
A: Yep, you’ll get more bad applicants if you do this
A: Then stop this. My suggestion will get you lots more bad applicants and a few more good applicants, I think. If that’s not a good tradeoff for you, dump it. I think it is totally legit to prioritize your own time!!! As my friend says, “know the stats of the card you’re playing”
A: Well, my priors from startups are that long forms sure do hurt the funnel, but maybe in your specific case these priors are wrong?
I’d recommend you do hallway testing: Grab someone in the hall, ask them to fill out your form, watch them do it.
Or save stats about how many people start your form and how many complete it. You can do that easily by posting a bit.ly link that leads to your application form, it will count how many people click it. Better but harder: Google Analytics.
You don’t know it until you measure it.
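Once you have the two counts (people who clicked the form link vs. people who submitted), the funnel math itself is trivial. A minimal sketch, with made-up numbers for illustration:

```python
# Minimal sketch of funnel measurement, assuming you've logged two counts:
# clicks on the application link (e.g. via a bit.ly link or Google Analytics)
# and completed submissions. The numbers below are made up.

def funnel_drop_off(started: int, completed: int) -> float:
    """Fraction of people who opened the form but never finished it."""
    if started == 0:
        return 0.0
    return 1 - completed / started

# e.g. 200 people clicked the form link, 130 submitted it:
drop = funnel_drop_off(200, 130)  # 0.35, i.e. 35% dropped out
```

If that drop-off number surprises you, that's the signal to run the hallway test and watch where people actually give up.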
Maybe there’s a better way to check if someone “wants it bad enough”? This doesn’t sound like an intentional decision
Are you telling your candidates that you are letting them do extra work to make sure they “want it bad enough”? (If you’re asking for a cover letter, then the answer is implicitly yes)
You’re right, here are some examples:
“What makes you a good fit for this role”—hits directly on people’s impostor syndrome, which is so common in EA that I am going to bet that even you, dear anonymous reader, have impostor syndrome
“How do you define good”—is a question that I am personally stuck on in the 80k hours form. This is so stupid. They WROTE that I’m not supposed to spend too long on it. I’m usually totally a “do things quickly” person. But here, I admit it, this is my situation. Or maybe I don’t want their advice enough to be worth their time? I don’t know
How did you know?? It’s like you’re reading my mind..!
Ok, I think that if you’re hiring from a small pool of EAs (who’re full of impostor syndrome regardless of their skill level, and who spend hours writing applications) and if you’re struggling to get more to apply, and this is important enough to be worth your management focus… HAVE ZERO REQUIRED FIELDS.
I said it. Zero.
If you only get someone’s email: You’ll have a mailing list of people interested in working for you, is that so bad? (Well maybe it is, in which case don’t do that)
I expect most people will fill in lots of the fields, and you can interview those people first. Then you can choose to invite the people who only submitted their CV, or send an email to everyone who only submitted their address, which is probably better than having them drop out of your form.
And a very few people (bots) will probably submit empty forms. Yes.
And if this doesn’t work out, revert back to mandatory fields.
The person who does the first call and asks “what’s your experience with EA” doesn’t need to be the startup’s founder. This task can be delegated
I wonder if 80k have ever done this for their enormous form. If they see this post, do hallway testing once, decide to make the form easier, and get 10% more applicants from now on, could I get a cookie?
A lot of form builders record this automatically, as well. Typeform does, for example.
(In case you are interested: the “Why are you a good candidate for the role(s) of ___” question you alluded to above causes a bit less than 1% of applicants to drop out of the CEA application form.)
Ok, I stand convinced!
Update: Somebody at EAGx told me that they didn’t contact me virtually because I have too many fields in my Calendly
(While I’m ranting about other people having too many fields in their forms)
EA Forum underrated feature: subscribing to posts, either by author or by tag
I’m working on understanding and solving problems around EA orgs having trouble hiring strong engineers, and for this I’d like to do some “user research”.
I believe I’ve already made progress in this area for EA, but I don’t want to elaborate too much in case a developer reads this and it biases my user research.
Could someone help me contact such people / suggest ideas on how I could do it?
It will be a ~15-minute conversation (I’m flexible if you prefer interacting in some other way)
Open Philanthropy emailed me—I passed some screening for a position I am totally unqualified for
April Fools? X_X
Loved it. I set up the Hebrew crowd-sourced translation project; we translated everything and printed it. I estimate over 1,000 people bought a copy, not counting online readers, and the number is probably way higher, which I’m really proud of :)
One of the most influential things I’ve read. While reading it:
I noticed I’m not happy in an important partnership and broke up. Main technique: slowly changing my mind
I decided that the meat industry is indeed bad, yep
Not counting smaller things, or things from more than a month after finishing it
People who’ll geek out about priors with me!
Here’s a 2016 intro to EA I found myself doing (Hebrew link), I discussed return-on-investment of donations and was especially excited about translating 80k material. I still consider career advice to be a topic very close to my heart, and 80k to be an unusually important org.
My first interaction with the international community, which I considered to probably be scary superhuman people that know everything and I was afraid to even bother them with my email. I know, silly. If you feel like I used to, I hope you feel more comfortable to reach out.
Just before EAG I posted my offer for coaching software developers, to which many people replied and met me at EAG, and to this day I keep talking to developers, mostly over video.
EAG changed the way I see myself in the EA community.
I used to be active there, trying to connect volunteers (collaborators) to projects that need them
Maybe I’ll write about it sometime
TL;DR: To avoid predictably-sad employees, advertise your company honestly, including the bad parts.
How to make new employees sad:
Advertise your company as being perfect, including: Your culture, best practices, and team. Use vague sentences like “of course not everything is perfect”, but hide the concrete negative things that could give new hires an accurate picture in advance
Let them discover the real situation a few months after joining
For me personally: When considering joining a company or cofounder, a major thing that “turns me on” is when they tell me about the bad parts. It’s obvious that those parts exist, the question is whether we can speak about them.
I also believe in this for employees, and for romantic relationships
Handling bureaucracy not only takes time: For some of us, it’s stressful and icky and aversive.
I’d happily spend an extra hour building software (fun!) instead of spending that hour on paperwork (which would deplete my willpower for the rest of the day).
-Written in appreciation to all the PAs out there
I don’t have direct experience, but others I know have successfully used Magic
How is nobody stressed out about countries freezing the assets of an entire country, practically changing the banks’ records to something else? Are we confident this will only happen in situations that we think are good and moral?
[I’m not an economist]
Of course we can’t be, but sanctions are also nothing new. And rogue countries like Russia also understand how sanctions work and would already use them if they could.
Instead, I recommend: “My prior is [something], here’s why”.
I’m even more against “the burden of proof for [some policy] is on X”. I mean, what does “burden of proof” even mean in the context of policy? But hold that thought.
An example that I’m against:
“The burden of proof for vaccines helping should be on people who want to vaccinate, because it’s unusual to put something in your body”
I’m against it because
It implicitly assumes that vaccines should be judged as part of the group “putting something in your body”
It’s a conversation stopper. It claims one of the sides of the conversation has nothing to do.
“my prior for vaccines is that they’re bad, because my prior for putting things in my body is bad (but I’m open to changing my mind from evidence, and I’m open to maybe using a different prior if you have a better idea)”
I also like:
“my prior is that governments should not force people to do things, and so I’m against forcing people to be vaccinated” or “my prior is that governments are allowed to force people to do things that, by scientific consensus, protect them”. I like that we’re discussing explicitly “which priors should we use to decide which policy to accept and which not to”
What got me to write about this now:
I don’t like the discussion about who has the “burden of proof” to decide we should or shouldn’t have an AI pause. I would prefer discussing which prior to use for it.
Should our prior be “should we pause any new technology”, and so AI is “just” another new technology?
Should our prior be that an AI is an extinction risk like a meteor in “don’t look up”, and so should be paused unless we have further evidence showing reasons to not-pause it?
Should our priors be based on expert polls (do experts recommend a pause), and should we require evidence in order to change our mind from those polls?
My opinion: we should explicitly discuss which priors to use (which isn’t an easy question), and not just assume that one “side” has the “burden of proof”
So, I’ll give two more examples of how burden of proof gets used typically:
You claim that you just saw a unicorn ride past. I say that the burden of proof is on you to prove it, as unicorns do not exist (as far as we know).
As prime minister, you try and combat obesity by taxing people in proportion to their weight. I say that the burden of proof is on you to prove that such a policy would do more good than harm.
I think in both these cases, the statements made are quite reasonable. Let me try to translate the objections into your language:
my prior of you seeing a unicorn is extremely low, because unicorns do not exist (as far as we know)
My prior of this policy being a good idea is low, because most potential interventions are not helpful.
These are fine, but I’m not sure I prefer either of these. It seems like the other party can just say “well my priors are high, so I guess both our beliefs are equally valid”.
I think “burden of proof” translates to “you should provide a lot of proof for your position in order for me or anyone else to believe you”. It’s a statement of what people’s priors should be.
Why doesn’t this translate to AI risk?
“We should avoid building more powerful AI because it might kill us all” breaks down to:
No prior AI system has tried to kill us all
We are not sure how powerful a system we can really make by scaling known techniques (and techniques adjacent to them) in the next 10-20 years. A system 20 years from now might not actually be “AGI”; we don’t know.
This sounds like someone should have the burden of proof of showing near-future AI systems are (1) lethal and (2) powerful in a utility sense, not just a trick but actually effective at real-world tasks
And, like the absence of unicorns caught on film, someone could argue that (1) and (2) are unlikely by prior, due to AI hype that did not pan out.
The counter argument seems to be “we should pause now, I don’t have to prove anything because an AI system might be so smart it can defeat any obstacles even though I don’t know how it could do that, it will be so smart it finds a way”. Or “by the time there is proof we will be about to die”.
I’ve always viewed burden of proof as a dialectical tool. To say one has the burden of proof is to say that if they meet the following set of necessary and jointly sufficient conditions:
1. You’ve made a claim
2. You’re attempting to convince another of the claim
then they have the obligation in the discussion to provide justification for the claim. If (1) isn’t the case, then of course you don’t have any burden to provide justification. If (2) isn’t the case (say, everyone already agrees with the claim or someone just wants your opinion on something), it’s not clear to me you have some obligation to provide justification either.
On this account, it’s not like burden-of-proof talk favors a side. And I’m not sure it implicitly assumes anything or is a conversation stopper. So maybe we can keep burden-of-proof talk by using this construal while also focusing more on explicit discussion of priors. Idk, just a thought I had while reading this.
I’m doing this Twitter Style >>
Specifically, raising money from for-profit investors and being accountable to them—this is something I wouldn’t do lightly.
I think most people underrate how hard it is. I’m surrounded by founders; I feel it.
My recommendation: Ask ~3 founders how hard it actually is to open a startup, before you decide it’s probably exciting and fun.
I think maybe this could be solved by raising money from EA and not from for-profit investors, but I don’t know.
This is a big part of modern startup advice
It might be the reason that nobody solved the problem you found so far.
Specifically, solving tragedy of the commons situations, and other inadequate equilibria seem like promising situations to me.
Where will the money come from? From EA funding. (Assuming you found something cost effective and so on)
This is the first time in history that people can get paid for solving problems that aren’t monetizable, and I think this is exciting.
Many many founders have impostor syndrome.
Trying to know everything, or “only” “everything that people tell you is basic knowledge”, seems pointless; you’ll never get there, especially if you plan on asking more people what they think you “definitely need to know” and keeping a list of topics to learn. It will never end.
You’ve got to do Something Else Which Is Not That.
My top recommendation would be “learn to ask for help”
Lots of people dream about better social networks that promote higher quality discussions, even me! Some challenges, like “which logo to pick”, are things that can be solved along the way. Others, like “why would anybody join a social network if almost nobody is there?” are (I claim) a core part of the plan and need to be addressed in advance.
“If you give the same answer 5 times, write a post”
For example, if someone wants to open a startup, I first make sure they understand that most startups fail. This is not a knock-down argument to not-open-a-startup, it is just something important to notice and take into account.
This post is a similar thing-to-notice beyond the normal considerations of a startup.
A social network has snowball effects. This is nothing new, but I think it’s useful to state them explicitly:
More users lead to more users.
More content means more users reading content means more users writing content.
More money means more comfortable features means more engagement means (in modern social networks) more money.
And so on.
Common suggestions to sacrifice an element of the snowball effect without explaining what would balance it out:
“We won’t optimize for engagement”
Or “By optimizing for engagement plus something else, we will get more engagement than someone optimizing only for engagement”
“We won’t charge money”
“We won’t show ads (and so we’ll make less money)”
“We will charge money from the users directly” (which means more friction for getting users, which means fewer users)
“We will only allow high quality content”
This probably means less content
TL;DR of my “moderation is expensive” rant [skip if obvious]
If writing a machine learning algorithm that could recognize low-quality or false arguments were easy, somebody would have done it and made billions of dollars, which means a lot of people are already working on it. If you solve that: that’s your startup right there.
Similarly true for “hiring and managing 10,000 moderators”
Similarly true for gamification, but that’s a whole other rant
this (Scott Alexander on Moderation)
AKA “initial critical mass”.
Q: Isn’t advertising enough? People will see the vision and high quality content and all join!
A: This is called B2C marketing and we have priors for how well it works. TL;DR: Incredibly expensive.
The standard trick for getting critical mass, btw, is starting with a niche, like Facebook started with a specific university, or Amazon started specifically with books. The reason is that the specific interesting question is not “how many users are on the platform”, but instead “if I enter the platform, what’s the chance I’ll find something I want?”. So if the social network only has a few people but they’re all my close friends, then that’s probably good enough. [I can elaborate on marketplaces]
Which leads to “do you know your users or are you building something based on your imagination of them?”
But that is already a typical startup question.
People are already building:
The Fediverse (an open source decentralized social network)
The EA Forum / Lesswrong (seem very promising to me due to very high quality discussions and a critical mass of people that resonate with me a lot)
I would really try to avoid planning to do the same thing as one of these platforms, only with less development time, fewer users, less content, and so on. Imagining you have lots of users and a lot of high quality content is not enough; you’ve got to design some snowball effect to lead there (or at least that’s my claim).
Better answers, I think:
“There is a specific critical feature that I think the other social networks are lacking and [because of reasons] I think will make a big difference”
“Good idea, I want to join them and build my idea as a feature in one of those platforms!”, for example I happen to know that CEA want to build some features for connecting people, maybe that’s your idea?
Please argue with me and help me improve both my opinions and my writing’s usefulness
Thanks for the insights. Now I am working on a smaller idea, an “EA directory of ideas”, to address the previous flaws of the social network idea. It is a much simpler idea (than a social network) and solves many specific problems that exist right now. I am searching for feedback; I wrote you a PM.
There’s a product (an Oura Ring) that I ordered to Prague and I really want to pick up at Oxford if I can, but it’s unclear how to make the delivery
TL;DR: Get others to predict the grant maker’s answer. But not with a prediction market.
Today an EA told me their funding request got rejected and they got no feedback about it. (Frustrating!)
They asked me to help them guess why they were rejected, and I offered some different ideas (one was “this specific fund doesn’t know how to vet [some aspect of your idea]”).
Wouldn’t it be great if the original grant maker could review what I wrote, and respond with correct/incorrect, or maybe mark the part that was most correct if any?
The applicant would get some feedback
We’d find out if I can correctly predict what grant makers would say (which would suggest maybe I could be a grant maker)
I would personally submit 10 such guesses (on tech ideas), just as a way to test my ability, for the small chance I’d be any good.
Would some grant maker comment on this? I know nothing about your domain (except that it’s confusing)
A few quick things:
- I agree that many grantmakers don’t have enough time to give much feedback, and that this leads to suboptimal outcomes.
- I think it’s pretty difficult for people outside these organizations to help much with what are basically internal processes. People outside have very little context, so I would expect them to have a tough time suggesting many ideas.
- In this specific proposal, I think it would be tricky for it to help much. A lot of what I’ve seen (which isn’t all too much) around grant applications is about people sharing the negative information they have about applicants. I imagine this would be exceedingly awkward to show publicly.
If people want to help with the larger grantmaking process, some things they could do include:
Advise groups requesting money. See if you could provide useful feedback (I think many groups could use a bigger team of advisors)
Help newish people to write more content on the EA Forum and similar. This can be a proving ground for some grant organizations.
I’m the person Yonatan is referring to. His feedback and your general feedback are very helpful, so thank you for that! I have been a lurker within EA for years and will write more content on the EA forum, including requesting feedback on the idea (soon). Hopefully that will help, although I don’t know because I didn’t get feedback.
Before I move into why I think grant makers should provide short feedback I want to be clear: I’m completely comfortable with being rejected and I completely understand that grant makers are very busy.
Having said that, I think grant makers should give feedback on the applications they reject. It doesn’t have to be more than 1-2 lines and one minute to write. I applied 6 months ago and got rejected, and applied again last month and got rejected again. I had a lot of encouraging talks with EAs (along with criticism) and was more convinced this was going to get funding. I have no idea if they hated the idea and think it will never work, or if they think it doesn’t fit them, or they aren’t able to evaluate it properly, etc. The potential impact of knowing why is very large. It might help me improve the idea, maximize the impact, or pursue other paths that are more impactful and effective. I think that one minute of feedback has a high expected value. Knowing why would also help me decide whether to reapply or not, either saving the grant makers future time if I don’t, or improving the idea so it has more impact if I do. Feedback might get grant makers fewer reapplications of higher quality, increasing overall impact and reducing review time. Win-win?
If I could ask EA Infra Fund one binary question about your grant, it would be “did you reject me because this idea is not in your domain?”
Here’s the full idea:
Here’s my super quick take, if I were evaluating this for funding: Startups are pretty competitive. For me to put money into a business venture, I’d want quite a bit of faith that the team is very strong. That would be a pretty high bar. From looking at this, it’s not clear to me how promising the team is at this point. Generally, the bar for many sorts of projects is fairly high.
Ok, for the record this is very far from my guess.
The closest thing I said was “the Infra Fund doesn’t know how to evaluate startups, and specifically marketplaces”
Update: A grant maker [Edit: They said this is a bad description of them] told me why this wouldn’t work
Are you able to relay what they said about why it wouldn’t work?
I asked for permission now to share it
See Ozzie’s comment above
[Personal fit within software] is neglected in EA. Need to write about that sometime