This really matches my experience. As a high-skill worker (a software engineer at a FAANG), I strongly view top-down proposals without team buy-in as a leadership failure.
If your idea is good, you should be able to convince the team that it is good and ought to be implemented (contributing to the implementation yourself will win you big favor points). Going over the team’s head to force a solution, as in the example where the HR team is made to accept the proposal, is going to burn bridges. Maybe that’s necessary if the proposal is incredibly important, but imposing a solution on a team after pushback should generally be viewed as an organizational failure to mourn.
I also had sticker shock at the number here. Thanks for including the Glassdoor links; I was very surprised that base pay in the US overall is higher than in London (which is presumably the most expensive UK market).
I want to mention that I like the rounded version a lot, and the angular version is better than the current ‘weird 5 stars’ but not quite as neat. I think what throws me off is that the angular version looks almost exactly like a capital sigma (and sigma already means a lot of things).
I definitely sympathize with the argument against having a symbol for an idea. Both the good and the bad of symbolization stem from the fact that it leads to identification.
This is an excellent point. Making a new name for an existing concept is generally bad, but utilitarianism (and the associated ‘for the greater good’) has been absolutely savaged in public perception.
I want to say that I didn’t downvote the post (I think it’s a relatively neat idea, and it has garnered at least one good submission).
On the other hand, I find speculation on ‘why the downvotes?’ to be unproductive. It’s reasonable to encourage people to explain their opinions, but I’ve generally found that threads about downvotes are low quality, with lots of guesses and attempts to put words in other people’s mouths. I don’t think you’re doing that much here, but it isn’t the kind of thread I’d like to see often, if at all.
It also seems odd that there are so rarely threads in the other direction, asking people to explain why they liked a particular post :)
Fair enough. I would personally find it less off-putting if you framed it in terms of collecting feedback instead of focusing on the downvotes. For example, suppose I saw a thread starting with:
‘I’m curious about feedback on this post. Please take this survey [link]’, and then the survey itself has questions about the positions 1/2/3/4/5 mentioned, plus a question on whether the respondent up- or downvoted.
Then that seems like a fine thread. You’re collecting genuine feedback; maybe it seems a little over the top, but it doesn’t come across as speculation on why someone disliked something. There’s also an easy way for me to provide that feedback without making a public statement that people can then argue with. If I downvote something, there is a very good chance that I don’t want to spend time explaining my reasoning in a public thread where I’m under a social contract to reply to objections.
This almost perfectly matches my experience as a full-stack programmer at a FAANG. I especially appreciate the point that getting along well with your teammates is a huge deal. It is a surprisingly consistent source of enjoyment in my job that I can joke around and post memes with my team.
I would not have applied without this post, and I think it also seriously increased my probability of applying to a variety of AI research roles (which I’d been putting off for years).
This is a touchingly earnest comment. Also, is your ldap qiurui? If those words mean nothing to you, I’ve got the wrong guy :)
Offering FAANG-style mock interviews
I forgot to mention in the body, but I should thank Yonatan for putting a draft of this together and encouraging me to post it. Thanks! I’ve been meaning to do this for a while.
Thanks! Current volume is reasonable but I will totally forward some your way if I get overwhelmed.
As someone who has recently been through the AI safety org interview circuit: about 50% of my interviews were traditional Leetcode-style algorithmic/coding puzzles and 50% were more practical. This seems pretty typical relative to industry.
The EA orgs I interviewed with were very candid about their approach, and I was much less surprised by the style of interview I got than I have been when interviewing in industry. Anthropic, Ought, and CEA all lay out very explicitly and publicly what their interviews look like. My experience was that the interviews matched the public descriptions very well.
I worry that this post is claiming that EAs are uncommonly likely to recommend rules violations in order to achieve their goals (i.e., the ends justify the means). I don’t think that’s true, and I generally see EAs trying very hard to be scrupulous and do right by all involved parties.
Concretely, I believe that if you went to an EA conference or a similar gathering and presented people with prisoner’s-dilemma situations, or just with lost wallets, they would behave more pro-socially than the average for the country.
I think the FTX collapse is a very salient example of EA folks committing crimes (perhaps in the belief that the ends justified them?), but that doesn’t mean that EA increases the probability of crime.
I’m sorry to hear about your negative experience with GiveWell’s hiring cycle.
I think it’s easy to underestimate how hard it is to hire well, though. For comparison, you could honestly make all of the same complaints about the hiring practices of my parent company (Google):
It is slow, with many friends of mine experiencing a gap of up to a year between application and final decision.
Later interviewers have no context on your performance in earlier parts of the process. This is actually deliberate, though, since we want to get an independent signal at each step. I wouldn’t be surprised if it was deliberate at GiveWell as well.
You often aren’t told what is important at each interview stage. You’re just posed technical or behavioral questions, and you have to figure out what matters to solve the problem. Again, this is somewhat deliberate, to see whether candidates can think through what the important parts of an issue are.
You certainly aren’t given feedback for improvement after a rejection. An explicit part of interviewer training is that we shouldn’t say anything about a candidate’s performance (good or bad) to the candidate, for fear of legal repercussions. Some EA orgs have chosen to give rejection feedback despite this, but it seems to be neither standard nor necessarily wise for the organization.
Interviewing and hiring just kind of suck. I’d love it if GiveWell were unusually excellent here, but I think it’s at least important to recognize that their hiring practices are pretty normal.
I am telling you what Google told me (and continues to tell new interviewers) as part of its interview training. You may believe that you know the law better than Google, but I am too risk averse to believe that I know the law better than them.
Separately, regarding trust: I don’t feel obligated to trust senior EAs. I sometimes read the analyses of senior EAs and like them, so I start to trust them more. Trust based on seniority alone seems bad. Could you give some examples where you feel senior EAs are asking folks to trust them without evidence?
I guess I read that as a description of what they’re doing rather than as a request to trust them. CEA can choose whatever admission criteria they want, and after attending my first EAG earlier this year, I felt that whatever criteria they were using broadly made for a valuable event for me as an attendee.
I think you’re really underestimating how hard and how fraught giving useful feedback at scale is. I would be more sympathetic if you were running, or deeply involved with, an organization that was doing better on this front. If you are, congrats, and I’m appreciative!
A hypothetical example of asking for trust would be someone telling me not to join an organization without telling me why, or claiming that another person shouldn’t be trusted without giving details. I personally very rarely see folks do this. An organization doing something different and explaining its reasoning (e.g., giving feedback was not viewed as a good ROI) is not asking for trust.
Regarding why giving feedback at scale is hard: most of these positions have, at best, vague evaluation metrics, which usually bottom out in “help the organization achieve its goals.” Any specific criterion is very prone to being Goodharted. And the people who most need feedback are, in my experience, disproportionately likely to argue with you about it and make a stink to management. No need to trust me on this; just try giving feedback at scale and see if it’s hard.
My admittedly limited understanding of the UK Civil Service suggests that it’s more amenable to quantification than GiveWell research analyst or Google software engineer roles are. For example, if your job is at the UK equivalent of a DMV, we could grade you on the number of customers served and some notion of error rate; that seems pretty fair and fairly hard to game. For a programmer, we could grade you on tickets closed and bugs introduced, but that is absolute trash as the sole metric (although it does carry some useful information).
A lot of this is looking at global poverty, and I’d highly recommend ‘Poor Economics’ as an introduction to the lives of the global poor.
I’ll mention that I found this post’s title overly sensational (and likely wrong in context). I expect the majority of EA Forum viewers would score above 7 on the quiz (where about 4.3 would be the expected score from random guessing), and I honestly would be crushingly depressed if this were not the case.
For reference, I scored 11/13 on the quiz (I thought global life expectancy was ~60 instead of ~70, and I expected one of the three animals listed to have become more endangered).
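For what it’s worth, the 4.3 figure is consistent with a 13-question quiz where each question has three answer options (my assumption, inferred from the 11/13 scoring rather than something stated in the post):

$$\mathbb{E}[\text{score}] = 13 \times \tfrac{1}{3} \approx 4.33$$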