A very tiny, very informal announcement: if you want someone to review your resume and give you some feedback or advice, send me your resume and I’ll help. If we have never met before, that is okay. I’m happy to help you, even if we are total strangers.
For the past few months I’ve been active with a community of Human Resources professionals and I’ve found it quite nice to help people improve their resumes. I think there are a lot of people in EA that are looking for a job as part of a path to greater impact, but many people feel somewhat awkward or ashamed to ask for help. There is also a lot of ‘low-hanging fruit’ for making a resume look better, from simply formatting changes that make a resume easier to understand to wordsmithing the phrasings.
To be clear: this is not a paid service, I’m not trying to drum up business for some kind of a side-hustle, and I’m not going to ask you to subscribe to a newsletter. I am just a person who is offering some free low-key help.
Note: I’m sharing this an undisclosed period of time after the conference has occurred, because I don’t want to inadvertently reveal who this individual is, and I don’t want to embarrass this person.
I’m preparing to attend a conference, and I’ve been looking at the Swapcard profile of someone who lists many areas of expertise that I think I’d be interested in speaking with them about: consulting, people management, operations, policymaking, project management/program management, global health & development… wow, this person knows about a lot of different areas. Wow, this person even lists Global coordination & peace-building as an area of expertise! And Ai strategy & policy! Then I look at this person’s LinkedIn. They graduated from their bachelor’s degree one month ago. So many things arise in my mind.
One is about how this typifies a particular subtype of person who talks big about what they can do (which I think has some overlap with “grifter” or “slick salesman,” and has a lot of overlap with people who promote themselves on social media).
Another is that I notice that this person attended Yale, and it makes me want to think about elitism and privilidge and humility and “fake it till you make it” and the Matthew effect.
Another is that I shouldn’t judge people too harshly, because I also certainly try to put my best foot forward when job hunting. I am certainly guilty of being overconfident at times.
I’ll also acknowledge that while it isn’t probable that this person’s listed areas of expertise are accurate and realistic, it is possible. A fresh college grad could have read a dozen books about each of these distinct areas, and attended some sort of training program, and had multiple informational interviews. I could imagine an industrious student with enough free time gaining some competency in a variety of areas. Is that enough to count as “expertise?” I’m not sure, and it certainly seems context-dependent: at an EAGx conference I feel okay claiming competence in certain skills, but at a conference with lots of people highly trained in those skills (such as at a PMI Global Summit) I would not describe myself as so competent, simply because the reference group is different. Compared to laypeople I know a bunch about project management; compared to a professional project manager I know hardly anything.[1]
I suppose I shouldn’t be too surprised. Although I’m not a big fan of the “there are lots of grifters in EA” narrative, it isn’t unheard of for people to vastly[2] exaggerate their skills/competencies/experiences or to imply that they have more than they really do.
At a separate EA conference a person listed many areas under “Area(s) of expertise,” including one particular skill that I reached out to them to chat about, after which they replied to tell me that they actually didn’t do that kind of work and aren’t knowledgeable about it.[3]
One EA Forum poster shared strong opinions about how cumbersome regular working hours, offices in normal cities, and work/life boundaries can be. When I looked, this person also has only about five years of post-graduate work experience, all of which has either been freelance, self-employed, or running his own organization.[4] This isn’t to say that you aren’t allowed to have an opinion about “standard” offices if you haven’t spent X years in offices, but I’m skeptical of any broad and sweeping claim about a particular working style (such as “I don’t know anyone who is highly effective and gets everything done between 9 and 5 from Mon-Fri”) while having sampled that working style very little. Some offices are horribly unproductive, but that doesn’t mean that all of them are.
At least one person active on the EA Forum has an entry under Licenses & certifications on his LinkedIn profile listing Pareto Productivity Pro as license/certification from Charity Entrepreneurship, with a link to the Amazon page for the book How to Launch a High-Impact Nonprofit. This seems pretty deceptive to me, to list an official training or association with an organization when all you did is read their book.[5]EDIT: see this comment from Tyler Johnston for additional context.
Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior.
I saw another person list a job of Social Media Manager for Effective Altruism on their LinkedIn. (EDIT: it turns out that this is legitimate. I was completely wrong to look at this and conclude that a person was exaggerating their experiences.)
There are multiple people who have job titles of “senior [SOMETHING]”, or “president,” or “director of [SOMETHING]” even though they have no previous work experience in that area. Maybe that really is their official job title, but it strikes me as a bit fishy to have a title of Vice President or CEO when you are only two or three years into your career.
Related to the idea of how expertise is dependent on who you compare yourself to, there is a kind of a narrative among sinologists and China-watchers that a “westerner” who spends a week in China knows enough to write a book, and if they spend a month in China they can write an article, and if they spend a year in China they can’t even write a paragraph because they realize how little they know.
Although it could just be a less rude way to say “I don’t want to talk to you,” much like the little white lies people tell to turn down an invitation or to withdraw from a conversation.
This is just from a cursory view of his LinkedIn, so maybe he has much more relevant experience that I am unaware of. This would largely or completely invalidate this critique.
But I could be totally wrong. Maybe this person received some kind of specialized/individualized training from Charity Entrepreneurship and has their permission/blessing to put this on his LinkedIn profile, so I am simply making a bad assumption based in incomplete information.
I list “social media manager” for Effective Altruism on LinkedIn—but I highlight that it’s a voluntary role, not a job. I have done this for over 10 years, maintaining the “effective altruism” page amongst others, as well as other volunteering for EA.
Ya know what? That strikes me as 100% legitimate. I had approached it from the perspective of “there isn’t an organization called Effective Altruism, so anyone claiming to work for it is somehow stretching/obfuscating the truth,” but I think I was wrong. While I have seen people use an organization’s name on LinkedIn without being associated with the organization, your example of maintaining a resource for the EA community seems permissible, especially since you note that it is volunteering.
+1 to the EAG expertise stuff, though I think that it’s generally just an honest mistake/conflicting expectations, as opposed to people exaggerating or being misleading. There aren’t concrete criteria for what to list as expertise so I often feel confused about what to put down.
@Eli_Nathan maybe you could add some concrete criteria on swapcard?
e.g. expertise = I could enter roles in this specialty now and could answer questions of curious newcomers (or currently work in this area)
interest = I am either actively learning about this area, or have invested at least 20 hours learning/working in this area .
Ivan from the EAG team here — I’m responsible for a bunch of the systems we use at our events (including Swapcard).
Thanks for flagging this! It’s useful to hear that this could do with more clarity. Unfortunately, there isn’t a way we can add help text or sub text to the Swapcard fields due to Swapcard limitations. However, we could rename the labels/field names to make this clearer..?
For example
Areas of Expertise (3+ months work experience)
Areas of Interest (actively seeking to learn more)
Does that sound like something that would be helpful for you to know what to put down? I’ll take this to the EAG team and see if we can come up with something better. Let me know if you have other suggestions!
For what it is worth, I’d want the bar for expertise to be a lot higher than a few months of work experience. I can’t really think of any common career (setting aside highly specialized fields with lots of training, such as astronaut) in which a few months of work experience make someone an expert. Maybe Areas of Expertise (multiple years work experience)? It is tricky, because there are so many edge cases, and maybe someone had read all the research on [AREA] and is incredibly knowledge without having ever worked in that area.
That would help me! Right now I mostly ignore the expertise/interest fields, but I could imagine using this feature to book 1:1s if people used a convention like the one you suggested.
The mention of “Pareto Productivity Pro” rang a bell, so I double-checked my copy of How to Launch a High-Impact Nonprofit — and sure enough, towards the end of the chapter on productivity, the book actually encourages the reader to add that title to their Linkedin verbatim. Not explicitly as a certification, nor with CE as the certifier, but just in general. I still agree that it could be misleading, but I imagine it was done in fairly good faith given the book suggests it.
However, I do think this sort of resume padding is basically the norm rather than the exception. Somewhat related anecode from outside EA: Harvard College has given out a named award for many decades to the “top 5% of students of the year by GPA.” Lots of people — including myself — put this award in their resume hoping it will help them stand out among other graduates.
The catch is that grade inflation has gotten so bad that something like 30-40% of students will get a 4.0 in any given year, and they all get the award on account of having tied for it (despite it now not signifying anything like “top 5%.”) But the university still describes it as such, and therefore students still describe it that way on resumes and social media (you can actually search “john harvard scholar” in quotes on LinkedIn and see the flexing yourself). Which just illustrates how even large, reputable institutions support this practice through fluffy, misleading awards and certifications.
This post actually spurred me to go and remove the award from my LinkedIn, but I still think it’s very easy and normal to accidentally do things that make yourself look better in a resume — especially when there is a “technically true” justificaiton for it (like “the school told me I’m in the top 5%” or “the book told me I could add this to my resume!”), whether or not this is really all that informative for future employers. Also, in the back of my mind, I wonder whether choosing to not do this sort of resume padding creates bad selection effects that lead to people with more integrity being hired less, meaning even high-integrity people should be partaking in resume padding so long as everyone else is (Moloch everywhere!). Maybe the best answer is just making sure hiring comittees have good bullshit detectors and lean more on work trials/demonstrated aptitude over fancy certifications/job titles.
I double-checked my copy of How to Launch a High-Impact Nonprofit — and sure enough, towards the end of the chapter on productivity, the book actually encourages the reader to add that title to their Linkedin verbatim. Not explicitly as a certification, nor with CE as the certifier, but just in general..
Thanks for mentioning this. I wasn’t aware of this context, which changes my initial guesswork quite a bit. I just looked it up at in Chapter 10 (Take Planning), section 10.6 has this phrase: “As you implement most or some of the practices introduced here, you have every right to add the title Pareto Productivity Pro to your business card and LinkedIn profile.” So I guess that is endorsed by Charity Entrepreneurship. While I disagree with their choice to encourage people to add what I view as a meaningless title to LinkedIn, I think it I can’t put so much blame on the individual who did this.
Yeah, agreed that it’s an odd suggestion. The idea of putting it on a business card feels so counterintuitive to me that I wonder how literally it’s meant to be taken, or if the sentence is really just a rhetorical device the authors are using to encourage the reader.
choosing to not do this sort of resume padding creates bad selection
That is definitely something for us to be aware of. The simplistic narrative of “lots of people are exaggerating and inflating their experiences/skills, so if I don’t do it I will be at a disadvantage” is something that I think of when I am trying to figure out wording on a resume.
Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior.
I am curious why you think this i) gains them clout or ii) was written with that intention?
It seems very different to the other examples, which seem about claiming unfair competencies or levels of impact etc.
I personally think that taking time off work to hike is more likely to cost you status than give you status in EA circles! I therefore read that post as an attempt to promote new community norms (around work and life balance and self-discovery etc) than to gain status.
One disclaimer here is that I think I know this person, so I am probably biased. I am genuinely curious though and not feeling defensive etc.
Sure, I’ll try to type out some thoughts on this. I’ve spent about 20-30 minutes pondering this, and this is what I’ve come up with.
I’ll start by saying I don’t view this hiking post as a huge travesty; I have a general/vague feeling of a little yuckiness (and I’ll acknowledge that such gut instincts/reactions are not always a good guide to clear thinking), and I’ll also readily acknowledge that just because I interpret a particular meaning doesn’t mean that other people interpreted the same meaning (nor that the author intended that meaning).
(I’ll also note that if the author of that hiking post reads this: I have absolutely no ill-will toward you. I am not angry, I enjoyed reading about your hike, and it looked really fun. I know that tone is hard to portray in writing, and that the internet is often a fraught place with petty and angry people around every corner. If you are reading this it might come across as if I am angrily smashing my keyword simply because I disagree with something. I assure you that I am not angry. I am sipping my tea with a soft smile while I type about your post. I view this less like “let’s attack this person for some perceived slight” and more like “let’s explore the semantics and implied causation of an experience.”)
One factor is that it doesn’t seem generalizable. If 10,000 people took time off work to do a hike, how many of them would have the same positive results? From the perspective of simply sharing a story of “this is what happened to me” I think it is fine. But the messaging of “this specific action I took helped me get a new job” seems like the career equivalent of “I picked this stock and it went up during a decade-long bear market, so I will share my story about how I got wealthy.”
A second factor is the cause-and-effect. I don’t know for sure, but I suspect that the author’s network played a much larger role in getting a job than the skills picked up while hiking. The framing of the post was “It was a great career decision. I gained confidence and perspective, but also lots of transferable and work-applicable skills: persistence, attention to detail, organization, decision-making under pressure...” And I’m looking at this and thinking that those are all context-dependent skills. Just because you have an eye for detail or skill with organization when it comes to your backpack, it doesn’t mean that you will when you are looking at a spreadsheet. Just because you can make a decision when you slip down the side of a mountain doesn’t mean you can make a decision in a board room.
And I think a third factors is general vibes: it felt very self-promotional to me.[1] It struck me as similar to LinkedIn content in which something completely unrelated to work and professional life occurs, and then is squeezed into a box in order to be presented as a work-appropriate narrative with a career-relevant takeaway.
So I’ll frame this in a way that is more discussion-based: how context dependent are these kinds of general/broad skills? Taking attention to detail as an example, I can be very attentive to a system that I am familiar with and pretty change blind in a foreign setting (I notice slight changes in font in a spreadsheet, but I won’t notice if a friend got a new haircut).[2] Persistence (or determination, or grit) is also highly dependent on the person’s motivation for the particular task they are working on. How accurate is it to claim to have gained these skills on a hike, to the extent that they benefit you in an office job?
According to some IO Psychologist contacts (this is two quotes smushed together and lightly edited from when I was chatting about this topic):
Personality traits generally only have significant change in such a short period of time (and six months counts as a short period of time when looking at a human life) when there are severe or sudden life events; my impression is that such change is very rare. I struggle to believe a six month hike is going to change a personality long-term. I’m definitely skeptical… To me, the author saying it was a great career decision either means A) they were burnt out and this was a chance to take some time to recover, or B) they’re overselling it, maybe for online clout or maybe to justify taking six months off work. For any life lessons/soft skills learned in six months on a trail my perspective is it would be difficult to link them directly to a job (unless it is an outdoorsy, trail guide kind of job).
I think that I tend to be more averse to marketing and self-promotional behavior than the average person, so it is possible that 100 people look at that post and 80 or 90 of them feel it isn’t self-promotional.
I’ve actually had colleagues/managers from two different professional contexts describe me as extremely attentive to detail, noticing things that nobody else did and insufficiently attentive to detail, to the extent that I am not competent to do the job (these are not direct quotes, but rather my rough characterization). The context matters a lot for how good we are at things. Determination is an easy example to illustrate the importance of context: think of doing a dull, mundane task as opposed to one you find inherently interesting and engaging.
I don’t know if this is a fair assessment, but it’s hard for me to expect anything else as long as many EAs are getting sourced from elite universities, since that’s basically the planetary focus for the consumption and production of inflated credentials.
The main Swapcard example you mention seems to me like a misunderstanding of EAGs and 1-1s.
To take consulting as an example, say I am a 1st year undergrad looking to get into management consulting. I don’t need to speak to a consulting expert (probably they should change the name to be about experience instead of expertise), but I’d be very keen to get advice from someone who recently went through the whole consulting hiring process and got multiple offers, say someone a month out of undergrad.
Or another hypothetical: say I’m really interested in working in an operations/HR role within global health. I reach out to the handful of experts in the field who will be at the conference, but I want to fit in as many 1-1s as I can, and anyway the experts may be too busy, so I also reach out to someone who did an internship on the operations team of a global health charity during college. They’re not an expert in the field, but they could still brain-dump a bunch of stuff they learnt from the internship in 25 min.
And these could be about the same recently graduated person.
With the trekking example, I also know the person, and it seems extremely unlikely to me they were trying to gain power or influence (ie clout), by writing the post. It also seems to be the case that it did result in some minor outdoorsy career opportunities.
A lot of the points about transferability seem like they would apply to many job to job changes—e.g. ‘why would you think your experience running a startup would be transferable to working for a large corporation?’ But people change career direction all the time, and indeed EA has a large focus on helping people to do so.
Yes, it refers to a position. So if this is actually someone’s job title, then there kind of isn’t anything wrong with it. And I sympathize with people who found or start their own organization. If I am 22 and I’ve never had a job before but I create a startup, I am the CEO.
So by the denotation there is nothing wrong with it. The connotation makes it a bit tricky, because (generally speaking) the title of CEO (or director, or senior manager, or similar titles) refers to people with a lot of professional experience. I perceive a certain level of … self-aggrandizement? inflating one’s reputation? status-seeking? I’m not quite sure how to articulate the somewhat icky feeling I have about people giving themselves impressive-sounding titles.
I’m currently reading a lot of content to prepare for HR certification exams (from HRCI and SHRM), and in a section about staffing I came across this:
some disadvantages are associated with relying solely on promotion from within to fill positions of increasing responsibility: ■ There is the danger that employees with little experience outside the organization will have a myopic view of the industry
Just the other day I had a conversation about the tendency of EA organizations to over-weight how “EA” a job candidate is,[1] so it particularly stuck me to come across this today. We had joked about how a recent grad with no work experience would try figuring out how to do accounting from first principles (the unspoken alternative was to hire an accountant). So perhaps I would interpret the above quotation in the context of EA as “employees with little experience outside of EA are more likely to have a myopic view of the non-EA world.” In a very simplistic sense, if we imagine EA as one large organization with many independent divisions/departments, a lot of the hiring (although certainly not all) is internal hiring.[2]
And I’m wondering how much expertise, skill, or experience is not utilized within EA as a result of favoring “internal” hires. I think that I have learned a lot about EA over the past three years or so, but I suspect that I would perform better in most EA jobs if I had instead spent 10% of that time learning about EA and 90% of it learning about [project management, accounting, bookkeeping, EEO laws, immigration law, workflow automation tools, product management, etc.]. Nonetheless, I also suspect that if I had spent less time delving into EA, I would be a less appealing job candidate for EA orgs, who heavily weigh EA-relevant experience.[3]
It does seem almost comical how we (people involved in EA) try to invent many things for ourselves rather than simply using the practices and tools that exist. We don’t need to constantly re-invent the wheel. It is easy to joke about hiring for a position that doesn’t require someone to be highly EA, and then using “be very EA” as an selection criteria (which eliminates qualified candidates). I’ll return to my mainstay: make sure the criteria you are using for selection are actually related to ability to perform the job. If you are hiring a head of communications to manage public relations for EA, then I think it makes sense that this role needs to understand a lot of EA. If you are hiring an office manager or a data analyst, I think that it makes less sense (although I can certainly imagine exceptions).
I’m imagining a 0-10 scale for “how EA someone is,” and I think right now most roles require candidates to be a 7 or 8 or 9 on the scale. I think there are some roles where someone being a 3 or a 4 on the scale would be fine, and would actually allow a more competitive candidate pool to be considered. This is all quite fuzzy, and I think there is a decent chance that I could be wrong.[4]
“How EA someone is” is a very sloppy term for a variety of interconnected things: mission-alignment, demonstrated interaction with the EA community, reads lots of EA content, ability to use frequently used terms like “counterfactual” and “marginal,” up-to-date with trends and happenings within EA, social connections with EAs…
Actually, I wonder if there are stats on this. It would be curious to get some actual estimates regarding what percent of hires made are from people who are within EA. There would certainly be some subjective judgement calls, but I would view being “within EA” as having worked/interned/volunteered for an EA org, or having run or having been heavily involved in an EA club/group.
I have a vague feeling that heavily weighing EA-relevant experience over non-EA experience is fairly common. I did have one person in a influential position at a central EA org mention that a candidate with a graduate degree (or maybe the words spoken were “MBA”? I don’t recall exactly) gets a bit less consideration. Nonetheless, I don’t know how much this actually happens, but I hope not often.
Especially since “how EA someone is” conflates several things: belief in a mission, communication styles, working preferences, and several other things that are actually independent/distinct. People have told me that non EAs have had trouble understanding the context of meetings and trouble communicating with team members. Could we take a generic project manager with 10 years of work experience, have them do two virtual programs, and then toss them into an EA org?
I think that the worries about hiring non-EAs are slightly more subtly than this.
Sure, they may be perfectly good at fulfilling the job description, but how does hiring someone with different values affect your organisational culture? It seems like in some cases it may be net-beneficial having someone around with a different perspective, but it can also have subtle costs in terms of weakening the team spirit.
Then you get into the issue where if you have some roles you are fine hiring EAs for and some you want them to be value-aligned for, then you may have an employee who you would not want to receive certain promotions or be elevated into certain positions, which isn’t the best position to be in.
Not to mention, often a lot of time ends up being invested in skilling up an employee and if they are value-aligned then you don’t necessarily lose all of this value when they leave.
Chris, would you be willing to talk more about this issue? I’d love to hear about some of the specific situations you’ve encountered, as well as to explore broad themes or general trends. Would it be okay if I messaged you to arrange a time to talk?
(I want to share, but this doesn’t seem relevant enough to EA to justify making a standard forum post. So I’ll do it as a quick take instead.)
People who know me know that I read a lot.[1] Although I don’t tend to have a huge range, I do think there is a decent variety in the interests I pursue: business/productivity, global development, pop science, sociology/culture, history. Of all the books I read in 2023, here is my best guess as to the ones that would be of most interest to an effective altruist.
For people who haven’t explored much yet
Scrum: The Art of Doing Twice the Work in Half the Time. If you haven’t worked in ‘startupy’ or lean organizations, this books may introduce you to some new ideas. I first worked for a startup in my late 20s, and I wish that I had read this book at that point.
Developing Cultural Adaptability: How to Work Across Differences. This 32 page PDF is a good introduction to ideas of working with people from other cultures. This will be particularly useful if you are going to work in a different country (although there are cultural variations within a single country). This is fairly light introduction, so don’t stop here if you want to learn more about cross-cultural communication and cross-cultural psychology.
How to Be Perfect: The Correct Answer to Every Moral Question. Less focused on productivity /professional skills, this is a fun and lighthearted exploration of different ethical theories. This book made me smile more than any other I read this year, and also introduced me to some new moral philosophers. This is probably the most easily ‘digestible’ book ever written on moral philosophy. If you enjoyed the TV Show The Good Place, you should listen to the audiobook version of this book, as it features the cast from The Good Place.
Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. If you aren’t familiar with the problems of the scientific process as it actually exists, or with the ‘industry’ of science, then this book will probably introduce you to some of these ideas, as well as make you a bit more skeptical of scientific publication in general. I think it would be great if we all slightly increased our incredulousness toward any any all new publications. It strikes me as a bit of a kindred spirit to the essay Beware the Man of One Study.
Conscious: A Brief Guide to the Fundamental Mystery of the Mind and Free Will. I started to think more about consciousness in animals this year, and these two short books were the start of my exploration. You probably won’t learn anything new if you have already done some thinking or reading about this topic, but I’m guessing that the average twenty-something interested in EA would gain a bit from reading these.
For people who have already explored a lot and know the basics
The Idealist: Jeffrey Sachs and the Quest to End Poverty. A nicely written portrait that doesn’t pull any punches, highlighting both the good and the bad. I loved how neutral the author’s tone felt; there wasn’t idolizing or vilifying. I view reading this as a good way to a) inoculate oneself a bit against hero worship, and b) understand some of the complications that come with global development work, even when you are relatively well-resourced.
Crucial Conversations Tools for Talking When Stakes Are High. It is rare for me to re-read books, but I think I will revisit this one in a few years. This are useful skills that should be practiced, both in a professional environment and in personal relationships.
How to Measure Anything: Finding the Value of Intangibles in Business. If you are already familiar with the basics of expected value and trying to quantity things via fermi estimates, then this book will help take you to the next level. I enjoyed the balance between examples and explanation, and I could see myself taking this book out and referring to it in the future to figure out the value of an estimate.
What a Fish Knows: The Inner Lives of Our Underwater Cousins. I don’t think that I gained anything specific and concrete from this book, but in a broad sense I got a much greater appreciation for non-human life, and a strong reminder of how little I know of the world. I’m trying to read more about animals as a way of building greater understanding, and there were dozens of fascinating tidbits in this book that I view as pixels in a picture or pieces in a jigsaw puzzle (in the sense that after I gather enough of them I will start to be able to understand something larger).
Maybe not so closely related to effective altruist ideas, but still worth reading for (some people)
Simply Managing: What Managers Do–and Can Do Better, and Humble Inquiry: The Gentle Art of Asking Instead of Telling. If you are interested in being a manager in the future (or if you already are a manager of people) then you should learn how to manage. While there are a lot of aspects to it, this is a good start. Henry Mintzberg is very famous and well respected when it comes to management education, and Edgar H. Schein is one of the foremost experts on organizational culture. These two books are short, simple samplings of their ideas. If you are already a skilled and experienced people manager, reading these might be a bit of a refresher, but you likely won’t encounter new concepts.
I Hate the Ivy League: Riffs and Rants on Elite Education[2]and Where You Go Is Not Who You’ll Be: An Antidote to the College Admissions Mania. If you aren’t interested in how higher education functions in American society, then skip these books. I’m interested in how higher education functions in American society, and these two books were enjoyable and educational explorations. But if you are interested in ideas of justice, gatekeeping, access, inclusion, equality, etc., then you might enjoy these two. These really got me thinking about what admissions criteria ought to be for a university education. Simplistic answers (admit everyone and have resources stretched so thin that quality is bad, or admit only those that are already very well-resourced and then give them lots more resources) don’t seem great paths to building a better society. This is an area that I want to learn more about, and I intend to read more books about this. But there is one quote that has stuck with me: “The prestige associated with going to, say, Yale, was a function of at least in part how many people wanted to get in and couldn’t. It was the logic of the nightclub, it had never occurred to me that a university was a nightclub. I thought it was more like a hospital, an institution judged by how many patients it took in and how many of those later emerged fully healed.”
Hand to Mouth: Living in Bootstrap America. If you have never been poor, read this to try and get a bit of an understanding of what it is like to live in a first world country without having much money or career stability. It isn’t brilliant literature, but it is a decently good whirlwind tour of poverty. I grew up… kind of poor, I guess? For most of my childhood my parents both had jobs, and I think that we were consistently beneath the US median household income. I never noticed it much as a kid, but looking back I can see various indications that my family didn’t have much money. If you know what it is like to be poor in America, then you probably won’t learn anything from this (although you may feel emotionally “seen” and validated).[3] A quote from a different book that seems relevant here: “That’s the difference between being privileged and being poor in America. It’s how many chances you get. If you’re wealthy, all kinds of things can happen and you’ll be okay. You can drop out of school for a year, you can get addicted to pain killers, you can have a bad car accident. No one ever says, of the upper-middle class high school kid whose parents get a terrible divorce, “I wonder if she’ll ever go to college.” She’s going to college; disruption is not fatal to life chances.”
Mixed feelings
Animal Liberation Now: The Definitive Classic Renewed. Yes, I know that it is an influential classic. But I didn’t really gain anything from it (other than the credibility to be able to say “yes, I’ve read it.”) I think that from a few conversations, a few documentaries, and a few online articles over the years I had already picked up the core messages that this book attempted to portray. I already knew that [insert animal here] are treated horribly. It is kind of like watching Dominion after you already have watched three or four other documentaries about farm animal welfare: it is just telling you what you already know.
Utopia for Realists: How We Can Build the Ideal World. It was… fine. I wanted to like it, and I like the idea of it. I’m not sure why this book didn’t resonate more with me; it really should have. I could go through it with a fine-toothed comb and justify, caveat, and explain all my reactions. But it doesn’t seem worth it, so I’ll just be satisfied with a vague shrug and move on to reading other books.
Moral Mazes: The World of Corporate Managers. I thought that I would gain something from this, but it was all kind of… straightforward. Of course people are going to act on incentives, and when those incentives incentivize behavior that is damaging, I am not surprised that damage results. This all struck me as kind of blandly obvious. Maybe if I hadn’t previously read a bunch about behavioral economics and social psychology then maybe I would have learned new things and found this book worthwhile, but I’ve read almost all of the books on these subjects already at this point.[4]
A few books on diversity
DEI Deconstructed: Your No-Nonsense Guide to Doing the Work and Doing It Right. If you are interested in the DEI industry, what is wrong with it, and how it can be better, read this. It focuses a lot on the DEI industry, but it has a really good chapter on practical applications and on what an organization can do.
Read This to Get Smarter: About Race, Class, Gender, Disability, and More. If you are brand new to ideas about diversity and inclusion, or if you are intimidated by the terminology, or if you just want a light introduction, then read this book. It you already are familiar with the terminology and the basic ideas, then you won’t learn anything.
A majority via audiobook, so we could quibble on whether or not it really counts as reading, but it is accurate to say that I have ‘consumed’ a lot of books.
One thing that I feel odd about the EA community is the easy confidence I observe. It is very different from the feeling of financial precariousness (I guess the term would be precariat?). Not knowing if your job will be terminated, if you will be able to afford rent, if you will be okay skipping the doctor’s appointment, etc. Existing without stability or predictability or security causes a lot of stress. I’m stunned to meet people who are in the top decile of American income earners (somebody talked about earning nearly a million dollars in a year so casually, as if it was a normal thing), or who have donated more money in the past five years than I have earned in the past ten, or who owned a house in a high cost of living city in their mid-20s. I’m amazed at people who graduate from school and earn more than the average American income by the age of 24, and who then create/found their own organization so that they can pursue their interests and get paid for it. But there is a lot of selection bias at play here: maybe people simply don’t talk about their upbringing and I make shallow and incorrect assumptions. This should be interpreted as musings that are very low-confidence.
No, not literally all of them. I mean that if you compile a list of the most recommended or most read books in behavioral economics and social psychology, I think that I have already read between 40% and 80% of them. Just popular press books, not academic books. So I’d consider myself a fairly well-read layperson.
One of the best experiences I’ve had at a conference was when I went out to dinner with three people that I had never met before. I simply walked up to a small group of people at the conference and asked “mind if I join you?” Seeing the popularity of matching systems like Donut in Slack workspaces, I wonder if something analogous could be useful for conferences. I’m imagining a system in which you sign up for a timeslot (breakfast, lunch, or dinner), and are put into a group with between two and four other people. You are assigned a location/restaurant that is within walking distance of the conference venue, so the administrative work of figuring out where to go is more-or-less handled for you. I’m no sociologist, but I think that having a small group is better for conversation than a large group, and generally also better than a two-person pairing. An MVP version of this could perhaps just be a Google Sheet with some RANDBETWEEN formulas.
The topics of conversation were pretty much what you would expect for people attending an EA conference: we sought advice about interpersonal relationships, spoke about careers, discussed moral philosophy, meandered through miscellaneous interests, shared general life advice, and so on. None of us were taking any notes. None of us sent any follow up emails. We weren’t seeking advice on projects or trying to get the most value possible. We were simply eating dinner and having casual conversation.
When I claim this was one of the best experiences, I don’t mean “best” in the sense of “most impactful,” but rather as as 1) fairly enjoyable/comfortable, 2) distinct from the talks and the one-on-ones (which often tend to blur together in my memory), and 3) I felt like I was actually interacting with people rather than engaging in “the EA game.”[1] I think that third aspect felt like the most important for me.
Of course, if could simply be that this particular group of individuals just happened to mesh well, and that this specific situation it isn’t something which can be easily replicated.
“The EA game” is very poorly conceptualized on my part. I apologize for the sloppiness of it, but I’ll emphasize that this is a loose concept that I’ve just started thinking about, rather than something rigorous. I think of it as something along the lines of “trying to extract value or trying to produce value.” Exploring job opportunities, sensing if someone is open to a collaboration of some type, getting advice on career plans, picking someone’s brain on their area of expertise, getting intel on new funders and grants, and so on. It is a certain type of professional and para-professional networking. You have your game face on, because there is some outcome that is dependent on your actions and on how people perceive you. This is in contrast to something like interacting without an agenda, or being authentic and present.
@Christoph Hartmann has developed a tool that might be useful! Might try to see if we can use it at EAGxUtrecht. Below is a message he sent me explaining it:
idea for EAGx: Last time I went I loved the program but I felt facilitating interaction especially outside core conference hours could’ve been easier. I built an app that would make it super easy for people to find each other to go to a restaurant, go bouldering, discuss longtermism, go to a bar, etc. Would love to bring this to EAGx: https://letss.app/
Thanks for tagging me! Fully agree with you Joseph that an easier way to socialise with strangers at conferences would be great and that’s exactly what I’m trying to do with this app. Let me know if you know anybody organising conferences or communities for whom this could be helpful.
I wish that people wouldn’t use “rat” as shorthand for “rationalist.”
For people who aren’t already aware of the lingo/jargon it makes things a bit harder to read and understand. Unlike terms like “moral patienthood” or “mesa-optimizers” or “expected value,” a person can’t just search Google to easily find out what is meant by a “rat org” or a “rat house.”[1] This is a rough idea, but I’ll put it out there: the minimum a community needs to do in order to be welcoming to newcomers is to allow newcomers to figure out what you are saying.
Of course, I don’t expect that reality will change to meet my desires, and even writing my thoughts here makes me feel a little silly, like a linguistic prescriptivist tell people to avoid dangling participles.
Try searching Google for what is rat in effective altruism and see how far down you have to go before you find something explaining that rat means rationalist. If you didn’t know it already and a writer didn’t make it clear from context that “rat” means “rationalist”, it would be really hard to figure out what “rat” means.
(I’m writing with a joking, playful, tongue-in-cheek intention) If we are setting the bar at “to join our community you need to be at least as well read at GPT4,” then I think we are setting the bar too high.
More seriously: I agree that it isn’t impossible for someone to figure out what it means, it is just a bit harder than I would like. Like when someone told me to do a “bow tech” and I had no idea what she was talking about, but it turns out she was just using a different name for a Fermi estimate (a BOTEC).
TLDR: Try to be more friendly and supportive, and to display/demonstrate that in a way the other person can see.
Slightly longer musings: if you attend an EA conference (or some other event that involves you listening to a speaker), I suggest that you:
look at the speaker while they are speaking
have some sort of smile, nodding, or otherwise encouraging/supportive body language or facial expression.
This is likely less relevant for people that are very experienced public speakers, but for people that are less comfortable and at ease speaking in front of a crowd[1] it can be pretty disheartening to look out at an audience and see the majority of people looking at their phone and their laptops.
I was at EAGxNYC recently, and I found it a little disheartening at how many people in the audience were paying attention to their phones and laptops instead of paying attention to the speaker.[2] I am guilty of doing this in at least one talk that I didn’t find interesting, and I am moderately ashamed of my behavior. I know that I wouldn’t want someone to do that to me if I was speaking in front of a crowd. One speaker mentioned to me later that they appreciated my non-verbal support/agreement.[3]
I do understand that taking notes can be really helpful, but from the point of view of the speaker they can’t tell if an audience member is taking rigorous notes or is browsing cat videos on YouTube. We can talk about the optimum scenario for maximum global utility, but I want us (as a community) to also remember that there is a person standing in front of us.
Although I think I may tend to be more expressive than many of the EAs I’ve interacted with, especially when it comes to friendliness, support, enthusiasm, etc.
I want to provide an alternative to Ben West’s post about the benefits of being rejected. This isn’t related to CEA’s online team specifically, but is just my general thoughts from my own experience doing hiring over the years.
While I agree that “the people grading applications will probably not remember people whose applications they reject,” two scenarios[1] come to mind for job applicants that I remember[2]:
The application is much worse than I expected. This would happen if somebody had a nice resume, a well-put together cover letter, and then showed up to an interview looking slovenly. Or if they said they were good at something, and then were unable to demonstrate it when prompted.[3]
Something about the application is noticeably abnormal (usually bad). This could be the MBA with 20 years of work experience who applied for an entry level part-time role in a different city & country than where he lived[4]. This could be the French guy I interviewed years ago who claimed to speak unaccented American English, but clearly didn’t.[5] It could be the intern who came in for an interview and requested a daily stipend that was higher than the salary of anyone on my team. If you are rude, I’ll probably remember it. I remember the cover letter that actually had the wrong company name at the top (I assume he had recently applied to that company and just attached the wrong file). I also remember the guy I almost hired who had started a bibimbap delivery service for students at his college, so impressive/good things can also get you remembered.
A big caveat here is that memories are fuzzy. If John Doe applies to a job and I reject him and three months later we meet somehow and he says “Hi, I’m John Doe” I probably wouldn’t remember that John Doe applied, nor that I rejected him (unless his name was abnormally memorable, or there was something otherwise notable to spark my memory). But if he says “Hi, I’m John Doe. I do THING, and I used to ACCOMPLISHMENT,” then maybe I’d remember looking at his resume or that he mentioned ACCOMPLISHMENT in a cover letter. But I would expect more than 90% of applications I look at fade completely from my mind within a few days.
I think that it is rare. I have memories of less than a dozen specific applications out of the 1000s I’ve looked at over the years, and if you are self-aware enough to be reading this type of content then you probably won’t have an application bad enough for me to remember.
The other thing I would gently disagree with Ben West on is about how getting rejected can be substantially positive.[6] My rough perspective (not based on data, just based on impressions) is that it is very rare that getting rejected from a job application is a good thing. I imagine that there are some scenarios in which a strong candidate doesn’t get hired, and then the hiring manager refers the candidate to another position. That would be great, but I also think that it doesn’t happen very often. I don’t have data on “of candidates that reach the 3rd stage or further of a hiring process but are not hired, what percent have some specific positive result from the hiring process,” but my guess is that it is a low percentage.
Nonetheless, my impression is that the hiring rounds Ben runs are better than most, and the fact that he is willing to give feedback or make referrals for some rejected candidates already puts his hiring rounds in the top quartile or decile by my judgement.
To the extent that the general claim is “if you think you are a reasonable candidate, please apply,” I agree. You miss 100% of the shots you don’t take. If you are nervous about applying to EA organizations because you think a rejection could damage your reputation at that and other organizations, if your application is better than the bottom 5-10%, you have nothing to worry about. Have a few different people check your resume to make sure you haven’t got any low hanging fruit improvements, and go for it.
I’m thinking about real applications I’ve seen for each of these things that I mention. But they are all several years old, from before I became aware of EA.
I remember interviewing somebody in 2017 or so who was talking about his machine learning project, but then when I poked and prodded he had just cobbled together templates from a tutorial. And I’ve had this linguistically a few times, when a resume /cover letter claims a high level of competence in a language (bilingual, fluency, “practically native”), or something similarly high, yet the person struggles to converse in that language.
I’m 100% open to people taking part-time jobs if they want them, and I don’t mind someone “overqualified” doing a job. But if the job is in-person and requires you to speak the local language, you’ll have to at least convince me why you are a good fit.
His English was very good, far better than my French, and I assume that he spent many hours practicing and studying. But it was noticeably not American English, and that particular job required incumbents to be native English speakers.
There is the general idea getting rejected from MEDIOCRE_COMPANY enabled you to apply and get hired at GREAT_COMPANY. But that seems bland/obvious enough that I’ll set it aside.
A brief thought on ‘operations’ and how it is used in EA (a topic I find myself occasionally returning to).
It struck me that operations work and non-operations work (within the context of EA) maps very well onto the concept of staff and line functions. Line function are those that directly advances an organization’s core work, while staff functions are those that do not. Staff functions have advisory and support functions; they help the line functions. Staff functions are generally things like accounting, finance, public relations/communication, legal, and HR. Line functions are generally things like sales, marketing, production, and distribution. The details will vary depending on the nature of the organization, but I find this to be a somewhat useful framework for bridging concepts between EA and the broader world.
It also helps illustrate how little information is conveyed if I tell someone I work in operations. Imagine ‘translating’ that into non-EA verbiage as I work in a staff function. Unless the person I am talking to already has a very good understanding of how my organization works, they won’t know what I actually do.
I’m skimming through an academic paper[1] that I’d roughly describe as cross-cultural psychology about morality, and the stark difference between what kinds of behaviors Americans and China view as immoral[2] was surprising to me.
The American list has so much of what I could consider as causing harm to others, or malicious. The Chinese list has a lot of what I would consider as rude, crass, or ill-mannered. The differences here remind me of how I have occasionally pushed against the simplifying idea of words having easy equivalents between English and Chinese.[3]
There are, of course, issues with taking this too seriously: issues like spitting, cutting in line, or urinating publicly are much more salient issues in Chinese society than in American society. I’m also guessing that news stories about murders and thefts are more commonly seen in American media than in China’s domestic media. But overall I found it interesting, and a nice nudge/reminder against the simplifying idea that “we are all the same.”
Dranseika, V., Berniūnas, R., & Silius, V. (2018). Immorality and bu daode, unculturedness and bu wenming. Journal of Cultural Cognitive Science, 2, 71-84.
Note that there are issues here relating to meaning of the words in English and Chinese (immoral and bu daode) not being quite the same, which is a big part of the paper. In fact, the authors even claim that daode is not a reasonable translation for morality (a claim that I roughly agree with).
Similarly to morality, words like friend, cousin, to be open, or hair have different connotations and are used in different ways, and shouldn’t be viewed as exact translations, but rather as rough analogues. My naïve assumption is that the more closely related languages and culture are, the easier it is to translate concepts directly.
I wonder if the main difference is that the Americans and Lithuanians are responding more based on how bad the things seem to be, while the Chinese are responding more based on how common they are. Most of the stuff on the Chinese list also seems bad to me, just not nearly as bad as violence.
I’d think the article you’re referencing (link) basically argues against considering “daode” to mean “morality” and vice-versa.
The abstract: “In contemporary Western moral philosophy literature that discusses the Chinese ethical tradition, it is a commonplace practice to use the Chinese term daode 道德 as a technical translation of the English term moral. The present study provides some empirical evidence showing a discrepancy between the terms moral and daode.”
I think this is a really big and valuable finding, and generally agree with your thinking about language and morality differences, which are valuable research areas.
Anyone doing a deeper dive in the paper might want to think about whether Chinese survey participants are surprised to see relatively extreme and serious crimes like theft and violence and decide not to touch those concepts with a ten foot pole, and default to things that people frequently talk about or are frequently criticized by official news sources and propaganda.
Not that they’re super afraid of checking a box or anything; it’s just that it’s only a survey and they don’t know the details of what’s going on, and by default the tiny action is not worth something complicated happening or getting involved in something weird that they don’t understand. Or maybe it’s only that they think it’s acceptable to criticize things that everyone is obviously constantly criticizing, especially in an unfamiliar environment where everything is being recorded on paper permanently (relative to verbal conversations which are widely considered safer and more comfortable). It’s not that people are super paranoid, but, like, why risk it if some unfair and bizarre situation could theoretically happen (e.g. corruption-related, someone’s filling quotas), and conformity is absolutely guaranteed to be safe and cause no major or minor disturbances to your daily life?
I didn’t read the paper, and these musings should only be seriously considered as potentially helpful for people reading the paper. The paper seems to have run other forms of surveys that point towards similar conclusions.
From the study it looks like participants were given a prompt and asked to “free-list” instead checking boxes so it might be more indicative of what’s actually on people’s minds.
The immoral behaviors prompt being:
The aim of this study is to learn which actions or behaviors are considered immoral. Please provide a list of actions and behaviors which, in your opinion, are immoral. Please list at least five examples. There are no correct answers, we are just interested in your opinion.
My impression is that the differences between the American and Chinese lists (with the Lithuanian list somewhat in between) appear to be a function of differences in the degree of societal order (i.e., crime rates, free speech), cultural differences (i.e., extent of influence of: Anglo-American progressivism, purity norms of parts of Christianity, traditional cultures, and Confucianism), and demographics (i.e, topics like racism/discrimination that might arise in contexts that are ethnically diverse instead of homogenous).
Anyone can call themselves a part of the EA movement.
I sort of don’t agree with this idea, and I’m trying to figure out why. It is so different from a formal membership (like being a part of a professional association like PMI), in which you have a list of members and maybe a card or payment.
Here is my current perspective, which I’m not sure that I fully endorse: on the ‘ladder’ or being an EA (or of any other informal identity) you don’t have to be on the very top rung to be considered part of the group. You probably don’t even have to be on the top handful of rungs. Is halfway up the ladder enough? I’m not sure. But I do think that you need to be higher than the bottom rung or two. You can’t just read Doing Good Better and claim to be an EA without any additional action. Maybe you aren’t able to change your career due to family and life circumstances. Maybe you don’t earn very much money, and thus aren’t donating. I think I could still consider you an EA if you read a lot of the content and are somehow engaged/active. But there has to be something. You can’t just take one step up the ladder, then claim the identity and wander off.
My brain tends to jump to analogies, so I’ll use these to try and illustrate my examples:
If I visit your city and watch your local sports team for an hour, and then never watch them play again, I can’t really claim that I’m a fan of your team, can I? The fans are people who watch the matches regularly, who know something about the team, who really feel a sense of connection.
If I started lifting weights twice per week, and I started this week, is it too early for me to identify as a weight lifter? Nobody is going to police the use of the term “weight lifter,” but it feels premature. I’d feel better waiting until I have a regular habit of this activity before I publicly claim the identity.
If I go to yoga classes, which sometimes involve meditation, and I don’t do any other meditation outside of ~5 every now and then, can I call myself a meditator? Meh… If a person never intentionally or actively does meditation, and they just happen to do it when it is part of a yoga class, I would lean toward “no.”
To give more colour to this. During the hype of FTX Future Fund a lot of people called themselves EAs in order to try show value alignment to try get funding and it was painfully awkward and obvious. I think the feeling you’re naming is something like a fair-weather EA effect that dilutes trust within the community and the self-commitment of the label.
I interpreted it in a more literal way, like it’s just true that anyone can literally call themselves part of EA. That doesn’t mean other people consider it accurate.
I don’t think you can define who gets to identify as something, whether that’s gender or religion or group membership.
I’m a Christian and I think anyone should be able to call themselves call themselves a Christian, no issue with that at all no matter what they believe or whatever their level of commitment or how good or bad they are as a person.
Any alternative means that someone else has to make a judgement call based on objective or subjective criteria, which I’m not comfortable with.
TBH I doubt people will be clamouring for the EA title for status or popularity haha.
Yeah, I think you are right in implying there aren’t really any good alternatives. We could try having a formal list of members who all pay dues to a central organization, but (having put almost no thought into it) I assume that would come with it’s own set of problems. And I also feel comfortability with an implication that we should have someone else making a judgment based on externally visible criteria. I probably wouldn’t make the cut! (I hardly donate at all, and my career hasn’t been particularly impactful either)
Your example of Christianity makes me think about EA being a somewhat “action-based identity.” This is what I mean: I can verbally claim a particular identity (Christianity, or EA, or something else), and that matters to an extent. But what I do matters a lot also, especially if it is not congruent with the identity I claim. If I claim to be Christian but I fail to treat my fellow man with love and instead I am cruel, other people might (rightly) question how Christian I am. If I claim to be an EA but I behave in anti-EA ways (maybe I eat lots of meat, I fail to donate discretionary funds, I don’t work toward reducing suffering, etc.) I won’t have a lot of credibility as an EA.
I’m not sure how to parse the difference between a claimed identity and a demonstrated identity, but I’d guess that I could find some good thoughts about it if I were willing to spend several hours diving into some sociology literature about identity. I am curious about it, but I am 20-minutes curious, not 8-hours curious. Haha.
EDIT: after mulling over this for a few more minutes, I’ve made this VERY simplistic framework that roughly illustrated my current thinking. There is a lot of interpretation to be made regarding what behavior counts as in accordance with an EA identity or incongruent with an EA identity (eating meat? donating only 2%? not changing your career?). I’m not certain that I fully endorse this, but it gives me a starting point for thinking about it.
100% I really like this. You can claim any identity, but how much credibility you have with that identity depends on your “demonstrated identity”. There is risk though to the movement with this kind of all takers appoach. Before I would have thought that the odd regular person behaving badly while claiming to be EA wasn’t a big threat.
Then there was SBF and the sexual abuse scandals. These however were not so much an issue of fringe, non-committed people claiming to EA and tarnishing the movement, but mostly high profile central figures tarnishing the movement.
Reflecting on this, perhaps the actions of high profile or “core” people matter more than people on the edge, who might claim to be EA without serious committment.
I mean I think it’ll come in waves. As I said in my comment below when FTX Future Fund was up and regrants were abound I had many people around me fake the EA label with hilarious epistemic tripwires abound. Then when FTX collapsed those people were quiet. I think as AI Safety gets more prominent this will happen again in waves. I know a few humanities people pivoting to talking about AI Safety and AI bias people thinking of how to get grant money.
I’m very pleased to see that my writing on the EA Forum is now referenced in a job posting from Charity Entrepreneurship to explain to candidates what operations management is, described as “a great overview of Operations Management as a field.” This gives me some warm fuzzy feelings.
I just looked at [ANONYMOUS PERSON]’s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22.
Oh, well. As they say, comparison is the thief of joy. I’ll try to focus on doing the best I can with the hand I’m dealt.
Hi Joseph :) Based on what you’ve written I’m going to guess you have probably donate more than 99% of the world’s population to effective charities. So you’re probably crushing it!
Because my best estimate is that there are different steps toward different paths that would be better than trying to rewind life back to college age and start over. Like the famous Sylvia Plath quote about life branching like a fig tree, unchosen paths tend to wither away. I think that becoming a software developer wouldn’t be the best path for me at this point: cost of tuition, competitiveness of the job market for entry-level developers, age discrimination, etc.
Being a 22-year old fresh grad with a bachelor’s degree in computer science in 2010 is quite a different scenario than being a 40-year old who is newly self-taught through Free Code Camp in 202X. I predict that the former would tend to have a lot of good options (with wide variance, of course), while the latter would have fewer good options. If there was some sort of ‘guarantee’ regarding a good job offer or if a wealthy benefactor offered to cover tuition and cost of living while I learn then I would give training/education very serious consideration, but my understanding is that the 2010s were an abnormally good decade to work in tech, and there is now a glut of entry-level software developers.
Is talk about vegan diets being more healthy is mostly just confirmation bias and tribal thinking? A vegan diet can be very healthy or very unhealthy, and a non-vegan diet can also be very healthy or very unhealthy. The simplistic comparisons that I tend to see are contrasting vegans who put a lot of care and attention toward their food choices and the health consequences, versus people who aren’t really paying attention to what they eat (something like the standard American diet or some similar diet without much intentionality). I suppose in a statistics class we would talk about non representativeness.
Does the actual causal factor for health tend to be something more like cares about diet, or pays attention to what they eat, or socio-economic status? If we controlled for factors like these, would a vegan diet still be healthier than a non-vegan diet?
Is talk about vegan diets being more healthy is mostly just confirmation bias and tribal thinking?
I also think if often is. I find discussions for and against veganism surprisingly divisive and emotionally-charged (see e.g. r/AntiVegan and r/exvegans )
That said, my understanding is that many studies do control for things like socio-economic status, and they mostly find positive results for many diets (including, but not exclusively, plant-based ones). You can see some mentioned in a previous discussion here.
In general, I think it’s very reasonable when deciding whether something is “more healthy” to compare it to a “standard”. As an extreme example, I would expect a typical chocolate-based diet to be less healthy than the standard American diet. So, while it would be healthier than a cyanide-based diet, it would still be true and useful to say that a chocolate-based diet is unhealthy.
I’d also guess, without much evidence, that there’s a halo effect-like thing going on where if someone really care about averting animal suffering a vegan diet starts seeming more virtuous, which spills over into their assessment of its health benefits.
I’m wondering to what extent this serves as one small data point in support of the “too much hero worship/celebrity idolization in EA” hypothesis, and (if so) to what extent we should do something about it. I feel kind of conflicted, because in a very real sense reputation can be a result of hard work over time,[1] and it seems unreasonable to say that people shouldn’t benefit from that. But it also seems antithetical to the pursuit of truth, philosophy, and doing good to weigh to the messenger so heavily over the message.
I’m mulling this over, but it is a complex and interconnected enough issue that I doubt I will create any novel ideas with some casual thought.
Perhaps just changing the upvote buttons to something more like this content creates nurtures a discussion space that lines up with the principles of EA? I’m not confident that would change much.
Although not always. Sometimes a person is just in the right place at the right time. Big issues of genetic lottery and class matter. But in a very simplistic example, my highest ranking post on the EA forum is not one of the posts that I spent hours and hours thinking about and writing, but instead is one where I simply linked to a article about EA in the popular press and basically said “hey guys, look how cool this is!”
I’m not convinced by this example; in addition to expressing the view, Toby’s message is a speech act that serves to ostracize behaviour in a way that messages from random people do not. Since his comment achieves something the others do not it makes sense for people to treat it differently. This is similar to the way people get more excited when a judge agrees with them that they were wronged than when a random person does; it is not just because of the prestige of the judge, but because of the consequences of that agreement.
I’m glad that you mentioned this. This makes sense to me, and I think it weakens the idea of this particular circumstance as an example of “celebrity idolization.”
If the EA forum had little emoji reactions for this made me change my mind or this made me update a bit, I would use them here. 😁
I agree as to the upvotes but don’t find the explanation as convincing on the agreevotes. Maybe many people’s internal business process is to only consider whether to agreevote after having decided to upvote?
Yeah, and in general there’s an extremely high correlation between upvotes and agreevotes, perhaps higher than there should be. It’s also possible that some people don’t scroll to the bottom and read all the comments.
I definitely think you should expect a strong correlation between “number of agree-votes” and “number of approval-votes”, since those are both dependent on someone choosing to engage with a comment in the first place, my guess is this explains most of the correlation.
And then yeah, I still expect a pretty substantial remaining correlation.
I wish that it was possible for agree votes to be disabled on comments that aren’t making any claim or proposal. When I write a comment saying “thank you” or “this has given me a lot to think about” and people agree vote (or disagree vote!), it feels to odd: there isn’t even anything to agree or disagree with there!
If we interpret an up-vote as “I want to see more of this kind of thing”, is it so surprising that people want to see more such supportive statements from high-status people?
I would feel more worried if we had examples of e.g. the same argument being made by different people and the higher-status person getting rewarded more. Even then—perhaps we do really want to see more of high-status people reasoning well in public.
Generally, insofar as karma is a lever for rewarding behaviour, we probably care more about the behaviour of high-status people and so we should expect to see them getting more karma when they behave well, and also losing more when they behave badly (which I think we do!). Of course, if we want karma to be something other than an expression of what people want to see more of then it’s more problematic.
Toby’s average karma-per-comment definitely seems higher than average, but it isn’t so much higher than that of other (non-famous) quality posters I spot-checked as to suggest that there are a lot of people regularly upvoting his comments due to hero worship/celebrity idolization. I can’t get the usual karma leaderboard to load to more easily point to actual numbers as opposed to impressionistic ones.
I have this concept I’ve been calling “kayfabe inversion” where attempts to create a social reality that $P$ accidentally enforces $\not P$. The EA vibe of “minimize deference, always criticize your leaders” may just be, by inscrutable social pressures, increasing deference and hero worship and so on. Spurred by my housemate’s view of DoD and it’s ecosystem of contractors (because their dad has a long career in it) that perhaps the military’s explicit deference and hierarchies actually make it easier to do meaningful criticism of or disagreement with leaders, compared to the implicit hierarchies that emerge when you say that you want to minimize deference.
Something along these lines.
Perhaps this hypothesis is made clear by a close reading of tyranny of structurelessness, idk.
I’ve found explanation freeze to be a useful concept, but I haven’t found a definition or explanation of it on the EA Forum. So I thought I’d share a little description of explanation freeze here so that anyone searching the forum can find it, and so that I can easily link to it.
The short version is:
explanation freeze is the tendency to stick with the first explanation we come up with.
The slightly longer explanation is:
Situations that are impossible to conclusively explain can afflict us with explanation freeze, a condition in which we come up with just one possible explanation for an event that may have resulted from any of several different causes. Since we’re only considering one explanation at a time, this condition leads us to make the available evidence fit that explanation, and then overestimate how likely that explanation is. We can consider it as a type of cognitive bias or flaw, because it hinders us in our attempts to form an accurate view of reality.
I think that describing it as a combination of anchoring with confirmation bias seems roughly accurate. Maybe there might be an element of availability bias tossed in as well, since we latch on to the most readily available answer?
I’m not sure, but I think that Julia Galef has spoken about the concept of explanation freeze in interviews or on podcasts, so you might be able to dig up a more detailed and expansive explanation. But with some cursory Google Searching I was only able to find passing references to it, rather than more full explanations.
The 80,000 Hours team just published that “We now rank factory farming among the top problems in the world.” I wonder if this is a coincidence or if this planned to coincide with the EA Forum’s debate week? Combined with the current debate week’s votes on where an extra $100 should be spent, these seem like nice data points to show to anyone that claims EA doesn’t care about animals.
Every now and I then I see (or hear) people involved in EA refer to Moloch[1], as if this is a specific force that should be actively resisted and acted against. Genuine question: are people just using the term “Moloch” to refer to incentives [2] that nudge us to do bad things? Is there any reason why we should say “Moloch” instead of “incentives,” or is this merely a sort of in-group shibboleth? Am I being naïve or otherwise missing something here?
As well as the other influences on our motives from things external to ourselves, such as the culture and society that we grew up in, or how we earn respect and admiration from peers.
I see it as “incentives that nudge us to do bad things”, plus this incentive structure being something that naturally emerges or is hard to avoid (“the dictatorless dictatorship”).
I think “Moloch” gets this across a bit better than just “incentives” which could include things like bonuses which are deliberately set up by other people to encourage certain behaviour.
This is actually a pretty big issue. It was basically locked in to Meditations on Moloch because it was too good. The essay does a really good job explaining it, and giving examples that create the perspective you need to understand the broad applicability of the concept, but has too many words; “incentives” or even a single phrase (e.g. “race to the bottom”) would have fewer words, but it wouldn’t give the concept the explanation that it’s worth. Maybe there could be some kind of middle ground.
“I want to place [my pet cause], a neglected and underinvested cause, at the center of the Effective Altruism movement.”[1]
In my mind, this seems anti-scouty. Rather than finding what works and what is impactful, it is saying “I want my team to win.” Or perhaps the more charitable interpretation is that this person is talking about a rough hypothesis and I am interpreting it as a confident claim. Of course, there are many problems with drawing conclusions from small snippets of text on the internet, and if I meet this person and have a conversation I might feel very differently. But at this point it seems like a small red flag, demonstrating that there is a bit less cause-neutrality here (and a bit more being wedded to a particular issue) than I would like. But it is hard to argue with personal fit; maybe this person simply doesn’t feel motivated about lab grown meat or bednets or bio-risk reduction, and this is their maximum impact possibility.
I changed the exact words to that I won’t publicly embarrass or draw attention to the person who wrote this. But to be clear, this is not a thought experiment of mine, someone actually wrote this. EDIT: And the cause this individual promoted is more along the lines of helping homeless people in America or protect elephants or rescuing political dissidents: it would probably have a positive effect, but I doubt it would be competitive with saving a life (in expectation) for 4-6 thousand USD.
In my experience, many of those arguments are bad and not cause-neutral, though to me your take seems too negative—cause prioritization is ultimately a social enterprise and the community can easily vet and detect bad cases, and having proposals for new causes to vet seems quite important (i.e. the Popperian insight, individuals do not need to be unbiased, unbiasedness/intersubjectivity comes from open debate).
You make a good point. I probably allow myself to be too affected by claims (such as “saving the great apes should be at the center of effective altruism”), when in reality I should simply allow the community sieve to handle them.
This feels misplaced to me. Making an argument for some cause to be prioritised highly is in some sense one of the core activities of effective altruism. Of course, many people who’d like to centre their pet cause make poor arguments for its prioritisation, but in that case I think the quality of argument is the entire problem, not anything about the fact they’re trying to promote a cause. “I want effective altruists to highly prioritise something that they currently don’t” is in some sense how all our existing priorities got to where they are. I don’t think we should treat this kind of thing as suspicious by nature (perhaps even the opposite).
It seems to me that one should draw a distinction between, “I see this cause as offering good value for money, and here is my reasoning why”, and “I have this cause that I like and I hope I can get EA to fund it”. Sometimes the latter is masquerading as the former, using questionable reasoning.
Some examples that seem like they might be in the latter category to me:
In any case though, I’m not sure it makes a difference in terms of the right way to respond. If the reasoning is suspect, or the claims of evidence are missing, we can assume good faith and respond with questions like, “why did you choose this program”, “why did you conduct the analysis in this way”, or “have you thought about these potentially offsetting considerations”. In the examples above, the original posters generally haven’t engaged with these kind of questions.
If we end up with people coming to EA looking for resources for ineffective causes, and then sealioning over the reasoning, I guess that could be a problem, but I haven’t seen that here much, and I doubt that sort of behavior would ultimately be rewarded in any way.
The third one seems at least generally fine to me—clearly the poster believes in their theory of change and isn’t unbiased, but that’s generally true of posts by organizations seeking funding. I don’t know if the poster has made a (metaphorically) better bednet or not, but thought the Forum was enhanced by having the post here.
The other two are posts from new users who appear to have no clear demonstrated connection to EA at all. The occasional donation pitch or advice request from a charity that doesn’t line up with EA very well at all is a small price to pay for an open Forum. The karma system dealt with preventing diversion of the Forum from its purposes. A few kind people offered some advice. I don’t see any reason for concern there.
those posts all go out of their way to say they’re new to EA. I feel pretty differently about someone with an existing cause discovering EA and trying to fundraise vs someone who integrated EA principles[1] and found a new cause they think is important.
I don’t love the phrase “EA principles”, EA gets some stuff critically wrong and other subcultures get some stuff right. But it will do for these purposes.
I think that to a certain extent that is right, but this context was less along the lines of “here is a cause that is going to be highly impactful” and more along the lines of “here is a cause that I care about.” Less “mental health coaching via an app can be cost effective” and more like “let’s protect elephants.”
But I do think that in a broad sense you are correct: proposing new interventions, new cause areas, etc., is how the overall community progresses.
I think a lot of the EA community shares your attitude regarding exuberant people looking to advance different cause areas or interventions, which actually concerns me. I am somewhat encouraged by the disagreement with you regarding your comment that makes this disposition more explicit. Currently, I think that EA, in terms of extension of resources, has much more solicitude for thoughts within or adjacent to recognized areas. Furthermore, an ability to fluently convey ones ideas in EA terms or with an EA attitude is important.
Expanding on jackva re the Popperian insight, having individuals passionately explore new areas to exploit is critical to the EA project and I am a bit concerned that EA is often disinterested in exploring in directions where a proponent lacks some of the EA’s usual trappings and/or lacks status signals. I would be inclined to be supportive of passion and exuberance in the presentation of ideas where this is natural to the proponent.
I suspect you are right that many of us (myself included) focus more than we ought to on how similar an idea sounds in relation to ideas we are already supporting. I suppose maybe a cruxy aspect of this is how much effort/time/energy we should spend considering claims that seem unreasonable at first glance?
If someone honestly told me that protecting elephants (as an example) should be EA’s main cause area, the two things that go through my heard first are that either that this person doesn’t understand some pretty basic EA concepts[1], or that there is something really important to their argument that I am completely ignorant of.
But depending on how extreme a view it is, I also wonder about their motives. Which is more-or-less what led me to viewing the claim as anti-scouty. If John Doe has been working for elephant protecting (sorry to pick on elephants) for many years and now claims that elephant protection should be a core EA cause area, I’m automatically asking if John is A) trying to get funding for elephant protection or B) trying to figure out what does the most good and to do that. While neither of those are villainous motives, the second strikes me as a bit more intellectually honest. But this is a fuzzy thing, and I don’t have good data to point to.
I also suspect that I myself may have an over-sensitive “bullshit detector” (for lack of a more polite term), so that I end up getting false positives sometimes.
I agree that advocacy inspired by other-than-EA frameworks is a concern, I just think that the EA community is already quite inclined to express skepticism for new ideas and possible interventions. So, the worry that someone with high degrees of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising cases may be ignored because they have an unfortunate messenger.
the worry that someone with high degrees of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising cases may be ignored because they have an unfortunate messenger
I think you’ve phrased that very well. As much as I may want to find the people who are “hijacking” EA resources, the benefit of that is probably outweighed by how it disincentivized people to try new things. Thanks for commenting back and forth with me on this. I’ll try to jump the gun a bit less from now on when it comes to gut feeling evaluations of new causes.
I think it’s important to consider that the other person may be coming from a very different ethical framework than you are. I wouldn’t likely support any of the examples in your footnote, but one can imagine an ethical framework in which the balance looks closer than it does to me. To be clear, I highly value saving the lives of kids under five as the standard EA lifesaving projects do. But: I can’t objectively show that a framework that assigns little to no value to averting death (e.g., because the dead do not suffer) is a bad one. And such a significant difference in values could be behind some statements of the sort you describe.
This is in relation to the Keep EA high-trust idea, but it seemed tangential enough and butterfly idea-ish that it didn’t make sense to share this as a comment on that post.
Rough thoughts: focus a bit less on people and a bit more on systems. Some failures are ‘bad actors,’ but my rough impression is that far more often bad things happen because either:
the system/structures/incentives nudge people toward bad behavior, or
the system/structures/incentives allow bad behavior
I think it is great to be able to trust people, but I also want institutions designed in such a way that it is okay if someone is in the 70th percentile of trustworthiness rather than the 95th percentile of trustworthiness.
Low confidence guess: small failures often occur not because people are malicious or selfish, but because they aren’t aware of better ways to do things. An employee that isn’t aware of EEO in the United States is more likely to make costly mistakes. A manager who has not received good training on how to be a manager is going to fumble more often.
I don’t want to imply that designing systems well is easy, not that I am somehow an expert in it. But my (very) rough impression is that in EA we trust individuals a lot, and we don’t spend as much time thinking about organizational design.
What are the norms on the EA Forum about ChatGPT-generated content?
If I see a forum post that looks like it was generated by a LLM generative AI tool, it is rude to write a comment asking “Was this post written by generative AI?” I’m not sure what the community’s expectations are, and I want to be cognizant of not assuming my own norms/preferences are the appropriate ones.
It seems to me that the proof is in the pudding. The content can be evaluated on what it brings to the discourse and the tools used in producing it are only relevant insofar as these tools result in undesirable content. Rather than questioning whether the post was written by generative AI, I would give feedback as to what aspects of the content you are criticizing.
While I am not aware of any norms or consensus, I would be okay with that. My own view is that use of generative AI should be proactively disclosed where the AI could fairly be considered the primary author of the post/comment. I am unsure how much support this view has, though.
IMO, if the content is good we shouldn’t bring it up. If an author is producing bad content more than once a month and it seems generated by LLMs they should be warned then banned if it continues.
I suspect any comment threads about whether content is LLM-generated aren’t worth reading and thus aren’t worthwhile writing.
Whether or not I personally act in the morally best way is irrelevant to the truth of the moral principles we’ve been discussing. Even if I’m hypocritical as you claim, that wouldn’t make it okay for you to keep buying meat.
This quote made me think of the various bad behaviors that we’ve seen within EA over the past few years. Although this quote is from a book about vegetarianism, the words “keep buying meat” could easily be substituted for some other behavior.
While publicity and marketing and optics all probably oppose this to a certain extent, I take some solace in the fact that some people behaving poorly doesn’t actually diminish the validity of the core principles. I suppose the pithy version would be something like “[PERSON] did [BAD THING]? Well, I’m going to keep buying bednets.”
Decoding the Gurus is a podcast in which an anthropologist and a psychologist critique popular guru-like figures (Jordan Peterson, Nassim N. Taleb, Brené Brown, Imbram X. Kendi, Sam Harris, etc.). I’ve listened to two or three previous episodes, and my general impression is that the hosts are too rambly/joking/jovial, and that the interpretations are harsh but fair. I find the description of their episode on Nassim N. Taleb to be fairly representative:
Taleb is a smart guy and quite fun to read and listen to. But he’s also an infinite singularity of arrogance and hyperbole. Matt and Chris can’t help but notice how convenient this pose is, when confronted with difficult-to-handle rebuttals.
Taleb is a fun mixed bag of solid and dubious claims. But it’s worth thinking about the degree to which those solid ideas were already well… solid. Many seem to have been known for decades even by all the ‘morons, frauds and assholes’ that Taleb hates.
To what degree does Taleb’s reputation rest on hyperbole and intuitive-sounding hot-takes?
A few weeks ago they released an episode about Eliezer Yudkowksy titled Eliezer Yudkowksy: AI is going to kill us all. I’m only partway through listening to it, but so far they have reasonable but not rock-solid critiques (such as noting how it is a red flag for someone to list off a variety of fields that they claim expertise in, or highlighting the behavior that lines up with a Cassandra complex).
The difficulty I have in issues like this parallels the difficulty I perceive in evaluating any other “end of the world” claim: the fact that many other individuals have been wrong about each of their own “end of the world” claims doesn’t really demonstrate that this one is wrong. It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn’t prove falsehood.
You’re right, but it does feel like some pretty strong induction, though not just to not accepting the claim at face value, but for demanding some extraordinary evidence. I’m speaking from the p.o.v. of a person ignorant of the topic, and just making the inference from the perennially recurring apocalyptic discourses.
It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn’t prove falsehood.
True, but you only have a finite amount of time to spend investigating claims of apocalypses. If you do a deep dive into the arguments of one of the main proponents of a theory, and find that it relies on dubious reasoning and poor science (like the “mix proteins to make diamondoid bacteria” scenario), then dismissal is a fairly understandable response.
If AI safety wants to avoid this sort of thing from happening, they should pick better arguments and better spokespeople, and be more willing to call out bad reasoning when it happens.
I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don’t like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if the search for “book club” or “reading group.” Details, including links for joining each of the book clubs, are in this Google Doc.
I want to emphasize that this isn’t funded through an organization, I’m not trying to get emails to put on a newsletter, and I’m not selling an online course or push people to buy a product. This is literally just online book clubs: we vote on books and have video chats to talk about books.
Here are some upcoming discussions, with links for the events:
September 28, The Emotional Lives of Animals: A Leading Scientist Explores Animal Joy, Sorrow, and Empathy—and Why They Matter. https://lu.ma/ng492gwf
If there is interest, I’d be open to organizing/coordinating some kind of a “core EA books” reading group, with looks like What We Owe the Future, Scout Mindset, Doing Good Better, Animal Liberation, Poor Economics, etc.
Some people involved in effective altruism have really great names for their blogs: Ollie Base has Base Rates, Diontology from Dion Tan, and Ben West has Benthamite. It is really cool how people are able to take their names and with some slight adjustments make them into cool references. If I was the blogging type and my surname wasn’t something so uncommon/unique, I would take a page from their book.
Oh, that’s not bad! Maybe I’ll use that someday. 🤣 Unfortunately, I think that will encourage people to mispronounce my surname; it isn’t pronounced less like “lemon” and more in a way that rhymes with “the mean” or “the keen.”
I suspect that the biggest altruistic counterfactual impact I’ve had in my life was merely because I was in the right place at the right time: a moderately heavy cabinet/shelf thing was tipping over and about to fall on a little kid (I don’t think it would have killed him. He probably would have had some broken bones, lots of bruising, and a concussion). I simply happened to be standing close enough to react.
It wasn’t as a result of any special skillset I had developed, nor of any well thought-out theory of change; it was just happenstance. Realistically, I can’t really take credit for it any more than I can take credit for being born in the time and place that I was. It makes me think about how we plan for things in expectation, but there is such a massive amount of random ‘noise’ in the world. This isn’t exactly epistemic humility or moral cluelessness, but it seems vaguely related to those.
Which brings me to a point the PayPal Mafia member Keith Rabois raised early in this book: he told me that it’s important to hire people who agree with your “first principles”—for example, whether to focus on growth or profitability and, more broadly, the company’s mission and how to pursue it. I’d agree. If your mission is to encourage people to share more online, you shouldn’t hire someone who believes people don’t really want to make their private lives public, or you’ll spend a lot of time arguing, time you don’t have to waste when you’re trying to build a company. But those who believe in your mission and how to execute it aren’t limited to people who look and act like you. To combat this tendency, you must first be explicit about what your first principles are. And then, for all of the reasons we discussed, go out of your way to find people who agree with your first principles and who don’t look like you. Because if you don’t build a diverse team when you start, as you scale, it will be incomparably harder to do so.
The parallels seem pretty obvious to me, and here is my altered version:
If your mission is to improve the long-term future, you shouldn’t hire someone who believes that most of the value is in the next 0 to 50 years. If your mission is to reduce animal suffering, you shouldn’t hire someone who hates animals. But those who believe in your mission and how to execute it aren’t limited to people who look and act like you.
If your mission is to reduce animal suffering, should you hire someone that wants to do that but is simply less intense about it? A person who spends 5% of their free time thinking about this when you spend 60% of your free time thinking about this? I do think that mission alignment is important for some roles, but it is hard to specify without really understanding the work.[1]
As an example of “understanding the work,” my superficial guess is that someone planning an EAG event probably doesn’t need to know all about EA in order to book conference rooms, arrange catering, set up sound & lighting, etc. But I don’t know, because I haven’t done that job or managed that job or closely observed that job. Maybe lot of EA context really is necessary in order to make lots of little decisions which otherwise would make the event a noticeably worse experience for the attendees. Indeed, pretty much the only thing that I am confident in in relation to this is that we can’t make strong claims about a role unless we really understand the work.
I didn’t learn about Stanislav Petrov until I saw announcements about Petrov Day a few years ago on the EA Forum. My initial thought was “what is so special about Stanislav Petrov? Why not celebrate Vasily Arkhipov?”
I had known about Vasily Arkhipovfor years, but the reality is that I don’t think one of them is more worthy of respect or idolization than the other. My point here is more about something like founder effects, path dependency, and cultural norms. You see, at some point someone in EA (I’m guessing) arbitrarily decided that Stanislav Petrov was more worth knowing and celebrating than Vasily Arkhipov, and now knowledge of Stanislav Petrovis widespread (within this very narrow community). But that seems pretty arbitrary. There are other things like this, right? Things that people hold dear or believe that are little more than cultural norms, passed on because “that is the way we do things here.”
I think a lot about culture and norms, probably as a result of studying other cultures and then living in other countries (non-anglophone countries) for most of my adult life. I’m wondering what other things exist in EA that are like Stanislav Petrov: things that we do for no good reason other than that other people do them.
The origin of Petrov Day, as an idea for an actual holiday, is this post by Eliezer Yudkowsky. Arkhipov got a shout-out in the comments almost immediately, but “Petrov Day” was the post title, and it’s one syllable shorter.
There are many other things like Petrov Day, in this and every culture — arbitrary decisions that became tradition.
But of course, “started for no good reason” doesn’t have to mean “continued for no good reason”. Norms that survive tend to survive because people find them valuable. And there are plenty of things that used to be EA/rationalist norms that are now much less influential than they were, or even mostly forgotten. The first examples that come to mind for me:
Early EA groups sometimes did “live below the line” events where participants would try to live on a dollar a day (or some other small amount) for a time. This didn’t last long, because there were a bunch of problems with the idea and its implementation, and the whole thing faded out of EA pretty quickly (though it still exists elsewhere).
The Giving What We Can pledge used to be a central focus of student EA groups; it was thought to be really important and valuable to get your members to sign up. Over time, people realized this led students to feel pressure to make a lifelong decision too early on, some of whom regretted the decision later. The pledge gradually attained an (IMO) healthier status — a cool part of EA that lots of people are happy to take part in, but not an “EA default” that people implicitly expect you to do.
I would be happy to celebrate an Arkhipov Day. Is there anything that could distinguish the rituals and themes of the day? Arkhipov was in a submarine and had to disagree with two other officers IIRC? (Also when is it?)
Haha, I don’t think we need another holiday for Soviet military men who prevented what could have been WWIII. More so, I think we should ask ourselves (often) “Why do we do things the way we do, and should we do things that way?”
As Aaron notes, the “Petrov Day” tradition started with a post by Yudkowsky. It is indeed somewhat strange that Petrov was singled out like this, but I guess the thought was that we want to designate one day of the year as the “do not destroy the world day”, and “Petrov Day” was as good a name for it as any.
Note that this doesn’t seem representative of the degree of appreciation for Petrov vs. Arkhipov within the EA community. For example, the Future of Humanity Institute has both a Petrov Room and an Arkhipov Room (a fact that causes many people to mix them up), and the Future of Life Award was given both to Arkhipov (in 2017) and to Petrov (in 2018).
I think Arkhipov’s actions are in a sense perhaps even more consequential than Petrov’s, because it was truly by chance that he was present in that particular nuclear submarine, rather than in any of the other subs from the flotilla. This fact justifies the statement that, if history had repeated itself, the decision to launch a nuclear torpedo would likely not have been vetoed. The counterfactual for Petrov is not so clear.
Some jobs are proactive: you have to be the one doing the calls and you have to make the work yourself and no matter how much you do you’re always expected to carry on making more, you’re never finished. Some jobs are reactive: The work comes in, you do it, then you wait for more work and repeat.
Proactive roles are things like business development/sales, writing a book, marketing and advertising, and research. You can almost always do more, and there isn’t really an end point unless you want to impose an arbitrary end point: I’ll stop when I finish writing this chapter, or I’ll take a break after this research paper. I imagine[1] that a type of stress present in sales and business development is that you are always pushing for more, like the difference between someone who wants to accumulate $950,000 dollars for retirement as opposed to someone who simply wants lots of dollars for retirement.
Reactive roles are things like running payroll, being the cook in a restaurant (or being the waiter in a restaurant), legal counsel, office manager, teacher. There is an ‘inflow’ of tasks or work or customers, and you respond to that inflow. But if there are times when there isn’t any inflow, then you just wait for work to arrive[2]. After you finish running payroll for this pay period, it isn’t like you can take initiative to send the next round of salary payments ahead of schedule. Or imagine being the cook in a restaurant, and there is a 30-minute period when there are no new order placed. Once everything is clean and you are ready for orders to come in, what can you do? You prep what you can, and then you just kind of… wait for more work tasks to arrive.
It isn’t always so simplistic of course. Maybe the waiter has some other tasks on ‘standby’ for when there are no customer’s coming in. Maybe the payroll person has some lower priority tasks (back burner tasks) that are now the highest priority available task to do when there isn’t any payroll work to do. Often there are ways that you can do something other than sit around an twiddle your thumbs, and this is also a great way to get noticed and get positive attention from managers. But it seems to be a very slippery slope into busy work with a lot of low-prestige jobs: how often does that supply closet really need to be reorganized? How often does this glass door need to be cleaned? How many months in advance can you realistically really make lesson plans for the students?
I just had a call with a young EA from Oyo State in Nigeria (we were connected through the excellent EA Anywhere), and it was a great reminder of how little I know regarding malaria (and public health in developing countries more generally). In a very simplistic sense: are bednets actually the most cost effective way to fight against malaria?
I’ve read a variety of books on the development economics canon, I’m a big fan of the use of randomized control trials in social science, I remember worm wars and microfinance not being so amazing as people thought and critiques of Tom’s Shoes. I was thrilled when I first read Poor Economics, and it opened my eyes to a whole new world. But I’m a dabbler, not an expert. I haven’t done fieldwork; I’ve merely read popular books. I don’t have advanced coursework in this area.
It was nice to be reminded of how little I actually know, and of how superficial general interest in a field is not the same as detailed knowledge. If I worked professionally in development economics I would probably be hyper aware of the gaps in my knowledge. But as a person who merely dabbles in development as an interest, I’m not often confronted with the areas about which I am completely ignorant, and thus there is something vaguely like a Dunning-Kruger effect. I really enjoyed hearing perspectives from someone that knows a lot more than I do.
If anybody wants to read and discuss books on inclusion, diversity, and similar topics, please let me know. This is a topic that I am interested in, and a topic that I want to learn more about. My main interest is on the angle/aspect of diversity in organizations (such as corporations, non-profits, etc.), rather than broadly society-wide issues (although I suspect they cannot be fully disentangled).
I have a list of books I intend to read on DEI topics (I’ve also listed them at the bottom of this quick take in case anybody can’t access my shared Notion page), but I think I would gain more from the books if I am able to discuss the contents with other people and bounce around ideas. I think that I tend to agree too readily with what I read, and having other people would help me be a more critical consumer of this information. Most of these books are readily available through public libraries (and services like Libby/Overdrive) in the USA or through online book shops.
I’m not planning on formally starting another book club (although I’m open to the possibility if there are a handful of people that express interest), but I would really enjoy having a call/chat once every several weeks. I’m not expecting this to evolve into some sort of a working group or a diversity council, but I’d be open that possibility in time.
- - - - -
The Inclusion Dividend: Why Investing in Diversity & Inclusion Pays Off
We Can’t Talk about That at Work!: How to Talk about Race, Religion, Politics, and Other Polarizing Topics
Inclusion on Purpose: An Intersectional Approach to Creating a Culture of Belonging at Work
Inclusify: The Power of Uniqueness and Belonging to Build Innovative Teams
The 4 Stages of Psychological Safety
How to Be an Ally: Actions You Can Take for a Stronger, Happier Workplace
Race Rules: What Your Black Friend Won’t Tell You
Say the Right Thing: How to Talk About Identity, Diversity, and Justice
OtherWise: The Wisdom You Need to Succeed in a Diverse and Divisive World
Inclusion Revolution: The Essential Guide to Dismantling Racial Inequity in the Workplace
Inclusive Growth: Future-proof your business by creating a diverse workplace
Leading Global Diversity, Equity, and Inclusion: A Guide for Systemic Change in Multinational Organizations
Managing Diversity: Toward a Globally Inclusive Workplace
A Queer History of the United States
The Making of Asian America: A History
White Trash: The 400-Year Untold History of Class in America
No Right to Be Idle: The Invention of Disability
History from the Bottom Up and the Inside Out: Ethnicity, Race, and Identity in Working-Class History
I’ve been reading about performance management, and a section of the textbook I’m reading focuses on The Nature of the Performance Distribution. It reminded me a little of Max Daniel’s and Ben Todd’s How much does performance differ between people?, so I thought I’d share it here for anyone who is interested.
The focus is less on true outputs and more on evaluated performance within an organization. It is a fairly short and light introduction, but I’ve put the content here if you are interested.
A theme that jumps out at me is situational specificity, as it seems some scenarios follow a normal distribution, some scenarios are heavy tailed, and some probably have a strict upper limit. This echoes the emphasis that an anonymous commented shared on the Max’s and Ben ’s post:
My point is more “context matters,” even if you’re talking about a specific skill like programming, and that the contexts that generated the examples in this post may be meaningfully different from the contexts that EA organizations are working in.
I’m roughly imaging an organization in which there is a floor to performance (maybe people beneath a certain performance level aren’t hired), and there is some type of barrier that creates a ceiling to performance (maybe people who perform beyond a certain level would rather go start their own consultancy rather than work for this organization, or they get promoted to a different department/team). But the floor or the ceiling could be more more naturally related to the nature of the work as well, as in the scenario of an assembly worker who can’t go faster than the speed of the assembly line.
This idea of situational specificity is paralleled in hiring/personnel selection, in which a particular assessment might be highly predictive of performance in one context, and much less so in a different context. This is the reason why we shouldn’t simply use GMA and conscientiousness to evaluate every single employee at every single organization.
We tell the story in our class about the time our CIO Craig Hergenroether’s daughter was working in another organization, and she said, “We’re taking our IT team to happy hour tonight because we got this big e-mail virus, but they did a great job cleaning it up.”
Our CIO thought, “We never got the virus. We put all the disciplines and practices in place to ensure that we never got it. Shouldn’t we celebrate that?”
What we choose to hold up and celebrate gets emulated. Therefore it is important to consider how those decisions impact the culture. Instead of firefighting behaviors, we recognize and celebrate sustained excellence: people who consistently distinguish themselves through their actions. We celebrate people who do their jobs very well every day with little drama. Craig, the CIO, took his team out to happy hour and said, “Congratulations, we did not get the e-mail virus that took out most of the companies in St. Louis and Tampa Bay.”
Overall, the Everybody Matters could is the kind of book that could have been an article. I wouldn’t recommend spending the time to read it if you are already superficially familiar with the fact that an organization can choose to treat people well (although maybe that would be revelatory for some people). It was on my to-read list due to it’s mention in the TED Talk Why good leaders make you feel safe.
It is sort of curious/funny that posts stating “here is some racism” get lots of attention, and posts stating “let’s take the time to learn about inclusivity, diversity, and discrimination”[1] don’t get much attention. I suppose it is just a sort of an unconscious bias: some topics are incendiary and controversial and are more appealing/exciting to engage with, while some topics are more hufflepuffy and do the work and aren’t so exciting. Is it vaguely analogous to a polar bear stranded on an ice floe getting lots of clicks, but the randomized control trial for giving schoolchildren preventative medicine doesn’t?
I think that is probably at least some of it. Other candidate explanations might include the following. (I’m going to use phrases like attitude toward racial issues in an awkward attempt to cover the continuum from overt racism against people of color to DEI superstar status; this is not meant to imply that the more DEI a viewpoint is, the better.)
From the perspective of a Forum commenter, the response to many “here is some racism” things may be more tractable / have a clearer theory of change and impact (which may involve education, norm reinforcement, placing social pressure on people, etc. depending on the poster) than many “let’s learn about DEI” type posts.
In particular, one could think that attitudes about race in the lower half of the progressiveness distribution are more important toward “scoring” the community’s overall attitude toward racial issues. For example, the percentage of people who espouse views on race that are problematic is probably an important metric for the extent to which the community is unwelcoming to people of color.
In contrast, if a person is at the 75th percentile of progressiveness on racial issues already, moving them to the 95th percentile may not accomplish nearly as much as moving the 5th percentile person to the 25th. And it’s likely that the bulk of people who are interested in engaging with “let’s learn about DEI” posts in a supportive manner are at least already above the median here.
Also, a paucity of DEI often implicates structural barriers that are significantly harder to address (especially at the individual-commenter level) than individual/organizational bad behavior.
People tend to react more strongly to losses from an established baseline (e.g., losing $100) than equal-magnitude gains from that baseline (e.g., winning $100).
One could think there are significantly diminishing returns at play. In particular, one might identify a “good enough” point beyond which additional improvements are likely to have relatively little benefit to EA. For instance, this is likely true from a PR/optics standpoint; we’re unlikely to get positive press coverage even if we reach an A+ score on DEI. So there’s not much delta between a B and an A+ through the PR/optics lens. And one might think EA is currently at the “good enough” point (to be clear, this is not my personal view).
Some people could associate the “let’s learn about DEI” type posts—rightly or wrongly—with ideas like affirmative action (positive discrimination) that they find contrary to their values. In contrast, posts focused on bad behavior may be less likely to trigger this association.
Some vocal commenters (and strong-downvoters) are so opposed to DEI-like ideas that commenters may not feel like putting on the emotional armor to engage on pro-DEI-like posts. They feel more social support to comment on the “here is some racism” posts, and they feel that overt racism is stigmatized enough to create some social pressure not to throw flaming arrows at them in response.
All of these ideas are speculative, and I’d be curious about the extent to which any of them resonate / don’t resonate with people.
Ben West recently mentioned that he would be excited about a common application. It got me thinking a little about it. I don’t have the technical/design skills to create such a system, but I want to let my mind wander a little bit on the topic. This is just musings and ‘thinking out out,’ so don’t take any of this too seriously.
What would the benefits be for some type of common application? For the applicant: send an application to a wider variety of organizations with less effort. For the organization: get a wider variety of applicants.
Why not just have the post openings posted to LinkedIn and allow candidates to use the Easy Apply function? Well, that would probably result in lots of low quality applications. Maybe include a few question to serve as a simple filter? Perhaps a question to reveal how familiar the candidate is with the ideas and principles of EA? Lots of low quality applications aren’t really an issue if you have an easy way to filter them out. As a simplistic example, if I am hiring for a job that requires fluent Spanish, and a dropdown prompt in the job application asks candidates to evaluate their Spanish, it is pretty easy to filter out people that selected “I don’t speak any Spanish” or “I speak a little Spanish, but not much.”
But the benefit of Easy Apply (from the candidate’s perspective) is the ease. John Doe candidate doesn’t have to fill in a dozen different text boxes with information that is already on his resume. And that ease can be gained in an organization’s own application form. An application form literally can be as simple as prompts for name, email address, and resume. That might be the most minimalistic that an application form could be while still being functional. And there are plenty of organizations that have these types of applications: companies that use Lever or Ashby often have very simple and easy job application forms (example 1, example 2).
Conversely, the more than organizations prompt candidates to explain “Why do you want to work for us” or “tell us about your most impressive accomplishment” the more burdensome it is for candidates. Of course, maybe making it burdensome for candidates is intentional, and the organization believes that this will lead to higher quality candidates. There are some things that you can’t really get information about by prompting candidates to select an item from a list.
I’m been thinking about small and informal ways to build empathy[1]. I don’t have big or complex thoughts on this (and thus I’m sharing rough ideas as a quick take rather than as a full post). This is a tentative and haphazard musing/exploration, rather than a rigorous argument.
Read about people who have various hardships or suffering. I think that this is one of the benefits of reading fiction: it helps you more realistically understand (on an emotional level) the lives of other people. Not all fiction is created equal, and you probably won’t won’t develop the same level of empathy reading about vampire romance as you will reading a book about a family struggling to survive a civil war[2]. But good literature can make you cry and leave you shaken for how much you feel. The other approach here is to read things that are not fiction; read real stories. Autobiographies can be one option, but if you don’t want to commit to something so large, try exploring online forums where people tell their own stories of the hard and difficult things they have gone through. Browsing the top posts on the Cancer subreddit might bring tears to your eyes. I suggest that you do not do this during the workday: If you can read about these experiences (a person watching their spouse suffer and die while being helpless to do anything about it, or a parent knowing he won’t live to see his child’s tenth birthday) without crying and losing composure, then you are made of sterner stuff than I am[3]. I remember crying when I read Zhenga Cuomao’s writing about her husband’s “trial” and imprisonment: “How is it this hard to be a good person?” I wanted so desperately for the world to be a just place, and the world so obviously was not. So if you want to build empathy this way, the action might be something like occasionally seek out places where you can hear of actual hardship that real people undergo.
Walk a mile in someone else’s shoes. It shouldn’t be a surprise to anyone that experiencing hardship can build empathy for hardship. It is one of the common tropes of storytelling. But (taking physical disability as an example) it is very different to think “it must be hard to live life with such mobility limitations” and to actually live for a few days being physically unable to drink from a glass of water or raise your arms above your head. The trouble with walking in someone’s shows is that it is normally not feasible. You can understand what it is like to be an immigrant in a foreign country, but only if you are willing to commit multiple years of your life to actually doing that. There are roleplaying exercises people can do, but it is hard to get a full picture. There isn’t any easy way for a man to have the experience that a woman has in American society[4], nor it is easy for a person without any mental illnesses to understand what it is like to live with bipolar or schizophrenia. Nonetheless, some people seriously commit to these efforts. Seneca recommends regularly spending time destitute and depriving yourself of comfortable clothing a good quality food[5]. In 1959 John Howard Griffin (a white man) chemically darkened his skin to appear black. Barbara Ehrenreich wrote a book about her experience spending months trying to make it as a low-wage, unskilled worker (a project that has been duplicated by others). And even she was aware that she could always stop ‘pretending’ if she had a real emergency. The National Center for Civil and Human Rights in Atlanta has an experience/exhibit in which you sit on a bar stool and put on headphones to immerse yourself in a simulated experience of being black in a diner in the deep south.
Why bother? Well, I have a vague and not well-reasoned intuition that being more empathetic makes you a better person. Will it actually increase your impact? I have no idea. Maybe you would have higher impact and you would make the world a better place if you just kept your head down and worked on your project.
A polished article would have some sort of conclusion or a nice takeaway, but for this short form I’ll just end it here.
I’m using “empathy” in a pretty sloppy sense. Something like “caring for other people who are not related/connected to you” or “developing something of an emotional understanding of the suffering people go through, rather than merely an intellectual one.” I’m thinking about this in a very suffering-focused sense.
Half of a Yellow Sun is once of the books that I think made me a little bit more empathetic. It is a book about the Nigerian Civil war, something that I assume most of fellow North Americans know almost nothing about. I certainly knew nothing about it.
And to echo writings from many other people in and around the EA community: if you think that is bad, remember that there is a similar level of suffering happening every day for millions of people.
Although you can read accounts from transgender people. The rough summary would be something like “I am stunned at how different people treat me when they see me as a man/woman.”
Note that the Stoic interpretation here isn’t to build empathy, but rather to make yourself unafraid of hardship. And the trouble with using these for building empathy is that you aren’t really in the situation; you can stop pretending whenever you like. For anyone who is curious, here is the relevant excerpt from The Daily Stoic that turned me on to this idea:
What if you spent one day a month experiencing the effects of poverty, hunger, complete isolation, or any other thing you might fear? After the initial culture shock, it would start to feel normal and no longer quite so scary.
There are plenty of misfortunes one can practice, plenty of problems one can solve in advance. Pretend your hot water has been turned off. Pretend your wallet has been stolen. Pretend your cushy mattress was far away and that you have to sleep on the floor, or that your car was repossessed and you have to walk everywhere. Pretend you lost your job and need to find a new one. Again, don’t just think about these things, but live them. And do it now, while things are good. As Seneca reminds us: “It is precisely in times of immunity from care that the soul should toughen itself beforehand for occasions of greater stress. . . . If you would not have a man flinch when the crisis comes, train him before it comes.”
(caution: grammatical pedantry, and ridiculously low-stakes musings. possibly the most mundane and unexciting critique of EA org ever)
The name of Founders Pledge should actually be Founders’ Pledge, right? It is possessive, and the pledge belongs to multiple founders. If I remember my childhood lessons, apostrophes come after the s for plural things:
the cow’s friend (this one cow has a friend)
the birds’ savior (all of these birds have a savior)
A new thought: maybe I’ve been understanding it wrong. I’ve always thought of the “pledge” in Founders Pledge as a noun, but maybe it is actually an verb? In that sense, Founders Pledge would be like Germans Give or Gamblers Donate. I think it sounds a little funny to use pledge as an intransitive verb (without anything coming after it), but I guess it works in the same way that “I eat” sounds a little odd but is grammatically correct, and I suppose Californians Eat sounds fine.
EDIT: It looks like there have been some disagree votes. I find this particularly curious, as this is musings rather than claims/arguments.
I assumed it was functioning as a compound noun rather than a possessive. The word ‘Founders’ is modifying the type of Pledge, not claiming ownership of it.
I just finished reading Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. I think the book is worth reading for anyone interested in truth and the figuring out what is real, but I especially liked the aspiration Mertonian norms, a concept I had never encountered before, and which served as a theme throughout the book.
I’ll quote directly from the book to explain, but I’ll alter the formatting a bit to make it easier to read:
In 1942, Merton set out four scientific values, now known as the ‘Mertonian Norms’. None of them have snappy names, but all of them are good aspirations for scientists.
First, universalism: scientific knowledge is scientific knowledge, no matter who comes up with it – so long as their methods for finding that knowledge are sound. The race, sex, age, gender, sexuality, income, social background, nationality, popularity, or any other status of a scientist should have no bearing on how their factual claims are assessed. You also can’t judge someone’s research based on what a pleasant or unpleasant person they are – which should come as a relief for some of my more disagreeable colleagues.
Second, and relatedly, disinterestedness: scientists aren’t in it for the money, for political or ideological reasons, or to enhance their own ego or reputation (or the reputation of their university, country, or anything else). They’re in it to advance our understanding of the universe by discovering things and making things – full stop. As Charles Darwin once wrote, a scientist ‘ought to have no wishes, no affections, – a mere heart of stone.’ The next two norms remind us of the social nature of science.
The third is communality: scientists should share knowledge with each other. This principle underlies the whole idea of publishing your results in a journal for others to see – we’re all in this together; we have to know the details of other scientists’ work so that we can assess and build on it.
Lastly, there’s organised scepticism: nothing is sacred, and a scientific claim should never be accepted at face value. We should suspend judgement on any given finding until we’ve properly checked all the data and methodology. The most obvious embodiment of the norm of organised scepticism is peer review itself.
Although there are lots of differences between the goals of EA and the goals of science, in the areas of similarity I think there might be benefit in more awareness of these norms and more establishment of these as standards. Much of it seems to line up with broad ideas of scout mindset and epistemic rationality.
My vague impressions are that the EA community generally holds up fairly well when measured against these norms. I suspect there is some struggle with organized skepticism (ideas from high-status people often get accepted at face value) and there are a lot of difficulties with disinterestedness (people need resources to survive and to pursue their goals, and most of us have a desire for social desirability), but overall I think we are doing decently well.
I remember being very confused by the idea of an unconference. I didn’t understand what it was and why it had a special name distinct from a conference. Once I learned that it was a conference in which the talks/discussions were planned by participants, I was a little bit less confused, but I still didn’t understand why it had a special name. To me, that was simply a conference. The conferences and conventions I had been to had involved participants putting on workshops. It was only when I realized that many conferences lack participative elements that I realized my primary experience of conferences was non-representative of conferences in this particular way.
I had a similar struggle understanding the idea of Software as a Service (SaaS). I had never had any interactions with old corporate software that required people to come and install it on your servers. The first time I heard the term SaaS as someone explained to me what it meant, I was puzzled. “Isn’t that all software?” I thought. “Why call it SaaS instead of simply calling it software?” All of the software I had experienced and was aware of was in the category of SaaS.
I’m writing this mainly just to put my own thoughts down somewhere, but if anyone is reading this I’ll try to put a “what you can take from this” spin on it:
If your entire experience of X falls within X_type1, and you are barely even aware of the existence of X_type2, then you will simply think of X_type1 as X, and you will be perplexed when people call it X_type1.
If you are speaking to someone who is confused by X_type1, don’t automatically assume they don’t know what X_type1 is. It might be that they simply don’t know why you are using such an odd name for (what they view as X).
Silly example: Imagine growing up in the USA, never travelling outside of the USA, and telling people that you speak “American English.” Most people in the USA don’t think of their language as American English; they just think of it as English. (Side note: over the years I have had many people tell me that they don’t have an accent)
In discussions (both online and in-person) about applicant experience in hiring rounds, I’ve heard repeatedly that applicants want feedback. Giving in-depth feedback is costly (and risky), but here is an example I have received that strikes me as low-cost and low-risk. I’ve tweaked it a little to make it more of a template.
“Based on your [resume/application form/work sample], our team thinks you’re a potential fit and would like to invite you to the next step of the application process: a [STEP]. You are being asked to complete [STEP] because you are currently in the top 20% of all applicants.”
The phrasing “you are currently in the top 20% of all applicants” is nice. I like that. I haven’t ever seen that before, but I think it is something that EA organizations (or hiring teams at any organization) could easily adapt and use in many hiring rounds. While you don’t always know exactly what percentile a candidate falls into, you can give broad/vague information, such as being in the top X%. It is a way to give a small amount of feedback to candidates without requiring a large amount of time/effort and without taking on legal risk.
Some questions cause me to become totally perplexed. I’ve been asked these (or variations of these) by a handful people in the EA community. These are not difficulties or confusions that require PhD-level research to explain, but instead I think they represent a sort of communication gap/challenge/disconnect and differing assumptions.
Note that this fuzzy musings on communication gaps, and on differing assumptions of what is normal. In a very broad sense you could think of this as an extension of the maturing/broadining of perspectives that we all do when we realize “the way I’m used to things isn’t the way everybody does things.”[1] This is musings and meanderings rather than well-thought out conclusions, so don’t take any of this too seriously.
It isn’t quite an issue inferential distance, but it seems to be a vaguely similar communication error. Questions like this are surprising to me since they seem somewhat self evident, or obvious (which maybe speaks to my own difficulty in communicating well). The questions I’m thinking of are questions like
why is it important to use data and evidence for decision-making?
why do you like ice cream?
why don’t you want to do [non-standard thing]?
The use of data/evidence is something that I have difficulty justifying off the top of my head, simply because it strikes me as so incredibly obvious. I would probably respond “would it be better if we referenced animal entrails or the shapes of the clouds to make decisions?” But if pressed I would probably compare decisions made using data and decisions made without data, and see which tends to turn out better, using that as justification.[2] It would feel really weird to have to justify use of data though, like needing to justify why I prefer a 10% chance of pain to a 50% chance of pain; one of them strikes me as obviously preferably.
I like ice cream because it tastes good. While we could dive deep into the chemistry of taste and the evolutionary biology of human preferences, what me like ice cream tasty is simply the fact that I get pleasure/enjoyment from eating it. It is an end in itself: I’m not eating the ice cream as part of a plan toward some larger goal; I enjoy it, and that is all there is to it.
Asking me why I don’t want to do something has confused me on more than one occasion in the past. I’ve always thought that absence of an action doesn’t require justification, and rather that taking an action requires justification. The generic form of this is someone suggesting an activity that is expensive and unenjoyable to me, and expressing some level of surprise or concern when I decline.[3] The times people have asked me “why don’t you want to pay money to do [thing that you probably won’t find fun]” or “why don’t you want to join us in [expensive activity that you really didn’t enjoy last time you joined us],” they haven’t really been satisfied with me simply saying “I don’t like doing that kind of thing. It isn’t much fun for me.”[4]
My most stark memory of this was when I made hot chocolate, and an adult man said something along the lines of “Oh my god, you are putting water in your hot chocolate instead of milk?” as if this was something outrageous. A more internet-friendly version would be the silliness surrounding pineapple on pizza: it really doesn’t matter that other people have mundane preferences different than yours. It is mostly just reflexive in-group/out-group dynamics.
If I want to eat ice cream, should should I consider the distance, price, and quality of two different shops and base my decision on that, or should I just flip a coin to decide which ice cream shop to go to? To me, the answer is as obvious as the answer to “is astrology predictive?”
If you don’t drink alcohol, you probably get this a lot; people have often assumed I have some specific religious reason. I’m guessing that people who don’t engage in other common practices also get similar responses.
Of course, the examples I’m choosing share here tell you something about my own preferences. For a person that inherently enjoys sitting in a bar, spending money to drink alcohol, and having shouted conversations over loud music, it is quite unusual for someone to day “no thanks, I wouldn’t enjoy that.” But I’ve tried to enough bars and parties, and I’ve discovered that sitting around with a bunch of people I barely know having shallow/forgettable conversations usually isn’t really my thing. The topics of conversation that people often want to talk about didn’t really have much overlap with things that I would be interested in talking about, and the levels of performative behaviors and the affectations aren’t something that I enjoy very much. So it is no surprise that I enjoy “grown up” dinner parties/cocktail parties more than “college kid” parties with loud music. (Of course, if I had loads of money to spare, a high level of beauty and charisma, and a group of friends that regularly hung out in bars, then my preferences would probably be quite different.)
I think that the interface looks a bit dated, but it works well: you send people books you have that you don’t want, and other people send you books that you want but you don’t have. I used to use BookMooch a lot from around 2006 to 2010, but when I moved outside of the USA in 2010 I stopped using it. One thing I like is that it feels very organic and non-corporate: it doesn’t cost a monthly membership, there are no fees for sending and receiving books,[1] and it isn’t full of superfluous functions. There is a pretty simple system to prevent people from abusing the system, which is basically just transparency and having a “give:mooch ratio” visible. Although it is registered as a for-profit corporation, John Buckman runs it without trying to maximize profits. BookMooch earns a bit of money by using Amazon affiliate fees if people want to buy a book immediately rather than mooch the mooch the book, but the site doesn’t have advertisements or any other revenue.[2]
I love this, and it makes me think about creating value in the world. In my mind, this is kind of the ideal of a startup: you have an idea and you implement it, literally making value out of nothing. There really was an unrealized “market” for second-hand books, but there was no way to “liberate” it. And I also love that this is simply providing a service to the world. I wonder what similar yet-to-be-realized ventures there are that would create more impact than merely the joy of getting a book you want.
Now that I am in the USA again I think I’ll start using BookMooch again. I probably won’t use it as much as I used to, with how I’ve become more adapted to reading PDFs and EPUBs and listening to audiobooks, but I’ll use it some for books that I haven’t been able to get digital copies of.
I had an anarchist streak when I was younger, and the fact that this corporation lacks so many of the trappings of standard extractive capitalism is emotionally quite appealing. If a bunch of hippies had created silicon valley instead of venture capitalists, maybe the big tech firm would look more like this.
I want to try and nudge some EAs engaged in hiring to be a bit more fair and a bit less exclusionary: I occasionally see job postings for remote jobs with EA organizations that set time zone location requirements.[1] Location seems like the wrong criteria; the right criteria is something more like “will work a generally similar schedule to our other staff.” Is my guess here correct, or am I missing something?
What you actually want are people who are willing to work “normal working hours” for your core staff. You want to be able to schedule meetings and do collaborative work. If most staff are located in New York City, and you hire someone in Indonesia who is willing and able to do a New York City working schedule, for the organization and for teamwork that isn’t different than hiring someone in Peru (which is in the time zone as New York City).[2]
I’ve previously spoken with people in Asian time zones who emphasized the unreasonableness of this; people who have the skills and who are happy/able to work from 9pm to 4am. If someone who lives in a different time zone is happy to conform to your working schedule, don’t disqualify them. You can disqualify them because they lack the job-relevant skills, or because they wouldn’t perform well enough in the role, but don’t do it due to their location.[3] If they have stable internet connection and they state that they are willing to work a particular schedule, believe them. You could even have a little tick-box on your job application to clarify that they understand and consent that they need to be available for at least [NUMBER] hours during normal business hours in your main/preferred time zone.
You might make the argument that the person in Indonesia would be giving themselves a big burden working in the middle of the night and (presumably) sleeping during the day, but that is a different argument. That is about whether they are able to conform to the expected work schedule/availability or about how burdensome they would find it, not about whether they are physically located in a similar time zone. Lots of people in low income countries would be happy to have a weird sleeping & work schedule in exchange for the kinds of salaries that EA organizations in the UK and USA tend to pay; that is a good tradeoff for many people.
There are, of course, plenty of other reasons to care about location. There are legal and tax reasons that a organization should only hire people in certain locations. Not all employers of record can employee people in all countries. And there are practical reasons related to the nature of the job. If you need someone to physically be somewhere occasionally, location matters. That person should probably shouldn’t be located a 22-hour trip away if they need to be there in-person twice a month; they should be able to travel there in a reasonable amount of time.
Hmm, I don’t entirely disagree but I also don’t fully agree either:
Where I agree: I have indeed hired people on the opposite side of the world (eg Australia) for whom it was not a problem.
Where I disagree: working at weird hours is a skill, and one that is hard to test for in interviews. There is a reasonably high base rate (off the cuff: maybe 30 percent?) of candidates claiming overconfidently in interviews that they can meet a work schedule that is actually incredibly impractical for them and end up causing problems or needing firing later on. I would rather not take that collective risk—to hire you and discover 3 months in, that the schedule you signed up for is not practical for you.
There is a reasonably high base rate (off the cuff: maybe 30 percent?) of candidates claiming overconfidently in interviews that they can meet a work schedule that is actually incredibly impractical for them and end up causing problems or needing firing later on.
That is a very real concern, and strikes me as reasonable. While I don’t have a good sense of what the percent would be, I agree with you that people in general tend to exaggerate what they are able to do in interviews. I wonder if there are good questions to ask to filter for this, beyond simply asking about how the candidate would plan to meet the timing requirements.
For the time zones, I had been thinking of individuals that had done this previously and can honestly claim that they have done this previously. But I do understand that for many people (especially people with children or people who live with other people) it would be impractical. Maybe my perception of people is fairly inaccurate, in the sense that I expect them to be more honest and self-aware than they really are? 😅
Even if the justification is reasonable, it is quite exclusionary to candidates outside of the required time zone. Think of a company who wants to hire a data analyst, but instead of the job posting listing ‘skilled at data analytics’ it instead lists ‘MA in data analytics.’ It is excluding a lot of people that might be skilled but which don’t have the degree.
I think the broader idea I’m trying to get at is when X is needed, but Y is listed as the requirement, and they are two distinct things. Maybe I need someone that speaks German as a native language for a job, but on the job describing I write that I need someone who grew up in Germany; those are distinct things. I’d reject all the German expats that grew up abroad, as well as the native-German speakers who grew up in Switzerland or Austria.
There might also be something here related to the non-central fallacy: applying the characteristics of an archetypical category member to a non-typical category member. Most people in distant time zones probably wouldn’t be able to manage an abnormal working schedule, but that doesn’t mean we should assume that no people in distant time zones can handle it.
Of course, the tradeoffs are always an issue. If I would get 5 additional candidates who would be good and 95 additional candidates who are poor fits, then maybe it wouldn’t be worth it. But something about the exclusion that I can’t quite put my finger on strikes me as unjust/unfair.
Imperfect Parfit (written by by Daniel Kodsi and John Maier) is a fairly long review (by 2024 internet standards) of Parfit: A Philosopher and His Mission to Save Morality. It draws attention to some of his oddities and eccentricity (such as brushing his teeth for hours, or eating the same dinner every day (not unheard of among famous philosophers)). Considering Parfit’s influence on the ideas that many of us involved in EA have, it seemed worth sharing here.
This is about donation amounts, investing, and patient philanthropy. I want to share a simple excel graph showing the annual donation amounts from two scenarios: 10% of salary, and 10% of investment returns.[1] While back a friend was astounded at the difference in dollar amounts, so I thought I should share this a bit more widely. The specific outcomes will change based on the assumptions that we input, of course.[2] A person could certainly combine both approaches, and there really isn’t anything stopping you from donating more than 10%, so interpret this as illustrative rather than definitive.
The blue line is someone who donates 10% of their salary for the rest of their career. The orange line is someone who invests 10% of their salary for the rest of their career, followed by donating 10% of investment returns starting at retirement.
I’m not going to share the spreadsheet simply because I have some personal information that I don’t want to share tied up in this spreadsheet and it would be a bit of a hassle to separate it out. But for anyone who wants to re-create something like this and fiddle with your own inputs to look at various scenarios, it shouldn’t be too hard to make a few columns like this:
I’m a big fan of using compound interest, and I lean somewhat toward patient philanthropy. The upsides and downsides of patient philanthropy have been written about already, so I won’t repeat all the pros and cons.
What the starting salary, how much and how fast salary increases, what the annual return is for the investments, how old you will be when you retire, etc. I used a starting salary of 70,000 USD at age 30, with 2% annual salary increases, 7.5% annual investment growth
Yeah I think this is a good point! Donor-advised funds seem like a good way to benefit from compound interest (and tax deductions) while avoiding the risk of value drift.
I guess shortform is now quick takes. I feel a small amount of negative reaction, but my best guess its that this reaction is nothing more than a general human “change is bad” feeling.
Is quick takes a better name for this function that shortform? I’m not sure. I’m leaning toward yes.
I wonder if this will have an effect to nudge people to not write longer posts using the quick takes function.
Would anyone find it interesting/useful for me to share a forum post about hiring, recruiting, and general personnel selection? I have some experience running hiring for small companies, and I have been recently reading a lot of academic papers from the Journal of Personnel Psychology regarding the research of most effective hiring practices. I’m thinking of creating a sequence about hiring, or maybe about HR and managing people more broadly.
Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently. It looked like the mentors/coaches/advisors were all relatively junior/young/inexperienced. This isn’t the first time I’ve seen this. Most of this type of thing I’ve seen in and around EA involves the mentors/advisors/coaches being only a few years into their career. This isn’t necessarily bad. A person can be very well-read without having gone to school, or can be very strong without going to a gym, or can speak excellent Japanese without having ever been to Japan. A person being two or three or four years into their career doesn’t mean that it is impossible for them to have have good ideas and good advice.[1] But it does seem a little… odd. The skepticism I feel is similar to having a physically frail person as a fitness trainer: I am assessing the individual on a proxy (fitness) rather than on the true criteria (ability to advise me regarding fitness). Maybe that thinking is a bit too sloppy on my part.
This doesn’t mean that if you are 24 and you volunteer as a mentor that you should stop; you aren’t doing anything wrong. And I wouldn’t want some kind a silly and arbitrary rule, such as “only people age 40+ are allowed to be career coaches.” And there are some people doing this kind of work that have a decade or more of professional experience; I don’t want to make it sound like all of the people doing coaching and advising are fresh grads.
I wonder if there are any specific advantages or disadvantages to this ‘junior skew.’ Is there a meaningful correlation between length of career and ability to help other people with their careers?
EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why are the vast majority of people doing mentorship/coaching/career advising are younger than that? Maybe the older people involved in EA are disproportionality not employed for EA organizations and are thus less focused on funneling people into impactful careers? I do have the vague impression that many 35+ EAs lean more toward earn-to-give. Maybe older EAs tend to be a little more private and less focused on the EA community? Maybe older people simply are less interested, or don’t view it as a priority? Maybe the organizations that employ/hire coaches all prefer young people? Maybe this is a false perception and I’m engaging in sloppy generalization from only a few anecdotes?
And the other huge caveat is that you can’t really know what a person’s professional background is from a quick glance at their LinkedIn Profile and the blurb that they share on a website, any more than you can accurately guess age from a profile photo. People sometimes don’t list everything. I can see that someone earned a bachelor’s degree in 2019 or 2020 or 2021, but maybe they didn’t follow a “standard” path: maybe they had a 10-year career prior to that, so guesses about being fairly young or junior are totally off. As always, drawing conclusions based on tiny snippets of information with minimal context is treacherous territory.
EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why are the vast majority of people doing mentorship/coaching/career advising are younger than that? Maybe the older people involved in EA are disproportionality not employed for EA organizations and are thus less focused on funneling people into impactful careers?
I checked and people who currently work in an EA org are only slightly older on average (median 29 vs median 28).
This is a sloppy rough draft that I have had sitting in a Google doc for months, and I figured that if I don’t share it now, it will sit there forever. So please read this as a rough grouping of some brainstormy ideas, rather than as some sort of highly confident and well-polished thesis.
- - - - - -
What feedback do rejected applicants want?
From speaking with rejected job applicants within the EA ecosystem during the past year, I roughly conclude that they want feedback in two different ways:
The first way is just emotional care, which is really just a different way of saying “be kind rather than being mean or being neutral.”[1] They don’t want to feel bad, because rejection isn’t fun. Anybody who has been excluded from a group of friends, or kicked out of a company, or in any way excluded from something that you want to be included in knows that it can feel bad.[2] It feels even worse if you appear to meet the requirements of the job, put in time and effort to try really hard, care a lot about the community and the mission, perceive this as one of only a few paths available to you for more/higher impact, and then you get summarily excluded with a formulaic email template. There isn’t any feasible way to make a rejection feel great, but you can minimize how crappy it feels. Thank the candidates for their time/effort, and emphasize that you are rejecting this application for this role rather than rejecting this person in general. Don’t reject people immediately after their submission; wait a couple of days. If Alice submits a work trial task and less than 24 hours later you reject her, it feels to her like you barely glanced at her work, even if you spent several hours diligently going over it.
Improving. People want actionable feedback. If they lack a particular skill, they would like to know how to get better so that they can go learn that skill and then be a stronger candidate for this type of role in the future. If the main differentiator between candidates Alice and Bob is that Alice scored 50 points better on an IQ test or that Alice attended Impressive School while Bob attended No Name School, maybe don’t tell Bob that.[3] But if the main differentiator is that Alice has spent a year being a volunteer for the EA Virtual Program or that Alice is really good with spreadsheets or that Bob didn’t format his documents well, that is actionable, and gives the candidate a signal regarding how they can improve. Now the candidate knows something they can do to make become a more competitive candidate. They will practice their Excel skills and look up spreadsheet tutorials, they can get some volunteering experience with a relevant organization, and they look up how to use headers and to adjust line spacing. Think of this like a company investing in the local community college and sponsoring a professorship at the college: they are building a pipeline of potential future employees.
Here is a rough hierarchy of what, in an ideal world, I’d like to receive when I am rejected from a job application:
“Thanks for applying. We won’t be moving forward with your application. Although it is never fun to receive an email like this, we want to express appreciation for the time you spent on this selection process. Regarding why we choose to not move forward with your application, it looks like you don’t have as much experience directly related to X as the candidates we are moving forward with, and we also want someone who is able to Y. Getting experience with Y is challenging, but some ideas are here: [LINK].”
“Thanks for applying. We won’t be moving forward with your application. It looks like you don’t have as much experience directly related to X as the most competitive candidates, and we also want someone who is able to Y.”
“Thanks for applying. We won’t be moving forward with your application.”
That last bullet point is what most EA organizations send (according to conversations I’ve had with candidates, as well as my own experiences in EA hiring rounds). I have seen two or three that sometimes send rejections that are similar to the first or similar to the second.[4] If the first bullet point looks too challenging and you think that it would take too much staff time, then see if you can do the second bullet point: simply telling people why (although this will dependent on the context) can make rejections a lot less hurtful, and also points them in the right direction for how to get better.
I still remember how bad it felt being told that I couldn’t join a feminist reading group because they didn’t want any men there. I think that was totally understandable, but it still felt bad to be excluded. I remember not being able to join a professional networking group because I was older than the cutoff age (they required new members to be under 30, and I was 31 when I learned about it). These things happened years ago, and were not particularly influential in my life. But people remember being excluded.
Things that people cannot change with a reasonable amount of time and effort (or things that would require a time machine, such as what university someone attended) are generally not good pieces of feedback to give people. These things aren’t actionable.
Last I saw, the Centre for Effective Altruism and Animal Advocacy Careers both had systems in place helping them to do better than average. It has been a while since I’ve interaction with the internals of either of their hiring systems, but last I checked they both send useful and actionable feedback for at least some of their rejections.
I’m on board with a lot of your emotional care advice, but,,,
Don’t reject people immediately after their submission; wait a couple of days. If Alice submits a work trial task and less than 24 hours later you reject her, it feels to her like you barely glanced at her work, even if you spent several hours diligently going over it.
...I feel like your mileage may vary on this one. I don’t like being in suspense, and moreover it’s helpful from a planning perspective to know what’s up sooner rather than later. I’d say instead that if you want to signal that you spent time with someone’s application, do it by making sure your rejection is conspicuously specific (i.e. mentions features of the applicant or their submissions, even if only superficially).
I also think you missed an entire third category of reason to want feedback, which is that if I stand no hope of getting job X, no matter how much I improve, I do really want to know that, so I can make choices about how much time to spend trying to get that job or jobs like it. It feels like a kindness to tell me I can do anything I put my mind to, but if it’s not true then you’re just setting me up for more pain in the future. (Similarly, saying “everyone should apply, even if you’re not sure you’re qualified” sounds like a kindness but does have a downside in terms of increasing the number of unsuccessful applicants; sometimes it’s worth it anyway, but the downside should be acknowledged.)
There is a sort of a trade-off to notifying people immediately or notifying them after a couple of days. My best guess is that it generally won’t make a difference for someone’s planning to be rejected from a job application in less than 24 hours or to be rejected within a few days. But there is probably a lot of variation in preferences from one person to another; maybe I am impacted by this more than average. I’m probably heavily influences by a typical mind fallacy here as well, as I am very sloppily generalizing from my own internal state.
I’ve had a few job applications that I submitted and then got rejected for an hour or two later, and emotionally that felt so much worse. But at the end of the day I think you are right that “your mileage may vary.”
I’m been mulling over the idea of proportional reciprocity for a while. I’ve had some musings sitting a a Google Doc for several months, and I think that I either share a rough/sloppy version of this, or it will never get shared. So here is my idea. Note that this is in relation to job applications within EA, and I felt nudged to share this after seeing Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism.
- - - -
Proportional reciprocity
I made this concept up.[1] The general idea is that relationships tend to be somewhat reciprocal, but in proportion to the maturity/growth of the relationship: the level of care and effort that I express toward you should be roughly proportional to the level of effort and care that you express toward me. When that is violated (either upward or downward) people feel that something is wrong.[2] The general idea (as far as it relates to job applications and hiring rounds) is that the more of a relationship the two parties have, the more care and consideration the rejection should involve. How does this relate to hiring in the context of EA? If Alice puts in 3 hours of work, and then Alice perceives that Bob puts in 3 minutes of work, Alice feels bad. That the simplistic model.
As a person running a hiring round, you might not view yourself as having a relationship with these people, but there is a sort of psychological contract which exists, especially after an interview; the candidate expects you to behave in certain ways.
One particularly frustrating experience I had was with an EA organization that had a role with a title, skills, and responsibilities that matched my experience fairly well. That organization reached out to me and requested that I answer multiple short essay-type questions as a part of the job application.[3] I did so, and I ended up receiving a template email from a noreply email address that stated “we have made the decision to move forward with other candidates whose experience and skills are a closer match to the position.” In my mind, this is a situation in which a reasonable candidate (say, someone not in the bottom 10%) who spent a decent chunk of time thoughtfully responding to multiple questions and who actually does meet the stated requirements for the role, is blandly rejected. This kind of scenario appears to be fairly common. And I wouldn’t have felt so bitter about it if they hadn’t specifically reached out to me and asked me to apply. Of course, I don’t know how competitive I was or wasn’t; maybe my writing was so poor that I was literally the worst-ranked candidate.
What would I have liked to see instead? I certainly don’t think that I am owed an interview, nor a job offer, and in reality I don’t know how competitive the other candidates were.[4] But I would have liked to have been given a bit more information beyond the implication of merely “other candidates are a better match.” I would love to be told in what way I fell short, and what I should do instead. If they specifically contacted me to invite me to apply, something along the lines of “Hey Joseph, sorry for wasting your time. We genuinely thought that you would have been among the stronger candidates, and we are sorry that we invited you to apply only to reject you at the very first stage.” That would have felt more human and personal, and I wouldn’t hold it against them. But instead I got a very boilerplate email template.
Of course, I’m describing my own experience, but lots of other people in EA and adjacent to EA go through this. It isn’t unusual for candidate to be asked to do 3-hour work trials without compensation, to be invited to interview and then rejected without information, or to meet 100% of the requirements of a job posting and then get rejected 24 hours after submitting an application.[5]
If this is an example of the applicant putting in effort and not getting reciprocity, the other failure mode that I’ve seen is the applicant being asked for more and more effort. A hiring round from one EA adjacent organization involved a short application form, and then a three-hour unpaid trial task. I understand the need to deal with a large volume of applicants; interviewing 5-10 people is feasible, interviewing 80 is less so. What would I have liked to see instead? Perhaps a 30-minute trial task instead of a three-hour trial task. Perhaps a 10-minute screening interview. Perhaps an additional form with some knockout questions and non-negotiables. Perhaps a three hour task that is paid.
There are plenty of exceptions, of course. I can’t obligate you to form a friendship with me by doing favors or by giving you gifts. The genuineness matters also: a sycophant who only engages in a relationship in order to extract value isn’t covered by proportionally reciprocity. And there are plenty of misperceptions regarding what level a relationship has reached; I’ve seen many interpersonal conflicts arise from two people having different perceptions of the current level of reciprocity. I think that this is particularly common in romantic relationships among young people.
I don’t remember exactly how much time I spent on the short essays. I know that it wasn’t a five-hour effort, but I also know that I didn’t just type a sentence or two and click ‘submit.’ I put a bit of thought into them, and I provided context and justification. Maybe it was between 30 and 90 minutes? One question was about DEI and the relevance it has to the work that organization did. I have actually read multiple books on DEI and I’ve been exploring that area quite a bit, so I was able to elaborate and give nuance on that.
Maybe they had twice as much relevant work experience as me, and membership in prestigious professional institutions, and experience volunteering with the organization. Or maybe I had something noticeably bad about my application, such as a blatant typo that I didn’t notice.
the level of care and effort that I express toward you should be roughly proportional to the level of effort and care that you express toward me
maybe a version of this that is more durable to the considerations in your footnote is: the level of care and effort that I ask from you should be roughly proportional to the level that I express towards you
if I ask for not much care and effort and get a lot, that perhaps should be a prompt to figure out if I should have done more to protect my counterpart from overinvesting, if I accidentally overpromised or miscommunicated, but ultimately there’s only so much responsibility you can take for other people’s decisions
(not well thought-out musings. I’ve only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn’t want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven’t we seen any indications of it?
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don’t see signs of extraterrestrial life because civilizations tend to create unaligned AI, which destroys them. But I suspect that the AI-relevant variation would actually be something more like this:
We claim that a superintelligent AI is going to be a reality soon (maybe between 5 years and 80 years from now), and in general is a benchmark that any civilization would reach eventually. But if superintelligent AI is a thing that civilizations tend to make, why aren’t we seeing any indications of that in the broader universe? If some extraterrestrial civilization made an aligned AI, wouldn’t we see the results of that in a variety of ways? If some extraterrestrial civilization made an unaligned AI, wouldn’t we see the results of that in a variety of ways?
Like many things, I suppose the details matter immensely. Depending on the morality of the creators, an aligned AI might reach spend resources expanding civilization throughout the galaxy, or it might happily putt along maintaining a globe’s agricultural system. Depending on how an unaligned AI is unaligned, it might be focused on turning the whole universe into paperclips, or it might simply kill its creators to prevent them from enduring suffering. So on a very simplistic level it seems that the claim of “civilizations tend to make AI eventually, and it really is a superintelligent and world-changing technology” is consistent with reality of “we don’t observe any signs of extraterrestrial intelligence.”
This is a random musings of cultural norms, mainstream culture, and how/where we choose to spend our time and attention.
Barring the period when I was roughly 16-20 and interested in classic rock, I’ve never really been invested in music culture. By ‘music culture’ I mean things like knowing the names of the most popular bands of the time, knowing the difference between [subgenre A] and [subgenre A] off the top of my head, caring about the lives of famous musicians, etc.[1] Celebrity culture in general is something I’ve never gotten into, but avoiding TV, radio, and advertisements has meant that the messaging which most people are inundated with passes me by.
A YouTube video called 5 Songs You’ve Never Heard That You’ve Heard 1000 Times reminded me this morning of what a HUGE difference there is between the level of care/attention I have for music and the level of care/attention that I perceive as normal. I don’t think I have ever heard any of these songs before, and half of the musicians I’ve never heard of either.[2] Which is a little curious/odd/funny from a cultural perspective, since apparently the target audience has heard these songs so many times.
I suppose we all have our our areas of focus and specialization.
I’ve heard of the names of a variety of famous bands or musicians from the past few decades, but for most of them I’ve never bothered to spend time exploring what they really are.
This is just for my own purposes. I want to save this info somewhere so I don’t lose it. This has practically nothing to do with effective altruism, and should be viewed as my own personal blog post/ramblings.
I read the blog post What Trait Affects Income the Most?, written by Blair Fix, a few years ago, I really enjoyed seeing some data on it. At some point later I wanted to find it and I couldn’t find it, and today I stumbled upon it again. The very short and simplistic summary is that hierarchy (a fuzzy concept that I understand to be roughly “class,” including how wealthy your parents were, were you were born, and other factors) is the biggest influence on lifetime earnings[1]. This isn’t a huge surprise, but it is nice to see some references to research comparing class, education, occupation, race, and other factors.
Opportunity, equity, justice/fairness… these are topics that I probably think about too much for my own good.[2]
Of course, like most research, this isn’t rock solid, and lacking the breadth of knowledge I’m not able to make a sound critique of the research. I also want to be wary of confirmation bias, since this is basically a blog post telling me that what I want to be true it true, so there is another grain of salt I should keep in mind.
I would probably think about them less if I had been born into an upper-middle class family, or if I suddenly inherited $500,000. Just like a well-fed person doesn’t think about food, or a person with career stability isn’t anxious about their job. However, I think that if write about or talk about what leads to success in life then I will be perceived as angry/bitter/envious (especially since I don’t have any solutions or actions, other than a vague “fortunate people be more humble”), and that isn’t how I want people to perceive me. Thus, I generally try to avoid bringing up these topics.
I vaguely remember reading something about buying property with a longtermism perspective, but I can’t remember the justification against doing it. This is basically using people’s inclination to choose immediate rewards over rewards that come later in the future. The scenario was (very roughly) something like this:
You want to buy a house, and I offer to help you buy it. I will pay for 75% of the house, you will pay for 25% of the house. You get to own/use the house for 50 years, and starting in year 51 ownership transfers to me. You get a huge discount to own the house for 50 years, and I get a big discount to own the house forever (starting in year 51).
This feels like a very naïve question, but if I had enough money to support myself and I also had excess funds outside of that, why not do something like this as a step toward building an enormous pool of resources for the future? Could anyone link me to the original post?
That’s like what is known as a “life estate” except for a fixed term of years. It has similiarities to offering a long-term lease for an upfront payment . . and many of the same problems. The temporary possessor doesn’t care about the value of the property in year 51, so has every incentive to defer maintenance and otherwise maximize their cost/benefit ratio. Just ask anyone in an old condo association about the tendency to defer major costs until someone else owns their unit . . .
If you handle the maintenance, then this isn’t much different than a lease . . . better to get a bank loan and be an ordinary lessor, because the 50-year term and upfront cash requirement are going to depress how much you make. If you plan on enforcing maintenance requirements for the other person, that will be a headache and could be costly.
I’m grappling with an idea of how to schedule tasks/projects, how to prioritize, and how to set deadlines. I’m looking for advice, recommending readings, thoughts, etc.
The core question here is “how should we schedule and prioritize tasks whose result becomes gradually less valuable over time?” The rest of this post is just exploring that idea, explaining context, and sharing examples.
Here is a simple model of the world: many tasks that we do at work (or maybe also in other parts of life?) fall into either sharp decrease to zero or sharp reduction in value.
The sharp decrease to zero category. These have a particular deadline beyond which they offer no value, so you should really do the task before that point.
If you want to put me in touch with a great landlord to rent from, you need to do that before I sign a 12-month lease for a different apartment; at that point the value of the connection is zero.
If you want to book a hotel room prior to a convention, you need to do it before the hotel is fully booked; if you wait until the hotel is fully booked, calling to make that reservation is useless.
If you want to sharing the meeting agenda to allow attendees to prepare for a meeting, you have to share it prior to the meeting starting.
The sharp reduction in value category. You should do these tasks before the sharp reduction in value. Thus, the deadline is when value is about to sharply decrease.
Giving me food falls into the sharp sharp reduction category, because if you wait until I’ve I’m already satiated by eating a full meal, the additional food that you give me has far less value than if you had given it to me before my meal.
Setting deadlines for these kinds of tasks is, in a certain sense, simple: do it at some point before the decrease in value. But what about tasks that decrease gradually in value over time?
We can label these as the gradual reduction category.
Examples include an advertisement for a product that launched today and will be sold for the next 100 days. If I do this task today I will get 100% of it’s value, or if I do it tomorrow I will get 99% of it’s value, and so on, all the way to last day that will add any value.
I could start funding my retirement savings today or tomorrow, and the difference is negligible. In fact, the difference between any two days is tiny. But if I delay for years, then the difference will be massive. This is kind of a “drops of water in a bucket” issue: a single drop doesn’t matter, but all together they add up to a lot.
Should you start exercising today or tomorrow? Doesn’t really matter. Or start next week? No problem. Start 15 years from now? That is probably a lot worse.
If you want to stop smoking, what difference does a day make?
Which sort of leads us back to the core question. If the value decreases gradually rather than decreasing sharply, then when do you do the task?
I suppose one answer is to do the task immediately, before it has any reduction in value. But that also seems like it isn’t what we actually do. In terms of prioritizing, instead of doing everything immediately, people seem to push tasks back to the point just before they would cause problems. If I am prioritizing, I will probably try hard to to the sharp reduction in value task (orange in the below graph) before it has the reduction in value, and then I’ll prioritize the sharp decrease to zero task (blue in the graph), finally starting on my lowest priority task once the other two are finished. But that doesn’t seem optimal, right?
I’ve been reading a few academic papers on my “to-read” list, and The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else? has a section that made me think about epistemics, knowledge, and how we try to make the world a better place. I’ll include the exact quote below, but my rough summary of it would be that multiple studies found no relationship between the presence or absence of highway shoulders and accidents/deaths, and thus they weren’t built. Unfortunately, none of the studies had sufficient statistical power, and thus the conclusions drawn were inaccurate. I suppose that absence of evidence is not evidence of absence might be somewhat relevant here. Lo and behold, later on a meta-analysis was done, finding that having highway shoulders reduced accidents/deaths. So my understanding is that inaccurate knowledge (shoulders don’t help) led to choices (don’t build shoulders) that led to accidents/deaths that wouldn’t otherwise have happened.
I’m wondering if there are other areas of life that we can find facing similar issues. These wouldn’t necessarily be new cause areas, but the general idea of identify an area that involves life/death decisions, and then either make sure the knowledge is accurate or attempt to bring accurate knowledge to the decision-makers would be incredibly helpful. Hard though. Probably not very tractable.
For anyone curious, here is the relevant excerpt that prompted my musings:
A number of studies had been conducted to determine whether highway shoulders, which allow drivers to pull over to the side of the road and stop if they need to, reduce accidents and deaths. None of these inadequately powered studies found a statistically significant relationship between the presence or absence of shoulders and accidents or deaths. Traffic safety engineers concluded that shoulders have no effect, and as a result fewer shoulders were built in most states. Hauer’s (2004) meta-analysis of these studies showed clearly that shoulders reduced both accidents and deaths. In this case, people died as a result of failure to understand sampling error and statistical power.
What? Isn’t it all evidence-based? Who would take actions without evidence? Well, often people make decisions based on an idea they got from a pop-business book (I am guilty of this), off of gut feelings (I am guilty of this), or off of what worked in a different context (I am definitely guilty of this).
Rank-and-yank (I’ve also heard it called forced distribution and forced ranking, and Wikipedia describes it as vitality curve) is an easy example to pick on, but we could easily look at some other management practice in hiring, marketing, people management, etc.
I like one-on-ones. I think that one-on-ones are a great way to build a relationship with the people on your team, and they also provide a venue for people to bring you issues. But where is the evidence? I’ve never seen any research or data to suggest that one-on-ones lead to particular outcomes. I’ve heard other people describe how they are good, and I’ve read blog posts about why they are a best practice, but I’ve never seen anything stronger than anecdote and people recommending them from their own experience.
It was an HBR article from 2006 (which I found as a result of a paper titled Evidence-Based I–O Psychology: Not There Yet) that I recently read that got me thinking about this more, but I’m considering reading into the area more and writing a more in-depth post about it. It lines up nicely with two different areas of interest of mine: how we often make poor decisions even when we have plenty of opportunities to make better decisions, and learning how to run organizations well.
I haven’t read any research or evidence demonstrating one leadership style is better than another. My intuitions and other people’s anecdotes that I’ve heard tell me that certain behaviors are more likely or less likely to lead to success, but I haven’t got anything more solid to go on that that at the moment.
Similarly, I haven’t read any research showing (in a fairly statistically rigorous way) that lean, or agile, or the Toyota Production System, or other similar concepts are effective. Anecdote tells me that they are, and the reasoning for why they work makes sense to me, but I haven’t seen anything more rigorous.
Nicholas Bloom’s research is great, and I am glad to see his study of consulting in India referenced on the EA forum. I would love to see more research measuring impacts of particular management practices, and if I was filthy rich that is probably one of the things that I would fund.
I’m assuming that there are studies about smaller-level actions/behaviors, but it is a lot easier to A-B test what color a button on a homepage should be than to A-B test having a cooperative work culture or a competitive work culture.
I think of the the tricky things is how context matters to much. Just because practice A is more effective than practice B in a particular culture/industry/function, doesn’t mean it will apply to all situations. As a very simplistic example, rapid iteration is great for a website’s design, but imagine how horrible it would be for payroll policy.
A very tiny, very informal announcement: if you want someone to review your resume and give you some feedback or advice, send me your resume and I’ll help. If we have never met before, that is okay. I’m happy to help you, even if we are total strangers.
For the past few months I’ve been active with a community of Human Resources professionals and I’ve found it quite nice to help people improve their resumes. I think there are a lot of people in EA that are looking for a job as part of a path to greater impact, but many people feel somewhat awkward or ashamed to ask for help. There is also a lot of ‘low-hanging fruit’ for making a resume look better, from simply formatting changes that make a resume easier to understand to wordsmithing the phrasings.
To be clear: this is not a paid service, I’m not trying to drum up business for some kind of a side-hustle, and I’m not going to ask you to subscribe to a newsletter. I am just a person who is offering some free low-key help.
This is both a very kind and a very helpful thing to offer. This is something that can help people an awful lot in terms of their career.
Just to say I took Joseph up on this and found it very helpful! I recommend doing the same!
Note: I’m sharing this an undisclosed period of time after the conference has occurred, because I don’t want to inadvertently reveal who this individual is, and I don’t want to embarrass this person.
I’m preparing to attend a conference, and I’ve been looking at the Swapcard profile of someone who lists many areas of expertise that I think I’d be interested in speaking with them about: consulting, people management, operations, policymaking, project management/program management, global health & development… wow, this person knows about a lot of different areas. Wow, this person even lists Global coordination & peace-building as an area of expertise! And Ai strategy & policy! Then I look at this person’s LinkedIn. They graduated from their bachelor’s degree one month ago. So many things arise in my mind.
One is about how this typifies a particular subtype of person who talks big about what they can do (which I think has some overlap with “grifter” or “slick salesman,” and has a lot of overlap with people who promote themselves on social media).
Another is that I notice that this person attended Yale, and it makes me want to think about elitism and privilidge and humility and “fake it till you make it” and the Matthew effect.
Another is that I shouldn’t judge people too harshly, because I also certainly try to put my best foot forward when job hunting. I am certainly guilty of being overconfident at times.
I’ll also acknowledge that while it isn’t probable that this person’s listed areas of expertise are accurate and realistic, it is possible. A fresh college grad could have read a dozen books about each of these distinct areas, and attended some sort of training program, and had multiple informational interviews. I could imagine an industrious student with enough free time gaining some competency in a variety of areas. Is that enough to count as “expertise?” I’m not sure, and it certainly seems context-dependent: at an EAGx conference I feel okay claiming competence in certain skills, but at a conference with lots of people highly trained in those skills (such as at a PMI Global Summit) I would not describe myself as so competent, simply because the reference group is different. Compared to laypeople I know a bunch about project management; compared to a professional project manager I know hardly anything.[1]
I suppose I shouldn’t be too surprised. Although I’m not a big fan of the “there are lots of grifters in EA” narrative, it isn’t unheard of for people to
vastly[2] exaggerate their skills/competencies/experiences or to imply that they have more than they really do.At a separate EA conference a person listed many areas under “Area(s) of expertise,” including one particular skill that I reached out to them to chat about, after which they replied to tell me that they actually didn’t do that kind of work and aren’t knowledgeable about it.[3]
One EA Forum poster shared strong opinions about how cumbersome regular working hours, offices in normal cities, and work/life boundaries can be. When I looked, this person also has only about five years of post-graduate work experience, all of which has either been freelance, self-employed, or running his own organization.[4] This isn’t to say that you aren’t allowed to have an opinion about “standard” offices if you haven’t spent X years in offices, but I’m skeptical of any broad and sweeping claim about a particular working style (such as “I don’t know anyone who is highly effective and gets everything done between 9 and 5 from Mon-Fri”) while having sampled that working style very little. Some offices are horribly unproductive, but that doesn’t mean that all of them are.
At least one person active on the EA Forum has an entry under Licenses & certifications on his LinkedIn profile listing Pareto Productivity Pro as license/certification from Charity Entrepreneurship, with a link to the Amazon page for the book How to Launch a High-Impact Nonprofit. This seems pretty deceptive to me, to list an official training or association with an organization when all you did is read their book.[5] EDIT: see this comment from Tyler Johnston for additional context.
Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior.
I saw another person list a job ofSocial Media ManagerforEffective Altruismon their LinkedIn. (EDIT: it turns out that this is legitimate. I was completely wrong to look at this and conclude that a person was exaggerating their experiences.)There are multiple people who have job titles of “senior [SOMETHING]”, or “president,” or “director of [SOMETHING]” even though they have no previous work experience in that area. Maybe that really is their official job title, but it strikes me as a bit fishy to have a title of Vice President or CEO when you are only two or three years into your career.
Related to the idea of how expertise is dependent on who you compare yourself to, there is a kind of a narrative among sinologists and China-watchers that a “westerner” who spends a week in China knows enough to write a book, and if they spend a month in China they can write an article, and if they spend a year in China they can’t even write a paragraph because they realize how little they know.
EDIT: I think this reads as too combative/aggressive, so I’m taking this word out.
Although it could just be a less rude way to say “I don’t want to talk to you,” much like the little white lies people tell to turn down an invitation or to withdraw from a conversation.
This is just from a cursory view of his LinkedIn, so maybe he has much more relevant experience that I am unaware of. This would largely or completely invalidate this critique.
But I could be totally wrong. Maybe this person received some kind of specialized/individualized training from Charity Entrepreneurship and has their permission/blessing to put this on his LinkedIn profile, so I am simply making a bad assumption based in incomplete information.
I list “social media manager” for Effective Altruism on LinkedIn—but I highlight that it’s a voluntary role, not a job. I have done this for over 10 years, maintaining the “effective altruism” page amongst others, as well as other volunteering for EA.
Ya know what? That strikes me as 100% legitimate. I had approached it from the perspective of “there isn’t an organization called Effective Altruism, so anyone claiming to work for it is somehow stretching/obfuscating the truth,” but I think I was wrong. While I have seen people use an organization’s name on LinkedIn without being associated with the organization, your example of maintaining a resource for the EA community seems permissible, especially since you note that it is volunteering.
+1 to the EAG expertise stuff, though I think that it’s generally just an honest mistake/conflicting expectations, as opposed to people exaggerating or being misleading. There aren’t concrete criteria for what to list as expertise so I often feel confused about what to put down.
@Eli_Nathan maybe you could add some concrete criteria on swapcard?
e.g. expertise = I could enter roles in this specialty now and could answer questions of curious newcomers (or currently work in this area)
interest = I am either actively learning about this area, or have invested at least 20 hours learning/working in this area .
Hi Caleb,
Ivan from the EAG team here — I’m responsible for a bunch of the systems we use at our events (including Swapcard).
Thanks for flagging this! It’s useful to hear that this could do with more clarity. Unfortunately, there isn’t a way we can add help text or sub text to the Swapcard fields due to Swapcard limitations. However, we could rename the labels/field names to make this clearer..?
For example
Areas of Expertise (3+ months work experience)
Areas of Interest (actively seeking to learn more)
Does that sound like something that would be helpful for you to know what to put down? I’ll take this to the EAG team and see if we can come up with something better. Let me know if you have other suggestions!
For what it is worth, I’d want the bar for expertise to be a lot higher than a few months of work experience. I can’t really think of any common career (setting aside highly specialized fields with lots of training, such as astronaut) in which a few months of work experience make someone an expert. Maybe Areas of Expertise (multiple years work experience)? It is tricky, because there are so many edge cases, and maybe someone had read all the research on [AREA] and is incredibly knowledge without having ever worked in that area.
That would help me! Right now I mostly ignore the expertise/interest fields, but I could imagine using this feature to book 1:1s if people used a convention like the one you suggested.
The mention of “Pareto Productivity Pro” rang a bell, so I double-checked my copy of How to Launch a High-Impact Nonprofit — and sure enough, towards the end of the chapter on productivity, the book actually encourages the reader to add that title to their Linkedin verbatim. Not explicitly as a certification, nor with CE as the certifier, but just in general. I still agree that it could be misleading, but I imagine it was done in fairly good faith given the book suggests it.
However, I do think this sort of resume padding is basically the norm rather than the exception. Somewhat related anecode from outside EA: Harvard College has given out a named award for many decades to the “top 5% of students of the year by GPA.” Lots of people — including myself — put this award in their resume hoping it will help them stand out among other graduates.
The catch is that grade inflation has gotten so bad that something like 30-40% of students will get a 4.0 in any given year, and they all get the award on account of having tied for it (despite it now not signifying anything like “top 5%.”) But the university still describes it as such, and therefore students still describe it that way on resumes and social media (you can actually search “john harvard scholar” in quotes on LinkedIn and see the flexing yourself). Which just illustrates how even large, reputable institutions support this practice through fluffy, misleading awards and certifications.
This post actually spurred me to go and remove the award from my LinkedIn, but I still think it’s very easy and normal to accidentally do things that make yourself look better in a resume — especially when there is a “technically true” justificaiton for it (like “the school told me I’m in the top 5%” or “the book told me I could add this to my resume!”), whether or not this is really all that informative for future employers. Also, in the back of my mind, I wonder whether choosing to not do this sort of resume padding creates bad selection effects that lead to people with more integrity being hired less, meaning even high-integrity people should be partaking in resume padding so long as everyone else is (Moloch everywhere!). Maybe the best answer is just making sure hiring comittees have good bullshit detectors and lean more on work trials/demonstrated aptitude over fancy certifications/job titles.
Thanks for mentioning this. I wasn’t aware of this context, which changes my initial guesswork quite a bit. I just looked it up at in Chapter 10 (Take Planning), section 10.6 has this phrase: “As you implement most or some of the practices introduced here, you have every right to add the title Pareto Productivity Pro to your business card and LinkedIn profile.” So I guess that is endorsed by Charity Entrepreneurship. While I disagree with their choice to encourage people to add what I view as a meaningless title to LinkedIn, I think it I can’t put so much blame on the individual who did this.
Yeah, agreed that it’s an odd suggestion. The idea of putting it on a business card feels so counterintuitive to me that I wonder how literally it’s meant to be taken, or if the sentence is really just a rhetorical device the authors are using to encourage the reader.
That is definitely something for us to be aware of. The simplistic narrative of “lots of people are exaggerating and inflating their experiences/skills, so if I don’t do it I will be at a disadvantage” is something that I think of when I am trying to figure out wording on a resume.
Thanks for writing this, Joseph.
Minor, but I don’t really understand this claim:
Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior.
I am curious why you think this i) gains them clout or ii) was written with that intention?
It seems very different to the other examples, which seem about claiming unfair competencies or levels of impact etc.
I personally think that taking time off work to hike is more likely to cost you status than give you status in EA circles! I therefore read that post as an attempt to promote new community norms (around work and life balance and self-discovery etc) than to gain status.
One disclaimer here is that I think I know this person, so I am probably biased. I am genuinely curious though and not feeling defensive etc.
Sure, I’ll try to type out some thoughts on this. I’ve spent about 20-30 minutes pondering this, and this is what I’ve come up with.
I’ll start by saying I don’t view this hiking post as a huge travesty; I have a general/vague feeling of a little yuckiness (and I’ll acknowledge that such gut instincts/reactions are not always a good guide to clear thinking), and I’ll also readily acknowledge that just because I interpret a particular meaning doesn’t mean that other people interpreted the same meaning (nor that the author intended that meaning).
(I’ll also note that if the author of that hiking post reads this: I have absolutely no ill-will toward you. I am not angry, I enjoyed reading about your hike, and it looked really fun. I know that tone is hard to portray in writing, and that the internet is often a fraught place with petty and angry people around every corner. If you are reading this it might come across as if I am angrily smashing my keyword simply because I disagree with something. I assure you that I am not angry. I am sipping my tea with a soft smile while I type about your post. I view this less like “let’s attack this person for some perceived slight” and more like “let’s explore the semantics and implied causation of an experience.”)
One factor is that it doesn’t seem generalizable. If 10,000 people took time off work to do a hike, how many of them would have the same positive results? From the perspective of simply sharing a story of “this is what happened to me” I think it is fine. But the messaging of “this specific action I took helped me get a new job” seems like the career equivalent of “I picked this stock and it went up during a decade-long bear market, so I will share my story about how I got wealthy.”
A second factor is the cause-and-effect. I don’t know for sure, but I suspect that the author’s network played a much larger role in getting a job than the skills picked up while hiking. The framing of the post was “It was a great career decision. I gained confidence and perspective, but also lots of transferable and work-applicable skills: persistence, attention to detail, organization, decision-making under pressure...” And I’m looking at this and thinking that those are all context-dependent skills. Just because you have an eye for detail or skill with organization when it comes to your backpack, it doesn’t mean that you will when you are looking at a spreadsheet. Just because you can make a decision when you slip down the side of a mountain doesn’t mean you can make a decision in a board room.
And I think a third factors is general vibes: it felt very self-promotional to me.[1] It struck me as similar to LinkedIn content in which something completely unrelated to work and professional life occurs, and then is squeezed into a box in order to be presented as a work-appropriate narrative with a career-relevant takeaway.
So I’ll frame this in a way that is more discussion-based: how context dependent are these kinds of general/broad skills? Taking attention to detail as an example, I can be very attentive to a system that I am familiar with and pretty change blind in a foreign setting (I notice slight changes in font in a spreadsheet, but I won’t notice if a friend got a new haircut).[2] Persistence (or determination, or grit) is also highly dependent on the person’s motivation for the particular task they are working on. How accurate is it to claim to have gained these skills on a hike, to the extent that they benefit you in an office job?
According to some IO Psychologist contacts (this is two quotes smushed together and lightly edited from when I was chatting about this topic):
I think that I tend to be more averse to marketing and self-promotional behavior than the average person, so it is possible that 100 people look at that post and 80 or 90 of them feel it isn’t self-promotional.
I’ve actually had colleagues/managers from two different professional contexts describe me as extremely attentive to detail, noticing things that nobody else did and insufficiently attentive to detail, to the extent that I am not competent to do the job (these are not direct quotes, but rather my rough characterization). The context matters a lot for how good we are at things. Determination is an easy example to illustrate the importance of context: think of doing a dull, mundane task as opposed to one you find inherently interesting and engaging.
Thanks for the detailed response, I appreciate it!
I don’t know if this is a fair assessment, but it’s hard for me to expect anything else as long as many EAs are getting sourced from elite universities, since that’s basically the planetary focus for the consumption and production of inflated credentials.
The main Swapcard example you mention seems to me like a misunderstanding of EAGs and 1-1s.
To take consulting as an example, say I am a 1st year undergrad looking to get into management consulting. I don’t need to speak to a consulting expert (probably they should change the name to be about experience instead of expertise), but I’d be very keen to get advice from someone who recently went through the whole consulting hiring process and got multiple offers, say someone a month out of undergrad.
Or another hypothetical: say I’m really interested in working in an operations/HR role within global health. I reach out to the handful of experts in the field who will be at the conference, but I want to fit in as many 1-1s as I can, and anyway the experts may be too busy, so I also reach out to someone who did an internship on the operations team of a global health charity during college. They’re not an expert in the field, but they could still brain-dump a bunch of stuff they learnt from the internship in 25 min.
And these could be about the same recently graduated person.
With the trekking example, I also know the person, and it seems extremely unlikely to me they were trying to gain power or influence (ie clout), by writing the post. It also seems to be the case that it did result in some minor outdoorsy career opportunities.
A lot of the points about transferability seem like they would apply to many job to job changes—e.g. ‘why would you think your experience running a startup would be transferable to working for a large corporation?’ But people change career direction all the time, and indeed EA has a large focus on helping people to do so.
I agree with everything but the last point. Director or CEO simply refers to a name of the position, doesn’t it?
Yes, it refers to a position. So if this is actually someone’s job title, then there kind of isn’t anything wrong with it. And I sympathize with people who found or start their own organization. If I am 22 and I’ve never had a job before but I create a startup, I am the CEO.
So by the denotation there is nothing wrong with it. The connotation makes it a bit tricky, because (generally speaking) the title of CEO (or director, or senior manager, or similar titles) refers to people with a lot of professional experience. I perceive a certain level of … self-aggrandizement? inflating one’s reputation? status-seeking? I’m not quite sure how to articulate the somewhat icky feeling I have about people giving themselves impressive-sounding titles.
I’m currently reading a lot of content to prepare for HR certification exams (from HRCI and SHRM), and in a section about staffing I came across this:
Just the other day I had a conversation about the tendency of EA organizations to over-weight how “EA” a job candidate is,[1] so it particularly stuck me to come across this today. We had joked about how a recent grad with no work experience would try figuring out how to do accounting from first principles (the unspoken alternative was to hire an accountant). So perhaps I would interpret the above quotation in the context of EA as “employees with little experience outside of EA are more likely to have a myopic view of the non-EA world.” In a very simplistic sense, if we imagine EA as one large organization with many independent divisions/departments, a lot of the hiring (although certainly not all) is internal hiring.[2]
And I’m wondering how much expertise, skill, or experience is not utilized within EA as a result of favoring “internal” hires. I think that I have learned a lot about EA over the past three years or so, but I suspect that I would perform better in most EA jobs if I had instead spent 10% of that time learning about EA and 90% of it learning about [project management, accounting, bookkeeping, EEO laws, immigration law, workflow automation tools, product management, etc.]. Nonetheless, I also suspect that if I had spent less time delving into EA, I would be a less appealing job candidate for EA orgs, who heavily weigh EA-relevant experience.[3]
It does seem almost comical how we (people involved in EA) try to invent many things for ourselves rather than simply using the practices and tools that exist. We don’t need to constantly re-invent the wheel. It is easy to joke about hiring for a position that doesn’t require someone to be highly EA, and then using “be very EA” as an selection criteria (which eliminates qualified candidates). I’ll return to my mainstay: make sure the criteria you are using for selection are actually related to ability to perform the job. If you are hiring a head of communications to manage public relations for EA, then I think it makes sense that this role needs to understand a lot of EA. If you are hiring an office manager or a data analyst, I think that it makes less sense (although I can certainly imagine exceptions).
I’m imagining a 0-10 scale for “how EA someone is,” and I think right now most roles require candidates to be a 7 or 8 or 9 on the scale. I think there are some roles where someone being a 3 or a 4 on the scale would be fine, and would actually allow a more competitive candidate pool to be considered. This is all quite fuzzy, and I think there is a decent chance that I could be wrong.[4]
“How EA someone is” is a very sloppy term for a variety of interconnected things: mission-alignment, demonstrated interaction with the EA community, reads lots of EA content, ability to use frequently used terms like “counterfactual” and “marginal,” up-to-date with trends and happenings within EA, social connections with EAs…
Actually, I wonder if there are stats on this. It would be curious to get some actual estimates regarding what percent of hires made are from people who are within EA. There would certainly be some subjective judgement calls, but I would view being “within EA” as having worked/interned/volunteered for an EA org, or having run or having been heavily involved in an EA club/group.
I have a vague feeling that heavily weighing EA-relevant experience over non-EA experience is fairly common. I did have one person in a influential position at a central EA org mention that a candidate with a graduate degree (or maybe the words spoken were “MBA”? I don’t recall exactly) gets a bit less consideration. Nonetheless, I don’t know how much this actually happens, but I hope not often.
Especially since “how EA someone is” conflates several things: belief in a mission, communication styles, working preferences, and several other things that are actually independent/distinct. People have told me that non EAs have had trouble understanding the context of meetings and trouble communicating with team members. Could we take a generic project manager with 10 years of work experience, have them do two virtual programs, and then toss them into an EA org?
I think that the worries about hiring non-EAs are slightly more subtly than this.
Sure, they may be perfectly good at fulfilling the job description, but how does hiring someone with different values affect your organisational culture? It seems like in some cases it may be net-beneficial having someone around with a different perspective, but it can also have subtle costs in terms of weakening the team spirit.
Then you get into the issue where if you have some roles you are fine hiring EAs for and some you want them to be value-aligned for, then you may have an employee who you would not want to receive certain promotions or be elevated into certain positions, which isn’t the best position to be in.
Not to mention, often a lot of time ends up being invested in skilling up an employee and if they are value-aligned then you don’t necessarily lose all of this value when they leave.
Chris, would you be willing to talk more about this issue? I’d love to hear about some of the specific situations you’ve encountered, as well as to explore broad themes or general trends. Would it be okay if I messaged you to arrange a time to talk?
Sorry, I’m pretty busy. But feel free to chat if we ever run into each other at an EA event or to B book a 1-on-1 at an EA Global.
Best books I’ve read in 2023
(I want to share, but this doesn’t seem relevant enough to EA to justify making a standard forum post. So I’ll do it as a quick take instead.)
People who know me know that I read a lot.[1] Although I don’t tend to have a huge range, I do think there is a decent variety in the interests I pursue: business/productivity, global development, pop science, sociology/culture, history. Of all the books I read in 2023, here is my best guess as to the ones that would be of most interest to an effective altruist.
For people who haven’t explored much yet
Scrum: The Art of Doing Twice the Work in Half the Time. If you haven’t worked in ‘startupy’ or lean organizations, this books may introduce you to some new ideas. I first worked for a startup in my late 20s, and I wish that I had read this book at that point.
Developing Cultural Adaptability: How to Work Across Differences. This 32 page PDF is a good introduction to ideas of working with people from other cultures. This will be particularly useful if you are going to work in a different country (although there are cultural variations within a single country). This is fairly light introduction, so don’t stop here if you want to learn more about cross-cultural communication and cross-cultural psychology.
How to Be Perfect: The Correct Answer to Every Moral Question. Less focused on productivity /professional skills, this is a fun and lighthearted exploration of different ethical theories. This book made me smile more than any other I read this year, and also introduced me to some new moral philosophers. This is probably the most easily ‘digestible’ book ever written on moral philosophy. If you enjoyed the TV Show The Good Place, you should listen to the audiobook version of this book, as it features the cast from The Good Place.
Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. If you aren’t familiar with the problems of the scientific process as it actually exists, or with the ‘industry’ of science, then this book will probably introduce you to some of these ideas, as well as make you a bit more skeptical of scientific publication in general. I think it would be great if we all slightly increased our incredulousness toward any any all new publications. It strikes me as a bit of a kindred spirit to the essay Beware the Man of One Study.
Conscious: A Brief Guide to the Fundamental Mystery of the Mind and Free Will. I started to think more about consciousness in animals this year, and these two short books were the start of my exploration. You probably won’t learn anything new if you have already done some thinking or reading about this topic, but I’m guessing that the average twenty-something interested in EA would gain a bit from reading these.
For people who have already explored a lot and know the basics
The Idealist: Jeffrey Sachs and the Quest to End Poverty. A nicely written portrait that doesn’t pull any punches, highlighting both the good and the bad. I loved how neutral the author’s tone felt; there wasn’t idolizing or vilifying. I view reading this as a good way to a) inoculate oneself a bit against hero worship, and b) understand some of the complications that come with global development work, even when you are relatively well-resourced.
Crucial Conversations Tools for Talking When Stakes Are High. It is rare for me to re-read books, but I think I will revisit this one in a few years. This are useful skills that should be practiced, both in a professional environment and in personal relationships.
How to Measure Anything: Finding the Value of Intangibles in Business. If you are already familiar with the basics of expected value and trying to quantity things via fermi estimates, then this book will help take you to the next level. I enjoyed the balance between examples and explanation, and I could see myself taking this book out and referring to it in the future to figure out the value of an estimate.
What a Fish Knows: The Inner Lives of Our Underwater Cousins. I don’t think that I gained anything specific and concrete from this book, but in a broad sense I got a much greater appreciation for non-human life, and a strong reminder of how little I know of the world. I’m trying to read more about animals as a way of building greater understanding, and there were dozens of fascinating tidbits in this book that I view as pixels in a picture or pieces in a jigsaw puzzle (in the sense that after I gather enough of them I will start to be able to understand something larger).
Maybe not so closely related to effective altruist ideas, but still worth reading for (some people)
Simply Managing: What Managers Do–and Can Do Better, and Humble Inquiry: The Gentle Art of Asking Instead of Telling. If you are interested in being a manager in the future (or if you already are a manager of people) then you should learn how to manage. While there are a lot of aspects to it, this is a good start. Henry Mintzberg is very famous and well respected when it comes to management education, and Edgar H. Schein is one of the foremost experts on organizational culture. These two books are short, simple samplings of their ideas. If you are already a skilled and experienced people manager, reading these might be a bit of a refresher, but you likely won’t encounter new concepts.
I Hate the Ivy League: Riffs and Rants on Elite Education[2]and Where You Go Is Not Who You’ll Be: An Antidote to the College Admissions Mania. If you aren’t interested in how higher education functions in American society, then skip these books. I’m interested in how higher education functions in American society, and these two books were enjoyable and educational explorations. But if you are interested in ideas of justice, gatekeeping, access, inclusion, equality, etc., then you might enjoy these two. These really got me thinking about what admissions criteria ought to be for a university education. Simplistic answers (admit everyone and have resources stretched so thin that quality is bad, or admit only those that are already very well-resourced and then give them lots more resources) don’t seem great paths to building a better society. This is an area that I want to learn more about, and I intend to read more books about this. But there is one quote that has stuck with me: “The prestige associated with going to, say, Yale, was a function of at least in part how many people wanted to get in and couldn’t. It was the logic of the nightclub, it had never occurred to me that a university was a nightclub. I thought it was more like a hospital, an institution judged by how many patients it took in and how many of those later emerged fully healed.”
Hand to Mouth: Living in Bootstrap America. If you have never been poor, read this to try and get a bit of an understanding of what it is like to live in a first world country without having much money or career stability. It isn’t brilliant literature, but it is a decently good whirlwind tour of poverty. I grew up… kind of poor, I guess? For most of my childhood my parents both had jobs, and I think that we were consistently beneath the US median household income. I never noticed it much as a kid, but looking back I can see various indications that my family didn’t have much money. If you know what it is like to be poor in America, then you probably won’t learn anything from this (although you may feel emotionally “seen” and validated).[3] A quote from a different book that seems relevant here: “That’s the difference between being privileged and being poor in America. It’s how many chances you get. If you’re wealthy, all kinds of things can happen and you’ll be okay. You can drop out of school for a year, you can get addicted to pain killers, you can have a bad car accident. No one ever says, of the upper-middle class high school kid whose parents get a terrible divorce, “I wonder if she’ll ever go to college.” She’s going to college; disruption is not fatal to life chances.”
Mixed feelings
Animal Liberation Now: The Definitive Classic Renewed. Yes, I know that it is an influential classic. But I didn’t really gain anything from it (other than the credibility to be able to say “yes, I’ve read it.”) I think that from a few conversations, a few documentaries, and a few online articles over the years I had already picked up the core messages that this book attempted to portray. I already knew that [insert animal here] are treated horribly. It is kind of like watching Dominion after you already have watched three or four other documentaries about farm animal welfare: it is just telling you what you already know.
Utopia for Realists: How We Can Build the Ideal World. It was… fine. I wanted to like it, and I like the idea of it. I’m not sure why this book didn’t resonate more with me; it really should have. I could go through it with a fine-toothed comb and justify, caveat, and explain all my reactions. But it doesn’t seem worth it, so I’ll just be satisfied with a vague shrug and move on to reading other books.
Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones. This seemed so incredibly simple that I am amazed people rave about it. But maybe this is just a matter of perspective and age. If I had read this book prior to age ≈24 I likely would have learned a lot from it.
Moral Mazes: The World of Corporate Managers. I thought that I would gain something from this, but it was all kind of… straightforward. Of course people are going to act on incentives, and when those incentives incentivize behavior that is damaging, I am not surprised that damage results. This all struck me as kind of blandly obvious. Maybe if I hadn’t previously read a bunch about behavioral economics and social psychology then maybe I would have learned new things and found this book worthwhile, but I’ve read almost all of the books on these subjects already at this point.[4]
A few books on diversity
DEI Deconstructed: Your No-Nonsense Guide to Doing the Work and Doing It Right. If you are interested in the DEI industry, what is wrong with it, and how it can be better, read this. It focuses a lot on the DEI industry, but it has a really good chapter on practical applications and on what an organization can do.
Getting to Diversity: What Works and What Doesn’t. If you just want to know what does and doesn’t work in a business/organizational environment to improve diversity, read this.
Read This to Get Smarter: About Race, Class, Gender, Disability, and More. If you are brand new to ideas about diversity and inclusion, or if you are intimidated by the terminology, or if you just want a light introduction, then read this book. It you already are familiar with the terminology and the basic ideas, then you won’t learn anything.
A majority via audiobook, so we could quibble on whether or not it really counts as reading, but it is accurate to say that I have ‘consumed’ a lot of books.
I Hate the Ivy League was basically just an audiobook of thematically related podcasts, so not a book in the traditional sense.
One thing that I feel odd about the EA community is the easy confidence I observe. It is very different from the feeling of financial precariousness (I guess the term would be precariat?). Not knowing if your job will be terminated, if you will be able to afford rent, if you will be okay skipping the doctor’s appointment, etc. Existing without stability or predictability or security causes a lot of stress. I’m stunned to meet people who are in the top decile of American income earners (somebody talked about earning nearly a million dollars in a year so casually, as if it was a normal thing), or who have donated more money in the past five years than I have earned in the past ten, or who owned a house in a high cost of living city in their mid-20s. I’m amazed at people who graduate from school and earn more than the average American income by the age of 24, and who then create/found their own organization so that they can pursue their interests and get paid for it. But there is a lot of selection bias at play here: maybe people simply don’t talk about their upbringing and I make shallow and incorrect assumptions. This should be interpreted as musings that are very low-confidence.
No, not literally all of them. I mean that if you compile a list of the most recommended or most read books in behavioral economics and social psychology, I think that I have already read between 40% and 80% of them. Just popular press books, not academic books. So I’d consider myself a fairly well-read layperson.
Super interesting list! I hadn’t heard of most of these and have ordered a few of them to read. Thank you!
One of the best experiences I’ve had at a conference was when I went out to dinner with three people that I had never met before. I simply walked up to a small group of people at the conference and asked “mind if I join you?” Seeing the popularity of matching systems like Donut in Slack workspaces, I wonder if something analogous could be useful for conferences. I’m imagining a system in which you sign up for a timeslot (breakfast, lunch, or dinner), and are put into a group with between two and four other people. You are assigned a location/restaurant that is within walking distance of the conference venue, so the administrative work of figuring out where to go is more-or-less handled for you. I’m no sociologist, but I think that having a small group is better for conversation than a large group, and generally also better than a two-person pairing. An MVP version of this could perhaps just be a Google Sheet with some RANDBETWEEN formulas.
The topics of conversation were pretty much what you would expect for people attending an EA conference: we sought advice about interpersonal relationships, spoke about careers, discussed moral philosophy, meandered through miscellaneous interests, shared general life advice, and so on. None of us were taking any notes. None of us sent any follow up emails. We weren’t seeking advice on projects or trying to get the most value possible. We were simply eating dinner and having casual conversation.
When I claim this was one of the best experiences, I don’t mean “best” in the sense of “most impactful,” but rather as as 1) fairly enjoyable/comfortable, 2) distinct from the talks and the one-on-ones (which often tend to blur together in my memory), and 3) I felt like I was actually interacting with people rather than engaging in “the EA game.”[1] I think that third aspect felt like the most important for me.
Of course, if could simply be that this particular group of individuals just happened to mesh well, and that this specific situation it isn’t something which can be easily replicated.
“The EA game” is very poorly conceptualized on my part. I apologize for the sloppiness of it, but I’ll emphasize that this is a loose concept that I’ve just started thinking about, rather than something rigorous. I think of it as something along the lines of “trying to extract value or trying to produce value.” Exploring job opportunities, sensing if someone is open to a collaboration of some type, getting advice on career plans, picking someone’s brain on their area of expertise, getting intel on new funders and grants, and so on. It is a certain type of professional and para-professional networking. You have your game face on, because there is some outcome that is dependent on your actions and on how people perceive you. This is in contrast to something like interacting without an agenda, or being authentic and present.
@Christoph Hartmann has developed a tool that might be useful! Might try to see if we can use it at EAGxUtrecht. Below is a message he sent me explaining it:
Thanks for tagging me! Fully agree with you Joseph that an easier way to socialise with strangers at conferences would be great and that’s exactly what I’m trying to do with this app. Let me know if you know anybody organising conferences or communities for whom this could be helpful.
I wish that people wouldn’t use “rat” as shorthand for “rationalist.”
For people who aren’t already aware of the lingo/jargon it makes things a bit harder to read and understand. Unlike terms like “moral patienthood” or “mesa-optimizers” or “expected value,” a person can’t just search Google to easily find out what is meant by a “rat org” or a “rat house.”[1] This is a rough idea, but I’ll put it out there: the minimum a community needs to do in order to be welcoming to newcomers is to allow newcomers to figure out what you are saying.
Of course, I don’t expect that reality will change to meet my desires, and even writing my thoughts here makes me feel a little silly, like a linguistic prescriptivist tell people to avoid dangling participles.
Try searching Google for what is rat in effective altruism and see how far down you have to go before you find something explaining that rat means rationalist. If you didn’t know it already and a writer didn’t make it clear from context that “rat” means “rationalist”, it would be really hard to figure out what “rat” means.
For what it’s worth, gpt4 knows what rat means in this context: https://chat.openai.com/share/bc612fec-eeb8-455e-8893-aa91cc317f7d
(I’m writing with a joking, playful, tongue-in-cheek intention) If we are setting the bar at “to join our community you need to be at least as well read at GPT4,” then I think we are setting the bar too high.
More seriously: I agree that it isn’t impossible for someone to figure out what it means, it is just a bit harder than I would like. Like when someone told me to do a “bow tech” and I had no idea what she was talking about, but it turns out she was just using a different name for a Fermi estimate (a BOTEC).
I agree that we should tolerate people who are less well read than GPT-4 :P
I have the opposite stance,
it is a cool and cute shorthand, so I’d like for it to be the widely accepted meaning of rat.
A very minor thought.
TLDR: Try to be more friendly and supportive, and to display/demonstrate that in a way the other person can see.
Slightly longer musings: if you attend an EA conference (or some other event that involves you listening to a speaker), I suggest that you:
look at the speaker while they are speaking
have some sort of smile, nodding, or otherwise encouraging/supportive body language or facial expression.
This is likely less relevant for people that are very experienced public speakers, but for people that are less comfortable and at ease speaking in front of a crowd[1] it can be pretty disheartening to look out at an audience and see the majority of people looking at their phone and their laptops.
I was at EAGxNYC recently, and I found it a little disheartening at how many people in the audience were paying attention to their phones and laptops instead of paying attention to the speaker.[2] I am guilty of doing this in at least one talk that I didn’t find interesting, and I am moderately ashamed of my behavior. I know that I wouldn’t want someone to do that to me if I was speaking in front of a crowd. One speaker mentioned to me later that they appreciated my non-verbal support/agreement.[3]
I’m guessing this correlates pretty strongly with age and with professional status/seniority, so it would probably have a greater positive impact
I do understand that taking notes can be really helpful, but from the point of view of the speaker they can’t tell if an audience member is taking rigorous notes or is browsing cat videos on YouTube. We can talk about the optimum scenario for maximum global utility, but I want us (as a community) to also remember that there is a person standing in front of us.
Although I think I may tend to be more expressive than many of the EAs I’ve interacted with, especially when it comes to friendliness, support, enthusiasm, etc.
I want to provide an alternative to Ben West’s post about the benefits of being rejected. This isn’t related to CEA’s online team specifically, but is just my general thoughts from my own experience doing hiring over the years.
While I agree that “the people grading applications will probably not remember people whose applications they reject,” two scenarios[1] come to mind for job applicants that I remember[2]:
The application is much worse than I expected. This would happen if somebody had a nice resume, a well-put together cover letter, and then showed up to an interview looking slovenly. Or if they said they were good at something, and then were unable to demonstrate it when prompted.[3]
Something about the application is noticeably abnormal (usually bad). This could be the MBA with 20 years of work experience who applied for an entry level part-time role in a different city & country than where he lived[4]. This could be the French guy I interviewed years ago who claimed to speak unaccented American English, but clearly didn’t.[5] It could be the intern who came in for an interview and requested a daily stipend that was higher than the salary of anyone on my team. If you are rude, I’ll probably remember it. I remember the cover letter that actually had the wrong company name at the top (I assume he had recently applied to that company and just attached the wrong file). I also remember the guy I almost hired who had started a bibimbap delivery service for students at his college, so impressive/good things can also get you remembered.
A big caveat here is that memories are fuzzy. If John Doe applies to a job and I reject him and three months later we meet somehow and he says “Hi, I’m John Doe” I probably wouldn’t remember that John Doe applied, nor that I rejected him (unless his name was abnormally memorable, or there was something otherwise notable to spark my memory). But if he says “Hi, I’m John Doe. I do THING, and I used to ACCOMPLISHMENT,” then maybe I’d remember looking at his resume or that he mentioned ACCOMPLISHMENT in a cover letter. But I would expect more than 90% of applications I look at fade completely from my mind within a few days.
I think that it is rare. I have memories of less than a dozen specific applications out of the 1000s I’ve looked at over the years, and if you are self-aware enough to be reading this type of content then you probably won’t have an application bad enough for me to remember.
The other thing I would gently disagree with Ben West on is about how getting rejected can be substantially positive.[6] My rough perspective (not based on data, just based on impressions) is that it is very rare that getting rejected from a job application is a good thing. I imagine that there are some scenarios in which a strong candidate doesn’t get hired, and then the hiring manager refers the candidate to another position. That would be great, but I also think that it doesn’t happen very often. I don’t have data on “of candidates that reach the 3rd stage or further of a hiring process but are not hired, what percent have some specific positive result from the hiring process,” but my guess is that it is a low percentage.
Nonetheless, my impression is that the hiring rounds Ben runs are better than most, and the fact that he is willing to give feedback or make referrals for some rejected candidates already puts his hiring rounds in the top quartile or decile by my judgement.
To the extent that the general claim is “if you think you are a reasonable candidate, please apply,” I agree. You miss 100% of the shots you don’t take. If you are nervous about applying to EA organizations because you think a rejection could damage your reputation at that and other organizations, if your application is better than the bottom 5-10%, you have nothing to worry about. Have a few different people check your resume to make sure you haven’t got any low hanging fruit improvements, and go for it.
Actually, it is just two variations of a single “application is bad” scenario.
I’m thinking about real applications I’ve seen for each of these things that I mention. But they are all several years old, from before I became aware of EA.
I remember interviewing somebody in 2017 or so who was talking about his machine learning project, but then when I poked and prodded he had just cobbled together templates from a tutorial. And I’ve had this linguistically a few times, when a resume /cover letter claims a high level of competence in a language (bilingual, fluency, “practically native”), or something similarly high, yet the person struggles to converse in that language.
I’m 100% open to people taking part-time jobs if they want them, and I don’t mind someone “overqualified” doing a job. But if the job is in-person and requires you to speak the local language, you’ll have to at least convince me why you are a good fit.
His English was very good, far better than my French, and I assume that he spent many hours practicing and studying. But it was noticeably not American English, and that particular job required incumbents to be native English speakers.
There is the general idea getting rejected from MEDIOCRE_COMPANY enabled you to apply and get hired at GREAT_COMPANY. But that seems bland/obvious enough that I’ll set it aside.
A brief thought on ‘operations’ and how it is used in EA (a topic I find myself occasionally returning to).
It struck me that operations work and non-operations work (within the context of EA) maps very well onto the concept of staff and line functions. Line function are those that directly advances an organization’s core work, while staff functions are those that do not. Staff functions have advisory and support functions; they help the line functions. Staff functions are generally things like accounting, finance, public relations/communication, legal, and HR. Line functions are generally things like sales, marketing, production, and distribution. The details will vary depending on the nature of the organization, but I find this to be a somewhat useful framework for bridging concepts between EA and the broader world.
It also helps illustrate how little information is conveyed if I tell someone I work in operations. Imagine ‘translating’ that into non-EA verbiage as I work in a staff function. Unless the person I am talking to already has a very good understanding of how my organization works, they won’t know what I actually do.
I’m skimming through an academic paper[1] that I’d roughly describe as cross-cultural psychology about morality, and the stark difference between what kinds of behaviors Americans and China view as immoral[2] was surprising to me.
The American list has so much of what I could consider as causing harm to others, or malicious. The Chinese list has a lot of what I would consider as rude, crass, or ill-mannered. The differences here remind me of how I have occasionally pushed against the simplifying idea of words having easy equivalents between English and Chinese.[3]
There are, of course, issues with taking this too seriously: issues like spitting, cutting in line, or urinating publicly are much more salient issues in Chinese society than in American society. I’m also guessing that news stories about murders and thefts are more commonly seen in American media than in China’s domestic media. But overall I found it interesting, and a nice nudge/reminder against the simplifying idea that “we are all the same.”
Dranseika, V., Berniūnas, R., & Silius, V. (2018). Immorality and bu daode, unculturedness and bu wenming. Journal of Cultural Cognitive Science, 2, 71-84.
Note that there are issues here relating to meaning of the words in English and Chinese (immoral and bu daode) not being quite the same, which is a big part of the paper. In fact, the authors even claim that daode is not a reasonable translation for morality (a claim that I roughly agree with).
Similarly to morality, words like friend, cousin, to be open, or hair have different connotations and are used in different ways, and shouldn’t be viewed as exact translations, but rather as rough analogues. My naïve assumption is that the more closely related languages and culture are, the easier it is to translate concepts directly.
I wonder if the main difference is that the Americans and Lithuanians are responding more based on how bad the things seem to be, while the Chinese are responding more based on how common they are. Most of the stuff on the Chinese list also seems bad to me, just not nearly as bad as violence.
I’d think the article you’re referencing (link) basically argues against considering “daode” to mean “morality” and vice-versa.
The abstract: “In contemporary Western moral philosophy literature that discusses the Chinese ethical tradition, it is a commonplace practice to use the Chinese term daode 道德 as a technical translation of the English term moral. The present study provides some empirical evidence showing a discrepancy between the terms moral and daode.”
Yes. The idea of English immoral and Chinese bu daode not being quite the same is a big part of the paper.
I think this is a really big and valuable finding, and generally agree with your thinking about language and morality differences, which are valuable research areas.
Anyone doing a deeper dive in the paper might want to think about whether Chinese survey participants are surprised to see relatively extreme and serious crimes like theft and violence and decide not to touch those concepts with a ten foot pole, and default to things that people frequently talk about or are frequently criticized by official news sources and propaganda.
Not that they’re super afraid of checking a box or anything; it’s just that it’s only a survey and they don’t know the details of what’s going on, and by default the tiny action is not worth something complicated happening or getting involved in something weird that they don’t understand. Or maybe it’s only that they think it’s acceptable to criticize things that everyone is obviously constantly criticizing, especially in an unfamiliar environment where everything is being recorded on paper permanently (relative to verbal conversations which are widely considered safer and more comfortable). It’s not that people are super paranoid, but, like, why risk it if some unfair and bizarre situation could theoretically happen (e.g. corruption-related, someone’s filling quotas), and conformity is absolutely guaranteed to be safe and cause no major or minor disturbances to your daily life?
I didn’t read the paper, and these musings should only be seriously considered as potentially helpful for people reading the paper. The paper seems to have run other forms of surveys that point towards similar conclusions.
From the study it looks like participants were given a prompt and asked to “free-list” instead checking boxes so it might be more indicative of what’s actually on people’s minds.
The immoral behaviors prompt being:
My impression is that the differences between the American and Chinese lists (with the Lithuanian list somewhat in between) appear to be a function of differences in the degree of societal order (i.e., crime rates, free speech), cultural differences (i.e., extent of influence of: Anglo-American progressivism, purity norms of parts of Christianity, traditional cultures, and Confucianism), and demographics (i.e, topics like racism/discrimination that might arise in contexts that are ethnically diverse instead of homogenous).
I sort of don’t agree with this idea, and I’m trying to figure out why. It is so different from a formal membership (like being a part of a professional association like PMI), in which you have a list of members and maybe a card or payment.
Here is my current perspective, which I’m not sure that I fully endorse: on the ‘ladder’ or being an EA (or of any other informal identity) you don’t have to be on the very top rung to be considered part of the group. You probably don’t even have to be on the top handful of rungs. Is halfway up the ladder enough? I’m not sure. But I do think that you need to be higher than the bottom rung or two. You can’t just read Doing Good Better and claim to be an EA without any additional action. Maybe you aren’t able to change your career due to family and life circumstances. Maybe you don’t earn very much money, and thus aren’t donating. I think I could still consider you an EA if you read a lot of the content and are somehow engaged/active. But there has to be something. You can’t just take one step up the ladder, then claim the identity and wander off.
My brain tends to jump to analogies, so I’ll use these to try and illustrate my examples:
If I visit your city and watch your local sports team for an hour, and then never watch them play again, I can’t really claim that I’m a fan of your team, can I? The fans are people who watch the matches regularly, who know something about the team, who really feel a sense of connection.
If I started lifting weights twice per week, and I started this week, is it too early for me to identify as a weight lifter? Nobody is going to police the use of the term “weight lifter,” but it feels premature. I’d feel better waiting until I have a regular habit of this activity before I publicly claim the identity.
If I go to yoga classes, which sometimes involve meditation, and I don’t do any other meditation outside of ~5 every now and then, can I call myself a meditator? Meh… If a person never intentionally or actively does meditation, and they just happen to do it when it is part of a yoga class, I would lean toward “no.”
To give more colour to this. During the hype of FTX Future Fund a lot of people called themselves EAs in order to try show value alignment to try get funding and it was painfully awkward and obvious. I think the feeling you’re naming is something like a fair-weather EA effect that dilutes trust within the community and the self-commitment of the label.
That is a good point, and I like the phrasing of fair-weather EA.
I interpreted it in a more literal way, like it’s just true that anyone can literally call themselves part of EA. That doesn’t mean other people consider it accurate.
Good point.
I get the sentiment, but what’s the alternative?
I don’t think you can define who gets to identify as something, whether that’s gender or religion or group membership.
I’m a Christian and I think anyone should be able to call themselves call themselves a Christian, no issue with that at all no matter what they believe or whatever their level of commitment or how good or bad they are as a person.
Any alternative means that someone else has to make a judgement call based on objective or subjective criteria, which I’m not comfortable with.
TBH I doubt people will be clamouring for the EA title for status or popularity haha.
Yeah, I think you are right in implying there aren’t really any good alternatives. We could try having a formal list of members who all pay dues to a central organization, but (having put almost no thought into it) I assume that would come with it’s own set of problems. And I also feel comfortability with an implication that we should have someone else making a judgment based on externally visible criteria. I probably wouldn’t make the cut! (I hardly donate at all, and my career hasn’t been particularly impactful either)
Your example of Christianity makes me think about EA being a somewhat “action-based identity.” This is what I mean: I can verbally claim a particular identity (Christianity, or EA, or something else), and that matters to an extent. But what I do matters a lot also, especially if it is not congruent with the identity I claim. If I claim to be Christian but I fail to treat my fellow man with love and instead I am cruel, other people might (rightly) question how Christian I am. If I claim to be an EA but I behave in anti-EA ways (maybe I eat lots of meat, I fail to donate discretionary funds, I don’t work toward reducing suffering, etc.) I won’t have a lot of credibility as an EA.
I’m not sure how to parse the difference between a claimed identity and a demonstrated identity, but I’d guess that I could find some good thoughts about it if I were willing to spend several hours diving into some sociology literature about identity. I am curious about it, but I am 20-minutes curious, not 8-hours curious. Haha.
EDIT: after mulling over this for a few more minutes, I’ve made this VERY simplistic framework that roughly illustrated my current thinking. There is a lot of interpretation to be made regarding what behavior counts as in accordance with an EA identity or incongruent with an EA identity (eating meat? donating only 2%? not changing your career?). I’m not certain that I fully endorse this, but it gives me a starting point for thinking about it.
100% I really like this. You can claim any identity, but how much credibility you have with that identity depends on your “demonstrated identity”. There is risk though to the movement with this kind of all takers appoach. Before I would have thought that the odd regular person behaving badly while claiming to be EA wasn’t a big threat.
Then there was SBF and the sexual abuse scandals. These however were not so much an issue of fringe, non-committed people claiming to EA and tarnishing the movement, but mostly high profile central figures tarnishing the movement.
Reflecting on this, perhaps the actions of high profile or “core” people matter more than people on the edge, who might claim to be EA without serious committment.
I mean I think it’ll come in waves. As I said in my comment below when FTX Future Fund was up and regrants were abound I had many people around me fake the EA label with hilarious epistemic tripwires abound. Then when FTX collapsed those people were quiet. I think as AI Safety gets more prominent this will happen again in waves. I know a few humanities people pivoting to talking about AI Safety and AI bias people thinking of how to get grant money.
I’m very pleased to see that my writing on the EA Forum is now referenced in a job posting from Charity Entrepreneurship to explain to candidates what operations management is, described as “a great overview of Operations Management as a field.” This gives me some warm fuzzy feelings.
I just looked at [ANONYMOUS PERSON]’s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22.
Oh, well. As they say, comparison is the thief of joy. I’ll try to focus on doing the best I can with the hand I’m dealt.
Hi Joseph :) Based on what you’ve written I’m going to guess you have probably donate more than 99% of the world’s population to effective charities. So you’re probably crushing it!
Haha, thanks for bringing a smile to my face.
❤️
Why not start taking those steps today?
Because my best estimate is that there are different steps toward different paths that would be better than trying to rewind life back to college age and start over. Like the famous Sylvia Plath quote about life branching like a fig tree, unchosen paths tend to wither away. I think that becoming a software developer wouldn’t be the best path for me at this point: cost of tuition, competitiveness of the job market for entry-level developers, age discrimination, etc.
Being a 22-year old fresh grad with a bachelor’s degree in computer science in 2010 is quite a different scenario than being a 40-year old who is newly self-taught through Free Code Camp in 202X. I predict that the former would tend to have a lot of good options (with wide variance, of course), while the latter would have fewer good options. If there was some sort of ‘guarantee’ regarding a good job offer or if a wealthy benefactor offered to cover tuition and cost of living while I learn then I would give training/education very serious consideration, but my understanding is that the 2010s were an abnormally good decade to work in tech, and there is now a glut of entry-level software developers.
Is talk about vegan diets being more healthy is mostly just confirmation bias and tribal thinking? A vegan diet can be very healthy or very unhealthy, and a non-vegan diet can also be very healthy or very unhealthy. The simplistic comparisons that I tend to see are contrasting vegans who put a lot of care and attention toward their food choices and the health consequences, versus people who aren’t really paying attention to what they eat (something like the standard American diet or some similar diet without much intentionality). I suppose in a statistics class we would talk about non representativeness.
Does the actual causal factor for health tend to be something more like cares about diet, or pays attention to what they eat, or socio-economic status? If we controlled for factors like these, would a vegan diet still be healthier than a non-vegan diet?
I also think if often is. I find discussions for and against veganism surprisingly divisive and emotionally-charged (see e.g. r/AntiVegan and r/exvegans )
That said, my understanding is that many studies do control for things like socio-economic status, and they mostly find positive results for many diets (including, but not exclusively, plant-based ones). You can see some mentioned in a previous discussion here.
In general, I think it’s very reasonable when deciding whether something is “more healthy” to compare it to a “standard”. As an extreme example, I would expect a typical chocolate-based diet to be less healthy than the standard American diet. So, while it would be healthier than a cyanide-based diet, it would still be true and useful to say that a chocolate-based diet is unhealthy.
You choose great examples! 😂
Strong upvote for the attempted mirth—I think I’m one of the few that appreciates it around here :D.
I think your confounders are on the money.
You might be interested in Elizabeth’s Change my mind: Veganism entails trade-offs, and health is one of the axes. I especially appreciated her long list of cruxes, the pointer to Faunalytics’ study of nutritional issues in ex-vegans & ex-vegetarians, and her analysis of that study attempting to adjust for its limitations which basically strengthens its findings (to my reading).
I’d also guess, without much evidence, that there’s a halo effect-like thing going on where if someone really care about averting animal suffering a vegan diet starts seeming more virtuous, which spills over into their assessment of its health benefits.
In a recent post on the EA forum (Why I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley), I couldn’t help but notice that a comments from famous and/or well-known people got lots more upvotes than comments by less well-known people, even though the content of the comments was largely similar.
I’m wondering to what extent this serves as one small data point in support of the “too much hero worship/celebrity idolization in EA” hypothesis, and (if so) to what extent we should do something about it. I feel kind of conflicted, because in a very real sense reputation can be a result of hard work over time,[1] and it seems unreasonable to say that people shouldn’t benefit from that. But it also seems antithetical to the pursuit of truth, philosophy, and doing good to weigh to the messenger so heavily over the message.
I’m mulling this over, but it is a complex and interconnected enough issue that I doubt I will create any novel ideas with some casual thought.
Perhaps just changing the upvote buttons to something more like this content creates nurtures a discussion space that lines up with the principles of EA? I’m not confident that would change much.
Although not always. Sometimes a person is just in the right place at the right time. Big issues of genetic lottery and class matter. But in a very simplistic example, my highest ranking post on the EA forum is not one of the posts that I spent hours and hours thinking about and writing, but instead is one where I simply linked to a article about EA in the popular press and basically said “hey guys, look how cool this is!”
I’m not convinced by this example; in addition to expressing the view, Toby’s message is a speech act that serves to ostracize behaviour in a way that messages from random people do not. Since his comment achieves something the others do not it makes sense for people to treat it differently. This is similar to the way people get more excited when a judge agrees with them that they were wronged than when a random person does; it is not just because of the prestige of the judge, but because of the consequences of that agreement.
I’m glad that you mentioned this. This makes sense to me, and I think it weakens the idea of this particular circumstance as an example of “celebrity idolization.”
If the EA forum had little emoji reactions for this made me change my mind or this made me update a bit, I would use them here. 😁
I agree as to the upvotes but don’t find the explanation as convincing on the agreevotes. Maybe many people’s internal business process is to only consider whether to agreevote after having decided to upvote?
Yeah, and in general there’s an extremely high correlation between upvotes and agreevotes, perhaps higher than there should be. It’s also possible that some people don’t scroll to the bottom and read all the comments.
I definitely think you should expect a strong correlation between “number of agree-votes” and “number of approval-votes”, since those are both dependent on someone choosing to engage with a comment in the first place, my guess is this explains most of the correlation.
And then yeah, I still expect a pretty substantial remaining correlation.
I wish that it was possible for agree votes to be disabled on comments that aren’t making any claim or proposal. When I write a comment saying “thank you” or “this has given me a lot to think about” and people agree vote (or disagree vote!), it feels to odd: there isn’t even anything to agree or disagree with there!
In those cases I would interpret agree votes as “I’m also thankful” or “this has also given me a lot to think about”
If we interpret an up-vote as “I want to see more of this kind of thing”, is it so surprising that people want to see more such supportive statements from high-status people?
I would feel more worried if we had examples of e.g. the same argument being made by different people and the higher-status person getting rewarded more. Even then—perhaps we do really want to see more of high-status people reasoning well in public.
Generally, insofar as karma is a lever for rewarding behaviour, we probably care more about the behaviour of high-status people and so we should expect to see them getting more karma when they behave well, and also losing more when they behave badly (which I think we do!). Of course, if we want karma to be something other than an expression of what people want to see more of then it’s more problematic.
Toby’s average karma-per-comment definitely seems higher than average, but it isn’t so much higher than that of other (non-famous) quality posters I spot-checked as to suggest that there are a lot of people regularly upvoting his comments due to hero worship/celebrity idolization. I can’t get the usual karma leaderboard to load to more easily point to actual numbers as opposed to impressionistic ones.
I have this concept I’ve been calling “kayfabe inversion” where attempts to create a social reality that $P$ accidentally enforces $\not P$. The EA vibe of “minimize deference, always criticize your leaders” may just be, by inscrutable social pressures, increasing deference and hero worship and so on. Spurred by my housemate’s view of DoD and it’s ecosystem of contractors (because their dad has a long career in it) that perhaps the military’s explicit deference and hierarchies actually make it easier to do meaningful criticism of or disagreement with leaders, compared to the implicit hierarchies that emerge when you say that you want to minimize deference.
Something along these lines.
Perhaps this hypothesis is made clear by a close reading of tyranny of structurelessness, idk.
Could I bother you to rephrase “$P$ accidentally enforces $\not P$”? I don’t know what you mean by using these symbols.
Oh sorry I just meant a general form for “any arbitrary quality a community may wish to cultivate”
I’ve found explanation freeze to be a useful concept, but I haven’t found a definition or explanation of it on the EA Forum. So I thought I’d share a little description of explanation freeze here so that anyone searching the forum can find it, and so that I can easily link to it.
The short version is:
The slightly longer explanation is:
Thanks—this is helpful as a term, and closely related to privileging the hypothesis; https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis The general solution, of course,is expensive but necessary; https://secondenumerations.blogspot.com/2017/03/episode-6-method-of-multiple-working.html
I like the linguistic implication of ‘freezing’ the explanation, as though implying other states (not frozen, warm, malleable, etc).
Baking this implication directly into the term for it carries significant value.
Is this just a combination of anchoring with confirmation bias? Or have I misunderstood.
I think that describing it as a combination of anchoring with confirmation bias seems roughly accurate. Maybe there might be an element of availability bias tossed in as well, since we latch on to the most readily available answer?
I’m not sure, but I think that Julia Galef has spoken about the concept of explanation freeze in interviews or on podcasts, so you might be able to dig up a more detailed and expansive explanation. But with some cursory Google Searching I was only able to find passing references to it, rather than more full explanations.
The 80,000 Hours team just published that “We now rank factory farming among the top problems in the world.” I wonder if this is a coincidence or if this planned to coincide with the EA Forum’s debate week? Combined with the current debate week’s votes on where an extra $100 should be spent, these seem like nice data points to show to anyone that claims EA doesn’t care about animals.
As far as I’m aware it’s a coincidence, but I’m v happy about this :)
Every now and I then I see (or hear) people involved in EA refer to Moloch[1], as if this is a specific force that should be actively resisted and acted against. Genuine question: are people just using the term “Moloch” to refer to incentives [2] that nudge us to do bad things? Is there any reason why we should say “Moloch” instead of “incentives,” or is this merely a sort of in-group shibboleth? Am I being naïve or otherwise missing something here?
Presumably, Scott Alexander’s 2014 Meditations on Moloch essay has been very widely read among EAs.
As well as the other influences on our motives from things external to ourselves, such as the culture and society that we grew up in, or how we earn respect and admiration from peers.
I see it as “incentives that nudge us to do bad things”, plus this incentive structure being something that naturally emerges or is hard to avoid (“the dictatorless dictatorship”).
I think “Moloch” gets this across a bit better than just “incentives” which could include things like bonuses which are deliberately set up by other people to encourage certain behaviour.
This is actually a pretty big issue. It was basically locked in to Meditations on Moloch because it was too good. The essay does a really good job explaining it, and giving examples that create the perspective you need to understand the broad applicability of the concept, but has too many words; “incentives” or even a single phrase (e.g. “race to the bottom”) would have fewer words, but it wouldn’t give the concept the explanation that it’s worth. Maybe there could be some kind of middle ground.
I’ll admit that I really like how there are so many examples shared in Meditations on Moloch, which helps it serve as a kind of intuition flooding.
oh my GOD I cannot tell you how much I needed this
I’m concerned whenever I see things like this:
In my mind, this seems anti-scouty. Rather than finding what works and what is impactful, it is saying “I want my team to win.” Or perhaps the more charitable interpretation is that this person is talking about a rough hypothesis and I am interpreting it as a confident claim. Of course, there are many problems with drawing conclusions from small snippets of text on the internet, and if I meet this person and have a conversation I might feel very differently. But at this point it seems like a small red flag, demonstrating that there is a bit less cause-neutrality here (and a bit more being wedded to a particular issue) than I would like. But it is hard to argue with personal fit; maybe this person simply doesn’t feel motivated about lab grown meat or bednets or bio-risk reduction, and this is their maximum impact possibility.
I changed the exact words to that I won’t publicly embarrass or draw attention to the person who wrote this. But to be clear, this is not a thought experiment of mine, someone actually wrote this. EDIT: And the cause this individual promoted is more along the lines of helping homeless people in America or protect elephants or rescuing political dissidents: it would probably have a positive effect, but I doubt it would be competitive with saving a life (in expectation) for 4-6 thousand USD.
In my experience, many of those arguments are bad and not cause-neutral, though to me your take seems too negative—cause prioritization is ultimately a social enterprise and the community can easily vet and detect bad cases, and having proposals for new causes to vet seems quite important (i.e. the Popperian insight, individuals do not need to be unbiased, unbiasedness/intersubjectivity comes from open debate).
You make a good point. I probably allow myself to be too affected by claims (such as “saving the great apes should be at the center of effective altruism”), when in reality I should simply allow the community sieve to handle them.
This feels misplaced to me. Making an argument for some cause to be prioritised highly is in some sense one of the core activities of effective altruism. Of course, many people who’d like to centre their pet cause make poor arguments for its prioritisation, but in that case I think the quality of argument is the entire problem, not anything about the fact they’re trying to promote a cause. “I want effective altruists to highly prioritise something that they currently don’t” is in some sense how all our existing priorities got to where they are. I don’t think we should treat this kind of thing as suspicious by nature (perhaps even the opposite).
Hi Ben,
It seems to me that one should draw a distinction between, “I see this cause as offering good value for money, and here is my reasoning why”, and “I have this cause that I like and I hope I can get EA to fund it”. Sometimes the latter is masquerading as the former, using questionable reasoning.
Some examples that seem like they might be in the latter category to me:
https://forum.effectivealtruism.org/posts/Dytsn9dDuwadFZXwq/fundraising-for-a-school-in-liberia
https://forum.effectivealtruism.org/posts/R5r2FPYTZGDzWdJEY/how-to-get-wealthier-folks-involved-in-mutual-aid
https://forum.effectivealtruism.org/posts/zsLcixRzqr64CacfK/zzappmalaria-twice-as-cost-effective-as-bed-nets-in-urban
In any case though, I’m not sure it makes a difference in terms of the right way to respond. If the reasoning is suspect, or the claims of evidence are missing, we can assume good faith and respond with questions like, “why did you choose this program”, “why did you conduct the analysis in this way”, or “have you thought about these potentially offsetting considerations”. In the examples above, the original posters generally haven’t engaged with these kind of questions.
If we end up with people coming to EA looking for resources for ineffective causes, and then sealioning over the reasoning, I guess that could be a problem, but I haven’t seen that here much, and I doubt that sort of behavior would ultimately be rewarded in any way.
Ian
The third one seems at least generally fine to me—clearly the poster believes in their theory of change and isn’t unbiased, but that’s generally true of posts by organizations seeking funding. I don’t know if the poster has made a (metaphorically) better bednet or not, but thought the Forum was enhanced by having the post here.
The other two are posts from new users who appear to have no clear demonstrated connection to EA at all. The occasional donation pitch or advice request from a charity that doesn’t line up with EA very well at all is a small price to pay for an open Forum. The karma system dealt with preventing diversion of the Forum from its purposes. A few kind people offered some advice. I don’t see any reason for concern there.
I agree, and to be clear I’m not trying to say that any forum policy change is needed at this time.
those posts all go out of their way to say they’re new to EA. I feel pretty differently about someone with an existing cause discovering EA and trying to fundraise vs someone who integrated EA principles[1] and found a new cause they think is important.
I don’t love the phrase “EA principles”, EA gets some stuff critically wrong and other subcultures get some stuff right. But it will do for these purposes.
I think that to a certain extent that is right, but this context was less along the lines of “here is a cause that is going to be highly impactful” and more along the lines of “here is a cause that I care about.” Less “mental health coaching via an app can be cost effective” and more like “let’s protect elephants.”
But I do think that in a broad sense you are correct: proposing new interventions, new cause areas, etc., is how the overall community progresses.
I think a lot of the EA community shares your attitude regarding exuberant people looking to advance different cause areas or interventions, which actually concerns me. I am somewhat encouraged by the disagreement with you regarding your comment that makes this disposition more explicit. Currently, I think that EA, in terms of extension of resources, has much more solicitude for thoughts within or adjacent to recognized areas. Furthermore, an ability to fluently convey ones ideas in EA terms or with an EA attitude is important.
Expanding on jackva re the Popperian insight, having individuals passionately explore new areas to exploit is critical to the EA project and I am a bit concerned that EA is often disinterested in exploring in directions where a proponent lacks some of the EA’s usual trappings and/or lacks status signals. I would be inclined to be supportive of passion and exuberance in the presentation of ideas where this is natural to the proponent.
I suspect you are right that many of us (myself included) focus more than we ought to on how similar an idea sounds in relation to ideas we are already supporting. I suppose maybe a cruxy aspect of this is how much effort/time/energy we should spend considering claims that seem unreasonable at first glance?
If someone honestly told me that protecting elephants (as an example) should be EA’s main cause area, the two things that go through my heard first are that either that this person doesn’t understand some pretty basic EA concepts[1], or that there is something really important to their argument that I am completely ignorant of.
But depending on how extreme a view it is, I also wonder about their motives. Which is more-or-less what led me to viewing the claim as anti-scouty. If John Doe has been working for elephant protecting (sorry to pick on elephants) for many years and now claims that elephant protection should be a core EA cause area, I’m automatically asking if John is A) trying to get funding for elephant protection or B) trying to figure out what does the most good and to do that. While neither of those are villainous motives, the second strikes me as a bit more intellectually honest. But this is a fuzzy thing, and I don’t have good data to point to.
I also suspect that I myself may have an over-sensitive “bullshit detector” (for lack of a more polite term), so that I end up getting false positives sometimes.
Expected value, impartiality, ITN framework, scout mindset, and the like
I agree that advocacy inspired by other-than-EA frameworks is a concern, I just think that the EA community is already quite inclined to express skepticism for new ideas and possible interventions. So, the worry that someone with high degrees of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising cases may be ignored because they have an unfortunate messenger.
I think you’ve phrased that very well. As much as I may want to find the people who are “hijacking” EA resources, the benefit of that is probably outweighed by how it disincentivized people to try new things. Thanks for commenting back and forth with me on this. I’ll try to jump the gun a bit less from now on when it comes to gut feeling evaluations of new causes.
I can only aspire to be as good a scout as you, Joseph. Cheers
I think it’s important to consider that the other person may be coming from a very different ethical framework than you are. I wouldn’t likely support any of the examples in your footnote, but one can imagine an ethical framework in which the balance looks closer than it does to me. To be clear, I highly value saving the lives of kids under five as the standard EA lifesaving projects do. But: I can’t objectively show that a framework that assigns little to no value to averting death (e.g., because the dead do not suffer) is a bad one. And such a significant difference in values could be behind some statements of the sort you describe.
This is in relation to the Keep EA high-trust idea, but it seemed tangential enough and butterfly idea-ish that it didn’t make sense to share this as a comment on that post.
Rough thoughts: focus a bit less on people and a bit more on systems. Some failures are ‘bad actors,’ but my rough impression is that far more often bad things happen because either:
the system/structures/incentives nudge people toward bad behavior, or
the system/structures/incentives allow bad behavior
It very much reminds me of “Good engineering eliminates users being able to do the wrong thing as much as possible. . . . You don’t design a feature that invites misuse and then use instructions to try to prevent that misuse.” I’ve also just learned about the hierarchy of hazard controls, which seems like a nice framework for thinking about ‘bad things.’
I think it is great to be able to trust people, but I also want institutions designed in such a way that it is okay if someone is in the 70th percentile of trustworthiness rather than the 95th percentile of trustworthiness.
Low confidence guess: small failures often occur not because people are malicious or selfish, but because they aren’t aware of better ways to do things. An employee that isn’t aware of EEO in the United States is more likely to make costly mistakes. A manager who has not received good training on how to be a manager is going to fumble more often.
I don’t want to imply that designing systems well is easy, not that I am somehow an expert in it. But my (very) rough impression is that in EA we trust individuals a lot, and we don’t spend as much time thinking about organizational design.
What are the norms on the EA Forum about ChatGPT-generated content?
If I see a forum post that looks like it was generated by a LLM generative AI tool, it is rude to write a comment asking “Was this post written by generative AI?” I’m not sure what the community’s expectations are, and I want to be cognizant of not assuming my own norms/preferences are the appropriate ones.
It seems to me that the proof is in the pudding. The content can be evaluated on what it brings to the discourse and the tools used in producing it are only relevant insofar as these tools result in undesirable content. Rather than questioning whether the post was written by generative AI, I would give feedback as to what aspects of the content you are criticizing.
While I am not aware of any norms or consensus, I would be okay with that. My own view is that use of generative AI should be proactively disclosed where the AI could fairly be considered the primary author of the post/comment. I am unsure how much support this view has, though.
IMO, if the content is good we shouldn’t bring it up. If an author is producing bad content more than once a month and it seems generated by LLMs they should be warned then banned if it continues.
I suspect any comment threads about whether content is LLM-generated aren’t worth reading and thus aren’t worthwhile writing.
This quote made me think of the various bad behaviors that we’ve seen within EA over the past few years. Although this quote is from a book about vegetarianism, the words “keep buying meat” could easily be substituted for some other behavior.
While publicity and marketing and optics all probably oppose this to a certain extent, I take some solace in the fact that some people behaving poorly doesn’t actually diminish the validity of the core principles. I suppose the pithy version would be something like “[PERSON] did [BAD THING]? Well, I’m going to keep buying bednets.”
Decoding the Gurus is a podcast in which an anthropologist and a psychologist critique popular guru-like figures (Jordan Peterson, Nassim N. Taleb, Brené Brown, Imbram X. Kendi, Sam Harris, etc.). I’ve listened to two or three previous episodes, and my general impression is that the hosts are too rambly/joking/jovial, and that the interpretations are harsh but fair. I find the description of their episode on Nassim N. Taleb to be fairly representative:
A few weeks ago they released an episode about Eliezer Yudkowksy titled Eliezer Yudkowksy: AI is going to kill us all. I’m only partway through listening to it, but so far they have reasonable but not rock-solid critiques (such as noting how it is a red flag for someone to list off a variety of fields that they claim expertise in, or highlighting the behavior that lines up with a Cassandra complex).
The difficulty I have in issues like this parallels the difficulty I perceive in evaluating any other “end of the world” claim: the fact that many other individuals have been wrong about each of their own “end of the world” claims doesn’t really demonstrate that this one is wrong. It perhaps suggests that I should not accept it at face value and I should interrogate the claim, but it certainly doesn’t prove falsehood.
You’re right, but it does feel like some pretty strong induction, though not just to not accepting the claim at face value, but for demanding some extraordinary evidence. I’m speaking from the p.o.v. of a person ignorant of the topic, and just making the inference from the perennially recurring apocalyptic discourses.
True, but you only have a finite amount of time to spend investigating claims of apocalypses. If you do a deep dive into the arguments of one of the main proponents of a theory, and find that it relies on dubious reasoning and poor science (like the “mix proteins to make diamondoid bacteria” scenario), then dismissal is a fairly understandable response.
If AI safety wants to avoid this sort of thing from happening, they should pick better arguments and better spokespeople, and be more willing to call out bad reasoning when it happens.
I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don’t like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if the search for “book club” or “reading group.” Details, including links for joining each of the book clubs, are in this Google Doc.
I want to emphasize that this isn’t funded through an organization, I’m not trying to get emails to put on a newsletter, and I’m not selling an online course or push people to buy a product. This is literally just online book clubs: we vote on books and have video chats to talk about books.
Here are some upcoming discussions, with links for the events:
August 14, The Culture Map: Breaking Through the Invisible Boundaries of Global Business. https://calendar.app.google/WY6LocYTX4WfCjAw5
August 18, China: The Bubble That Never Pops. https://calendar.app.google/oUkTYWLg29mAK1xH9
August 24, Dialogues on Ethical Vegetarianism. https://lu.ma/xuascqt5
September 21, How Asia Works: Success and Failure in the World’s Most Dynamic Region. https://calendar.app.google/TWWa2yLeKNEupoiaA
September 22, The Scout Mindset: Why Some People See Things Clearly and Others Don’t. https://calendar.app.google/SEtCiaoQw5ZmArhS6
September 28, The Emotional Lives of Animals: A Leading Scientist Explores Animal Joy, Sorrow, and Empathy—and Why They Matter. https://lu.ma/ng492gwf
If there is interest, I’d be open to organizing/coordinating some kind of a “core EA books” reading group, with looks like What We Owe the Future, Scout Mindset, Doing Good Better, Animal Liberation, Poor Economics, etc.
Some people involved in effective altruism have really great names for their blogs: Ollie Base has Base Rates, Diontology from Dion Tan, and Ben West has Benthamite. It is really cool how people are able to take their names and with some slight adjustments make them into cool references. If I was the blogging type and my surname wasn’t something so uncommon/unique, I would take a page from their book.
“When life gives you Lemiens”?
Oh, that’s not bad! Maybe I’ll use that someday. 🤣 Unfortunately, I think that will encourage people to mispronounce my surname; it isn’t pronounced less like “lemon” and more in a way that rhymes with “the mean” or “the keen.”
“Lemiently Stoic”
I suspect that the biggest altruistic counterfactual impact I’ve had in my life was merely because I was in the right place at the right time: a moderately heavy cabinet/shelf thing was tipping over and about to fall on a little kid (I don’t think it would have killed him. He probably would have had some broken bones, lots of bruising, and a concussion). I simply happened to be standing close enough to react.
It wasn’t as a result of any special skillset I had developed, nor of any well thought-out theory of change; it was just happenstance. Realistically, I can’t really take credit for it any more than I can take credit for being born in the time and place that I was. It makes me think about how we plan for things in expectation, but there is such a massive amount of random ‘noise’ in the world. This isn’t exactly epistemic humility or moral cluelessness, but it seems vaguely related to those.
I’m reading Brotopia: Breaking Up the Boys’ Club of Silicon Valley, and this paragraph stuck in my head. I’m wondering about EA and “mission alignment” and similar things.
The parallels seem pretty obvious to me, and here is my altered version:
I think this leads me back to two ideas that I’ve been bouncing around. First, be clear about
to what extentif a particular role needs to be mission-aligned. Second, be clear to what level/extent a particular role needs to be mission aligned (3 out of 10? 8 out of 10?). Does the person you hire to handle physical security need to care about AI safety risk scenarios?If your mission is to reduce animal suffering, should you hire someone that wants to do that but is simply less intense about it? A person who spends 5% of their free time thinking about this when you spend 60% of your free time thinking about this? I do think that mission alignment is important for some roles, but it is hard to specify without really understanding the work.[1]
As an example of “understanding the work,” my superficial guess is that someone planning an EAG event probably doesn’t need to know all about EA in order to book conference rooms, arrange catering, set up sound & lighting, etc. But I don’t know, because I haven’t done that job or managed that job or closely observed that job. Maybe lot of EA context really is necessary in order to make lots of little decisions which otherwise would make the event a noticeably worse experience for the attendees. Indeed, pretty much the only thing that I am confident in in relation to this is that we can’t make strong claims about a role unless we really understand the work.
I didn’t learn about Stanislav Petrov until I saw announcements about Petrov Day a few years ago on the EA Forum. My initial thought was “what is so special about Stanislav Petrov? Why not celebrate Vasily Arkhipov?”
I had known about Vasily Arkhipovfor years, but the reality is that I don’t think one of them is more worthy of respect or idolization than the other. My point here is more about something like founder effects, path dependency, and cultural norms. You see, at some point someone in EA (I’m guessing) arbitrarily decided that Stanislav Petrov was more worth knowing and celebrating than Vasily Arkhipov, and now knowledge of Stanislav Petrovis widespread (within this very narrow community). But that seems pretty arbitrary. There are other things like this, right? Things that people hold dear or believe that are little more than cultural norms, passed on because “that is the way we do things here.”
I think a lot about culture and norms, probably as a result of studying other cultures and then living in other countries (non-anglophone countries) for most of my adult life. I’m wondering what other things exist in EA that are like Stanislav Petrov: things that we do for no good reason other than that other people do them.
The origin of Petrov Day, as an idea for an actual holiday, is this post by Eliezer Yudkowsky. Arkhipov got a shout-out in the comments almost immediately, but “Petrov Day” was the post title, and it’s one syllable shorter.
There are many other things like Petrov Day, in this and every culture — arbitrary decisions that became tradition.
But of course, “started for no good reason” doesn’t have to mean “continued for no good reason”. Norms that survive tend to survive because people find them valuable. And there are plenty of things that used to be EA/rationalist norms that are now much less influential than they were, or even mostly forgotten. The first examples that come to mind for me:
Early EA groups sometimes did “live below the line” events where participants would try to live on a dollar a day (or some other small amount) for a time. This didn’t last long, because there were a bunch of problems with the idea and its implementation, and the whole thing faded out of EA pretty quickly (though it still exists elsewhere).
The Giving What We Can pledge used to be a central focus of student EA groups; it was thought to be really important and valuable to get your members to sign up. Over time, people realized this led students to feel pressure to make a lifelong decision too early on, some of whom regretted the decision later. The pledge gradually attained an (IMO) healthier status — a cool part of EA that lots of people are happy to take part in, but not an “EA default” that people implicitly expect you to do.
I would be happy to celebrate an Arkhipov Day. Is there anything that could distinguish the rituals and themes of the day? Arkhipov was in a submarine and had to disagree with two other officers IIRC? (Also when is it?)
Haha, I don’t think we need another holiday for Soviet military men who prevented what could have been WWIII. More so, I think we should ask ourselves (often) “Why do we do things the way we do, and should we do things that way?”
As Aaron notes, the “Petrov Day” tradition started with a post by Yudkowsky. It is indeed somewhat strange that Petrov was singled out like this, but I guess the thought was that we want to designate one day of the year as the “do not destroy the world day”, and “Petrov Day” was as good a name for it as any.
Note that this doesn’t seem representative of the degree of appreciation for Petrov vs. Arkhipov within the EA community. For example, the Future of Humanity Institute has both a Petrov Room and an Arkhipov Room (a fact that causes many people to mix them up), and the Future of Life Award was given both to Arkhipov (in 2017) and to Petrov (in 2018).
I think Arkhipov’s actions are in a sense perhaps even more consequential than Petrov’s, because it was truly by chance that he was present in that particular nuclear submarine, rather than in any of the other subs from the flotilla. This fact justifies the statement that, if history had repeated itself, the decision to launch a nuclear torpedo would likely not have been vetoed. The counterfactual for Petrov is not so clear.
Random musing from reading a reddit comment:
Some jobs are proactive: you have to be the one doing the calls and you have to make the work yourself and no matter how much you do you’re always expected to carry on making more, you’re never finished. Some jobs are reactive: The work comes in, you do it, then you wait for more work and repeat.
Proactive roles are things like business development/sales, writing a book, marketing and advertising, and research. You can almost always do more, and there isn’t really an end point unless you want to impose an arbitrary end point: I’ll stop when I finish writing this chapter, or I’ll take a break after this research paper. I imagine[1] that a type of stress present in sales and business development is that you are always pushing for more, like the difference between someone who wants to accumulate $950,000 dollars for retirement as opposed to someone who simply wants lots of dollars for retirement.
Reactive roles are things like running payroll, being the cook in a restaurant (or being the waiter in a restaurant), legal counsel, office manager, teacher. There is an ‘inflow’ of tasks or work or customers, and you respond to that inflow. But if there are times when there isn’t any inflow, then you just wait for work to arrive[2]. After you finish running payroll for this pay period, it isn’t like you can take initiative to send the next round of salary payments ahead of schedule. Or imagine being the cook in a restaurant, and there is a 30-minute period when there are no new order placed. Once everything is clean and you are ready for orders to come in, what can you do? You prep what you can, and then you just kind of… wait for more work tasks to arrive.
I’ve never worked in sales and I don’t think I’ve ever even had conversations about it, so I am really just guessing here.
It isn’t always so simplistic of course. Maybe the waiter has some other tasks on ‘standby’ for when there are no customer’s coming in. Maybe the payroll person has some lower priority tasks (back burner tasks) that are now the highest priority available task to do when there isn’t any payroll work to do. Often there are ways that you can do something other than sit around an twiddle your thumbs, and this is also a great way to get noticed and get positive attention from managers. But it seems to be a very slippery slope into busy work with a lot of low-prestige jobs: how often does that supply closet really need to be reorganized? How often does this glass door need to be cleaned? How many months in advance can you realistically really make lesson plans for the students?
I just had a call with a young EA from Oyo State in Nigeria (we were connected through the excellent EA Anywhere), and it was a great reminder of how little I know regarding malaria (and public health in developing countries more generally). In a very simplistic sense: are bednets actually the most cost effective way to fight against malaria?
I’ve read a variety of books on the development economics canon, I’m a big fan of the use of randomized control trials in social science, I remember worm wars and microfinance not being so amazing as people thought and critiques of Tom’s Shoes. I was thrilled when I first read Poor Economics, and it opened my eyes to a whole new world. But I’m a dabbler, not an expert. I haven’t done fieldwork; I’ve merely read popular books. I don’t have advanced coursework in this area.
It was nice to be reminded of how little I actually know, and of how superficial general interest in a field is not the same as detailed knowledge. If I worked professionally in development economics I would probably be hyper aware of the gaps in my knowledge. But as a person who merely dabbles in development as an interest, I’m not often confronted with the areas about which I am completely ignorant, and thus there is something vaguely like a Dunning-Kruger effect. I really enjoyed hearing perspectives from someone that knows a lot more than I do.
If anybody wants to read and discuss books on inclusion, diversity, and similar topics, please let me know. This is a topic that I am interested in, and a topic that I want to learn more about. My main interest is on the angle/aspect of diversity in organizations (such as corporations, non-profits, etc.), rather than broadly society-wide issues (although I suspect they cannot be fully disentangled).
I have a list of books I intend to read on DEI topics (I’ve also listed them at the bottom of this quick take in case anybody can’t access my shared Notion page), but I think I would gain more from the books if I am able to discuss the contents with other people and bounce around ideas. I think that I tend to agree too readily with what I read, and having other people would help me be a more critical consumer of this information. Most of these books are readily available through public libraries (and services like Libby/Overdrive) in the USA or through online book shops.
I’m not planning on formally starting another book club (although I’m open to the possibility if there are a handful of people that express interest), but I would really enjoy having a call/chat once every several weeks. I’m not expecting this to evolve into some sort of a working group or a diversity council, but I’d be open that possibility in time.
- - - - -
The Inclusion Dividend: Why Investing in Diversity & Inclusion Pays Off
We Can’t Talk about That at Work!: How to Talk about Race, Religion, Politics, and Other Polarizing Topics
Inclusion on Purpose: An Intersectional Approach to Creating a Culture of Belonging at Work
Inclusify: The Power of Uniqueness and Belonging to Build Innovative Teams
The 4 Stages of Psychological Safety
How to Be an Ally: Actions You Can Take for a Stronger, Happier Workplace
Race Rules: What Your Black Friend Won’t Tell You
Say the Right Thing: How to Talk About Identity, Diversity, and Justice
OtherWise: The Wisdom You Need to Succeed in a Diverse and Divisive World
Inclusion Revolution: The Essential Guide to Dismantling Racial Inequity in the Workplace
Inclusive Growth: Future-proof your business by creating a diverse workplace
Leading Global Diversity, Equity, and Inclusion: A Guide for Systemic Change in Multinational Organizations
Managing Diversity: Toward a Globally Inclusive Workplace
A Queer History of the United States
The Making of Asian America: A History
White Trash: The 400-Year Untold History of Class in America
No Right to Be Idle: The Invention of Disability
History from the Bottom Up and the Inside Out: Ethnicity, Race, and Identity in Working-Class History
I’ve been reading about performance management, and a section of the textbook I’m reading focuses on The Nature of the Performance Distribution. It reminded me a little of Max Daniel’s and Ben Todd’s How much does performance differ between people?, so I thought I’d share it here for anyone who is interested.
The focus is less on true outputs and more on evaluated performance within an organization. It is a fairly short and light introduction, but I’ve put the content here if you are interested.
A theme that jumps out at me is situational specificity, as it seems some scenarios follow a normal distribution, some scenarios are heavy tailed, and some probably have a strict upper limit. This echoes the emphasis that an anonymous commented shared on the Max’s and Ben ’s post:
I’m roughly imaging an organization in which there is a floor to performance (maybe people beneath a certain performance level aren’t hired), and there is some type of barrier that creates a ceiling to performance (maybe people who perform beyond a certain level would rather go start their own consultancy rather than work for this organization, or they get promoted to a different department/team). But the floor or the ceiling could be more more naturally related to the nature of the work as well, as in the scenario of an assembly worker who can’t go faster than the speed of the assembly line.
This idea of situational specificity is paralleled in hiring/personnel selection, in which a particular assessment might be highly predictive of performance in one context, and much less so in a different context. This is the reason why we shouldn’t simply use GMA and conscientiousness to evaluate every single employee at every single organization.
Very interesting. Another discussion of the performance distribution here.
Thanks for sharing this. I found this to be quite interesting.
I’ve previously written a little bit about recognition in relation to mainanence/prevention, and this passage from Everybody Matters: The Extraordinary Power of Caring for Your People Like Family stood out to me as a nice reminder:
Overall, the Everybody Matters could is the kind of book that could have been an article. I wouldn’t recommend spending the time to read it if you are already superficially familiar with the fact that an organization can choose to treat people well (although maybe that would be revelatory for some people). It was on my to-read list due to it’s mention in the TED Talk Why good leaders make you feel safe.
It is sort of curious/funny that posts stating “here is some racism” get lots of attention, and posts stating “let’s take the time to learn about inclusivity, diversity, and discrimination”[1] don’t get much attention. I suppose it is just a sort of an unconscious bias: some topics are incendiary and controversial and are more appealing/exciting to engage with, while some topics are more hufflepuffy and do the work and aren’t so exciting. Is it vaguely analogous to a polar bear stranded on an ice floe getting lots of clicks, but the randomized control trial for giving schoolchildren preventative medicine doesn’t?
To be clear, these are not real quotes. These are simplistic characterizations.
I think that is probably at least some of it. Other candidate explanations might include the following. (I’m going to use phrases like attitude toward racial issues in an awkward attempt to cover the continuum from overt racism against people of color to DEI superstar status; this is not meant to imply that the more DEI a viewpoint is, the better.)
From the perspective of a Forum commenter, the response to many “here is some racism” things may be more tractable / have a clearer theory of change and impact (which may involve education, norm reinforcement, placing social pressure on people, etc. depending on the poster) than many “let’s learn about DEI” type posts.
In particular, one could think that attitudes about race in the lower half of the progressiveness distribution are more important toward “scoring” the community’s overall attitude toward racial issues. For example, the percentage of people who espouse views on race that are problematic is probably an important metric for the extent to which the community is unwelcoming to people of color.
In contrast, if a person is at the 75th percentile of progressiveness on racial issues already, moving them to the 95th percentile may not accomplish nearly as much as moving the 5th percentile person to the 25th. And it’s likely that the bulk of people who are interested in engaging with “let’s learn about DEI” posts in a supportive manner are at least already above the median here.
Also, a paucity of DEI often implicates structural barriers that are significantly harder to address (especially at the individual-commenter level) than individual/organizational bad behavior.
People tend to react more strongly to losses from an established baseline (e.g., losing $100) than equal-magnitude gains from that baseline (e.g., winning $100).
One could think there are significantly diminishing returns at play. In particular, one might identify a “good enough” point beyond which additional improvements are likely to have relatively little benefit to EA. For instance, this is likely true from a PR/optics standpoint; we’re unlikely to get positive press coverage even if we reach an A+ score on DEI. So there’s not much delta between a B and an A+ through the PR/optics lens. And one might think EA is currently at the “good enough” point (to be clear, this is not my personal view).
Some people could associate the “let’s learn about DEI” type posts—rightly or wrongly—with ideas like affirmative action (positive discrimination) that they find contrary to their values. In contrast, posts focused on bad behavior may be less likely to trigger this association.
Some vocal commenters (and strong-downvoters) are so opposed to DEI-like ideas that commenters may not feel like putting on the emotional armor to engage on pro-DEI-like posts. They feel more social support to comment on the “here is some racism” posts, and they feel that overt racism is stigmatized enough to create some social pressure not to throw flaming arrows at them in response.
All of these ideas are speculative, and I’d be curious about the extent to which any of them resonate / don’t resonate with people.
I love that you took the time to engage with this, and that you typed out these speculations!
Ben West recently mentioned that he would be excited about a common application. It got me thinking a little about it. I don’t have the technical/design skills to create such a system, but I want to let my mind wander a little bit on the topic. This is just musings and ‘thinking out out,’ so don’t take any of this too seriously.
What would the benefits be for some type of common application? For the applicant: send an application to a wider variety of organizations with less effort. For the organization: get a wider variety of applicants.
Why not just have the post openings posted to LinkedIn and allow candidates to use the Easy Apply function? Well, that would probably result in lots of low quality applications. Maybe include a few question to serve as a simple filter? Perhaps a question to reveal how familiar the candidate is with the ideas and principles of EA? Lots of low quality applications aren’t really an issue if you have an easy way to filter them out. As a simplistic example, if I am hiring for a job that requires fluent Spanish, and a dropdown prompt in the job application asks candidates to evaluate their Spanish, it is pretty easy to filter out people that selected “I don’t speak any Spanish” or “I speak a little Spanish, but not much.”
But the benefit of Easy Apply (from the candidate’s perspective) is the ease. John Doe candidate doesn’t have to fill in a dozen different text boxes with information that is already on his resume. And that ease can be gained in an organization’s own application form. An application form literally can be as simple as prompts for name, email address, and resume. That might be the most minimalistic that an application form could be while still being functional. And there are plenty of organizations that have these types of applications: companies that use Lever or Ashby often have very simple and easy job application forms (example 1, example 2).
Conversely, the more than organizations prompt candidates to explain “Why do you want to work for us” or “tell us about your most impressive accomplishment” the more burdensome it is for candidates. Of course, maybe making it burdensome for candidates is intentional, and the organization believes that this will lead to higher quality candidates. There are some things that you can’t really get information about by prompting candidates to select an item from a list.
I’m been thinking about small and informal ways to build empathy[1]. I don’t have big or complex thoughts on this (and thus I’m sharing rough ideas as a quick take rather than as a full post). This is a tentative and haphazard musing/exploration, rather than a rigorous argument.
Read about people who have various hardships or suffering. I think that this is one of the benefits of reading fiction: it helps you more realistically understand (on an emotional level) the lives of other people. Not all fiction is created equal, and you probably won’t won’t develop the same level of empathy reading about vampire romance as you will reading a book about a family struggling to survive a civil war[2]. But good literature can make you cry and leave you shaken for how much you feel. The other approach here is to read things that are not fiction; read real stories. Autobiographies can be one option, but if you don’t want to commit to something so large, try exploring online forums where people tell their own stories of the hard and difficult things they have gone through. Browsing the top posts on the Cancer subreddit might bring tears to your eyes. I suggest that you do not do this during the workday: If you can read about these experiences (a person watching their spouse suffer and die while being helpless to do anything about it, or a parent knowing he won’t live to see his child’s tenth birthday) without crying and losing composure, then you are made of sterner stuff than I am[3]. I remember crying when I read Zhenga Cuomao’s writing about her husband’s “trial” and imprisonment: “How is it this hard to be a good person?” I wanted so desperately for the world to be a just place, and the world so obviously was not. So if you want to build empathy this way, the action might be something like occasionally seek out places where you can hear of actual hardship that real people undergo.
Walk a mile in someone else’s shoes. It shouldn’t be a surprise to anyone that experiencing hardship can build empathy for hardship. It is one of the common tropes of storytelling. But (taking physical disability as an example) it is very different to think “it must be hard to live life with such mobility limitations” and to actually live for a few days being physically unable to drink from a glass of water or raise your arms above your head. The trouble with walking in someone’s shows is that it is normally not feasible. You can understand what it is like to be an immigrant in a foreign country, but only if you are willing to commit multiple years of your life to actually doing that. There are roleplaying exercises people can do, but it is hard to get a full picture. There isn’t any easy way for a man to have the experience that a woman has in American society[4], nor it is easy for a person without any mental illnesses to understand what it is like to live with bipolar or schizophrenia. Nonetheless, some people seriously commit to these efforts. Seneca recommends regularly spending time destitute and depriving yourself of comfortable clothing a good quality food[5]. In 1959 John Howard Griffin (a white man) chemically darkened his skin to appear black. Barbara Ehrenreich wrote a book about her experience spending months trying to make it as a low-wage, unskilled worker (a project that has been duplicated by others). And even she was aware that she could always stop ‘pretending’ if she had a real emergency. The National Center for Civil and Human Rights in Atlanta has an experience/exhibit in which you sit on a bar stool and put on headphones to immerse yourself in a simulated experience of being black in a diner in the deep south.
Why bother? Well, I have a vague and not well-reasoned intuition that being more empathetic makes you a better person. Will it actually increase your impact? I have no idea. Maybe you would have higher impact and you would make the world a better place if you just kept your head down and worked on your project.
A polished article would have some sort of conclusion or a nice takeaway, but for this short form I’ll just end it here.
I’m using “empathy” in a pretty sloppy sense. Something like “caring for other people who are not related/connected to you” or “developing something of an emotional understanding of the suffering people go through, rather than merely an intellectual one.” I’m thinking about this in a very suffering-focused sense.
Half of a Yellow Sun is once of the books that I think made me a little bit more empathetic. It is a book about the Nigerian Civil war, something that I assume most of fellow North Americans know almost nothing about. I certainly knew nothing about it.
And to echo writings from many other people in and around the EA community: if you think that is bad, remember that there is a similar level of suffering happening every day for millions of people.
Although you can read accounts from transgender people. The rough summary would be something like “I am stunned at how different people treat me when they see me as a man/woman.”
Note that the Stoic interpretation here isn’t to build empathy, but rather to make yourself unafraid of hardship. And the trouble with using these for building empathy is that you aren’t really in the situation; you can stop pretending whenever you like. For anyone who is curious, here is the relevant excerpt from The Daily Stoic that turned me on to this idea:
(caution: grammatical pedantry, and ridiculously low-stakes musings. possibly the most mundane and unexciting critique of EA org ever)
The name of Founders Pledge should actually be Founders’ Pledge, right? It is possessive, and the pledge belongs to multiple founders. If I remember my childhood lessons, apostrophes come after the s for plural things:
the cow’s friend (this one cow has a friend)
the birds’ savior (all of these birds have a savior)
A new thought: maybe I’ve been understanding it wrong. I’ve always thought of the “pledge” in Founders Pledge as a noun, but maybe it is actually an verb? In that sense, Founders Pledge would be like Germans Give or Gamblers Donate. I think it sounds a little funny to use pledge as an intransitive verb (without anything coming after it), but I guess it works in the same way that “I eat” sounds a little odd but is grammatically correct, and I suppose Californians Eat sounds fine.
EDIT: It looks like there have been some disagree votes. I find this particularly curious, as this is musings rather than claims/arguments.
I assumed it was functioning as a compound noun rather than a possessive. The word ‘Founders’ is modifying the type of Pledge, not claiming ownership of it.
I just finished reading Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. I think the book is worth reading for anyone interested in truth and the figuring out what is real, but I especially liked the aspiration Mertonian norms, a concept I had never encountered before, and which served as a theme throughout the book.
I’ll quote directly from the book to explain, but I’ll alter the formatting a bit to make it easier to read:
Although there are lots of differences between the goals of EA and the goals of science, in the areas of similarity I think there might be benefit in more awareness of these norms and more establishment of these as standards. Much of it seems to line up with broad ideas of scout mindset and epistemic rationality.
My vague impressions are that the EA community generally holds up fairly well when measured against these norms. I suspect there is some struggle with organized skepticism (ideas from high-status people often get accepted at face value) and there are a lot of difficulties with disinterestedness (people need resources to survive and to pursue their goals, and most of us have a desire for social desirability), but overall I think we are doing decently well.
I remember being very confused by the idea of an unconference. I didn’t understand what it was and why it had a special name distinct from a conference. Once I learned that it was a conference in which the talks/discussions were planned by participants, I was a little bit less confused, but I still didn’t understand why it had a special name. To me, that was simply a conference. The conferences and conventions I had been to had involved participants putting on workshops. It was only when I realized that many conferences lack participative elements that I realized my primary experience of conferences was non-representative of conferences in this particular way.
I had a similar struggle understanding the idea of Software as a Service (SaaS). I had never had any interactions with old corporate software that required people to come and install it on your servers. The first time I heard the term SaaS as someone explained to me what it meant, I was puzzled. “Isn’t that all software?” I thought. “Why call it SaaS instead of simply calling it software?” All of the software I had experienced and was aware of was in the category of SaaS.
I’m writing this mainly just to put my own thoughts down somewhere, but if anyone is reading this I’ll try to put a “what you can take from this” spin on it:
If your entire experience of X falls within X_type1, and you are barely even aware of the existence of X_type2, then you will simply think of X_type1 as X, and you will be perplexed when people call it X_type1.
If you are speaking to someone who is confused by X_type1, don’t automatically assume they don’t know what X_type1 is. It might be that they simply don’t know why you are using such an odd name for (what they view as X).
Silly example: Imagine growing up in the USA, never travelling outside of the USA, and telling people that you speak “American English.” Most people in the USA don’t think of their language as American English; they just think of it as English. (Side note: over the years I have had many people tell me that they don’t have an accent)
In discussions (both online and in-person) about applicant experience in hiring rounds, I’ve heard repeatedly that applicants want feedback. Giving in-depth feedback is costly (and risky), but here is an example I have received that strikes me as low-cost and low-risk. I’ve tweaked it a little to make it more of a template.
The phrasing “you are currently in the top 20% of all applicants” is nice. I like that. I haven’t ever seen that before, but I think it is something that EA organizations (or hiring teams at any organization) could easily adapt and use in many hiring rounds. While you don’t always know exactly what percentile a candidate falls into, you can give broad/vague information, such as being in the top X%. It is a way to give a small amount of feedback to candidates without requiring a large amount of time/effort and without taking on legal risk.
Some questions cause me to become totally perplexed. I’ve been asked these (or variations of these) by a handful people in the EA community. These are not difficulties or confusions that require PhD-level research to explain, but instead I think they represent a sort of communication gap/challenge/disconnect and differing assumptions.
Note that this fuzzy musings on communication gaps, and on differing assumptions of what is normal. In a very broad sense you could think of this as an extension of the maturing/broadining of perspectives that we all do when we realize “the way I’m used to things isn’t the way everybody does things.”[1] This is musings and meanderings rather than well-thought out conclusions, so don’t take any of this too seriously.
It isn’t quite an issue inferential distance, but it seems to be a vaguely similar communication error. Questions like this are surprising to me since they seem somewhat self evident, or obvious (which maybe speaks to my own difficulty in communicating well). The questions I’m thinking of are questions like
why is it important to use data and evidence for decision-making?
why do you like ice cream?
why don’t you want to do [non-standard thing]?
The use of data/evidence is something that I have difficulty justifying off the top of my head, simply because it strikes me as so incredibly obvious. I would probably respond “would it be better if we referenced animal entrails or the shapes of the clouds to make decisions?” But if pressed I would probably compare decisions made using data and decisions made without data, and see which tends to turn out better, using that as justification.[2] It would feel really weird to have to justify use of data though, like needing to justify why I prefer a 10% chance of pain to a 50% chance of pain; one of them strikes me as obviously preferably.
I like ice cream because it tastes good. While we could dive deep into the chemistry of taste and the evolutionary biology of human preferences, what me like ice cream tasty is simply the fact that I get pleasure/enjoyment from eating it. It is an end in itself: I’m not eating the ice cream as part of a plan toward some larger goal; I enjoy it, and that is all there is to it.
Asking me why I don’t want to do something has confused me on more than one occasion in the past. I’ve always thought that absence of an action doesn’t require justification, and rather that taking an action requires justification. The generic form of this is someone suggesting an activity that is expensive and unenjoyable to me, and expressing some level of surprise or concern when I decline.[3] The times people have asked me “why don’t you want to pay money to do [thing that you probably won’t find fun]” or “why don’t you want to join us in [expensive activity that you really didn’t enjoy last time you joined us],” they haven’t really been satisfied with me simply saying “I don’t like doing that kind of thing. It isn’t much fun for me.”[4]
My most stark memory of this was when I made hot chocolate, and an adult man said something along the lines of “Oh my god, you are putting water in your hot chocolate instead of milk?” as if this was something outrageous. A more internet-friendly version would be the silliness surrounding pineapple on pizza: it really doesn’t matter that other people have mundane preferences different than yours. It is mostly just reflexive in-group/out-group dynamics.
If I want to eat ice cream, should should I consider the distance, price, and quality of two different shops and base my decision on that, or should I just flip a coin to decide which ice cream shop to go to? To me, the answer is as obvious as the answer to “is astrology predictive?”
If you don’t drink alcohol, you probably get this a lot; people have often assumed I have some specific religious reason. I’m guessing that people who don’t engage in other common practices also get similar responses.
Of course, the examples I’m choosing share here tell you something about my own preferences. For a person that inherently enjoys sitting in a bar, spending money to drink alcohol, and having shouted conversations over loud music, it is quite unusual for someone to day “no thanks, I wouldn’t enjoy that.” But I’ve tried to enough bars and parties, and I’ve discovered that sitting around with a bunch of people I barely know having shallow/forgettable conversations usually isn’t really my thing. The topics of conversation that people often want to talk about didn’t really have much overlap with things that I would be interested in talking about, and the levels of performative behaviors and the affectations aren’t something that I enjoy very much. So it is no surprise that I enjoy “grown up”
dinner parties/cocktail parties more than “college kid” parties with loud music. (Of course, if I had loads of money to spare, a high level of beauty and charisma, and a group of friends that regularly hung out in bars, then my preferences would probably be quite different.)
I was recently reminded about BookMooch, and read a short interview with the creator, John Buckman.
I think that the interface looks a bit dated, but it works well: you send people books you have that you don’t want, and other people send you books that you want but you don’t have. I used to use BookMooch a lot from around 2006 to 2010, but when I moved outside of the USA in 2010 I stopped using it. One thing I like is that it feels very organic and non-corporate: it doesn’t cost a monthly membership, there are no fees for sending and receiving books,[1] and it isn’t full of superfluous functions. There is a pretty simple system to prevent people from abusing the system, which is basically just transparency and having a “give:mooch ratio” visible. Although it is registered as a for-profit corporation, John Buckman runs it without trying to maximize profits. BookMooch earns a bit of money by using Amazon affiliate fees if people want to buy a book immediately rather than mooch the mooch the book, but the site doesn’t have advertisements or any other revenue.[2]
I love this, and it makes me think about creating value in the world. In my mind, this is kind of the ideal of a startup: you have an idea and you implement it, literally making value out of nothing. There really was an unrealized “market” for second-hand books, but there was no way to “liberate” it. And I also love that this is simply providing a service to the world. I wonder what similar yet-to-be-realized ventures there are that would create more impact than merely the joy of getting a book you want.
Now that I am in the USA again I think I’ll start using BookMooch again. I probably won’t use it as much as I used to, with how I’ve become more adapted to reading PDFs and EPUBs and listening to audiobooks, but I’ll use it some for books that I haven’t been able to get digital copies of.
You need to pay the post office to send the book, but what I mean is that BookMooch doesn’t charge any fees.
I had an anarchist streak when I was younger, and the fact that this corporation lacks so many of the trappings of standard extractive capitalism is emotionally quite appealing. If a bunch of hippies had created silicon valley instead of venture capitalists, maybe the big tech firm would look more like this.
I want to try and nudge some EAs engaged in hiring to be a bit more fair and a bit less exclusionary: I occasionally see job postings for remote jobs with EA organizations that set time zone location requirements.[1] Location seems like the wrong criteria; the right criteria is something more like “will work a generally similar schedule to our other staff.” Is my guess here correct, or am I missing something?
What you actually want are people who are willing to work “normal working hours” for your core staff. You want to be able to schedule meetings and do collaborative work. If most staff are located in New York City, and you hire someone in Indonesia who is willing and able to do a New York City working schedule, for the organization and for teamwork that isn’t different than hiring someone in Peru (which is in the time zone as New York City).[2]
I’ve previously spoken with people in Asian time zones who emphasized the unreasonableness of this; people who have the skills and who are happy/able to work from 9pm to 4am. If someone who lives in a different time zone is happy to conform to your working schedule, don’t disqualify them. You can disqualify them because they lack the job-relevant skills, or because they wouldn’t perform well enough in the role, but don’t do it due to their location.[3] If they have stable internet connection and they state that they are willing to work a particular schedule, believe them. You could even have a little tick-box on your job application to clarify that they understand and consent that they need to be available for at least [NUMBER] hours during normal business hours in your main/preferred time zone.
Such as must be located between UTC and UTC +8, or must live in a time zone compatible with a North American time zone.
You might make the argument that the person in Indonesia would be giving themselves a big burden working in the middle of the night and (presumably) sleeping during the day, but that is a different argument. That is about whether they are able to conform to the expected work schedule/availability or about how burdensome they would find it, not about whether they are physically located in a similar time zone. Lots of people in low income countries would be happy to have a weird sleeping & work schedule in exchange for the kinds of salaries that EA organizations in the UK and USA tend to pay; that is a good tradeoff for many people.
There are, of course, plenty of other reasons to care about location. There are legal and tax reasons that a organization should only hire people in certain locations. Not all employers of record can employee people in all countries. And there are practical reasons related to the nature of the job. If you need someone to physically be somewhere occasionally, location matters. That person should probably shouldn’t be located a 22-hour trip away if they need to be there in-person twice a month; they should be able to travel there in a reasonable amount of time.
Night work just does seem to be worse for people’s cognition: metastudy
Hmm, I don’t entirely disagree but I also don’t fully agree either:
Where I agree: I have indeed hired people on the opposite side of the world (eg Australia) for whom it was not a problem.
Where I disagree: working at weird hours is a skill, and one that is hard to test for in interviews. There is a reasonably high base rate (off the cuff: maybe 30 percent?) of candidates claiming overconfidently in interviews that they can meet a work schedule that is actually incredibly impractical for them and end up causing problems or needing firing later on. I would rather not take that collective risk—to hire you and discover 3 months in, that the schedule you signed up for is not practical for you.
That is a very real concern, and strikes me as reasonable. While I don’t have a good sense of what the percent would be, I agree with you that people in general tend to exaggerate what they are able to do in interviews. I wonder if there are good questions to ask to filter for this, beyond simply asking about how the candidate would plan to meet the timing requirements.
For the time zones, I had been thinking of individuals that had done this previously and can honestly claim that they have done this previously. But I do understand that for many people (especially people with children or people who live with other people) it would be impractical. Maybe my perception of people is fairly inaccurate, in the sense that I expect them to be more honest and self-aware than they really are? 😅
Meandering and exploratory follow-up.
Even if the justification is reasonable, it is quite exclusionary to candidates outside of the required time zone. Think of a company who wants to hire a data analyst, but instead of the job posting listing ‘skilled at data analytics’ it instead lists ‘MA in data analytics.’ It is excluding a lot of people that might be skilled but which don’t have the degree.
I think the broader idea I’m trying to get at is when X is needed, but Y is listed as the requirement, and they are two distinct things. Maybe I need someone that speaks German as a native language for a job, but on the job describing I write that I need someone who grew up in Germany; those are distinct things. I’d reject all the German expats that grew up abroad, as well as the native-German speakers who grew up in Switzerland or Austria.
There might also be something here related to the non-central fallacy: applying the characteristics of an archetypical category member to a non-typical category member. Most people in distant time zones probably wouldn’t be able to manage an abnormal working schedule, but that doesn’t mean we should assume that no people in distant time zones can handle it.
Of course, the tradeoffs are always an issue. If I would get 5 additional candidates who would be good and 95 additional candidates who are poor fits, then maybe it wouldn’t be worth it. But something about the exclusion that I can’t quite put my finger on strikes me as unjust/unfair.
Imperfect Parfit (written by by Daniel Kodsi and John Maier) is a fairly long review (by 2024 internet standards) of Parfit: A Philosopher and His Mission to Save Morality. It draws attention to some of his oddities and eccentricity (such as brushing his teeth for hours, or eating the same dinner every day (not unheard of among famous philosophers)). Considering Parfit’s influence on the ideas that many of us involved in EA have, it seemed worth sharing here.
This is about donation amounts, investing, and patient philanthropy. I want to share a simple excel graph showing the annual donation amounts from two scenarios: 10% of salary, and 10% of investment returns.[1] While back a friend was astounded at the difference in dollar amounts, so I thought I should share this a bit more widely. The specific outcomes will change based on the assumptions that we input, of course.[2] A person could certainly combine both approaches, and there really isn’t anything stopping you from donating more than 10%, so interpret this as illustrative rather than definitive.
The blue line is someone who donates 10% of their salary for the rest of their career. The orange line is someone who invests 10% of their salary for the rest of their career, followed by donating 10% of investment returns starting at retirement.
I’m not going to share the spreadsheet simply because I have some personal information that I don’t want to share tied up in this spreadsheet and it would be a bit of a hassle to separate it out. But for anyone who wants to re-create something like this and fiddle with your own inputs to look at various scenarios, it shouldn’t be too hard to make a few columns like this:
I’m a big fan of using compound interest, and I lean somewhat toward patient philanthropy. The upsides and downsides of patient philanthropy have been written about already, so I won’t repeat all the pros and cons.
What the starting salary, how much and how fast salary increases, what the annual return is for the investments, how old you will be when you retire, etc. I used a starting salary of 70,000 USD at age 30, with 2% annual salary increases, 7.5% annual investment growth
Yeah I think this is a good point! Donor-advised funds seem like a good way to benefit from compound interest (and tax deductions) while avoiding the risk of value drift.
I guess shortform is now quick takes. I feel a small amount of negative reaction, but my best guess its that this reaction is nothing more than a general human “change is bad” feeling.
Is quick takes a better name for this function that shortform? I’m not sure. I’m leaning toward yes.
I wonder if this will have an effect to nudge people to not write longer posts using the quick takes function.
Would anyone find it interesting/useful for me to share a forum post about hiring, recruiting, and general personnel selection? I have some experience running hiring for small companies, and I have been recently reading a lot of academic papers from the Journal of Personnel Psychology regarding the research of most effective hiring practices. I’m thinking of creating a sequence about hiring, or maybe about HR and managing people more broadly.
Please do! I’d absolutely love to read that :)
Some musings about experience and coaching. I saw another announcement relating to mentorship/coaching/career advising recently. It looked like the mentors/coaches/advisors were all relatively junior/young/inexperienced. This isn’t the first time I’ve seen this. Most of this type of thing I’ve seen in and around EA involves the mentors/advisors/coaches being only a few years into their career. This isn’t necessarily bad. A person can be very well-read without having gone to school, or can be very strong without going to a gym, or can speak excellent Japanese without having ever been to Japan. A person being two or three or four years into their career doesn’t mean that it is impossible for them to have have good ideas and good advice.[1] But it does seem a little… odd. The skepticism I feel is similar to having a physically frail person as a fitness trainer: I am assessing the individual on a proxy (fitness) rather than on the true criteria (ability to advise me regarding fitness). Maybe that thinking is a bit too sloppy on my part.
This doesn’t mean that if you are 24 and you volunteer as a mentor that you should stop; you aren’t doing anything wrong. And I wouldn’t want some kind a silly and arbitrary rule, such as “only people age 40+ are allowed to be career coaches.” And there are some people doing this kind of work that have a decade or more of professional experience; I don’t want to make it sound like all of the people doing coaching and advising are fresh grads.
I wonder if there are any specific advantages or disadvantages to this ‘junior skew.’ Is there a meaningful correlation between length of career and ability to help other people with their careers?
EA already skews somewhat young, but from the last EA community survey it looks like the average age was around 29. So I wonder why are the vast majority of people doing mentorship/coaching/career advising are younger than that? Maybe the older people involved in EA are disproportionality not employed for EA organizations and are thus less focused on funneling people into impactful careers? I do have the vague impression that many 35+ EAs lean more toward earn-to-give. Maybe older EAs tend to be a little more private and less focused on the EA community? Maybe older people simply are less interested, or don’t view it as a priority? Maybe the organizations that employ/hire coaches all prefer young people? Maybe this is a false perception and I’m engaging in sloppy generalization from only a few anecdotes?
And the other huge caveat is that you can’t really know what a person’s professional background is from a quick glance at their LinkedIn Profile and the blurb that they share on a website, any more than you can accurately guess age from a profile photo. People sometimes don’t list everything. I can see that someone earned a bachelor’s degree in 2019 or 2020 or 2021, but maybe they didn’t follow a “standard” path: maybe they had a 10-year career prior to that, so guesses about being fairly young or junior are totally off. As always, drawing conclusions based on tiny snippets of information with minimal context is treacherous territory.
I checked and people who currently work in an EA org are only slightly older on average (median 29 vs median 28).
This is a sloppy rough draft that I have had sitting in a Google doc for months, and I figured that if I don’t share it now, it will sit there forever. So please read this as a rough grouping of some brainstormy ideas, rather than as some sort of highly confident and well-polished thesis.
- - - - - -
What feedback do rejected applicants want?
From speaking with rejected job applicants within the EA ecosystem during the past year, I roughly conclude that they want feedback in two different ways:
The first way is just emotional care, which is really just a different way of saying “be kind rather than being mean or being neutral.”[1] They don’t want to feel bad, because rejection isn’t fun. Anybody who has been excluded from a group of friends, or kicked out of a company, or in any way excluded from something that you want to be included in knows that it can feel bad.[2] It feels even worse if you appear to meet the requirements of the job, put in time and effort to try really hard, care a lot about the community and the mission, perceive this as one of only a few paths available to you for more/higher impact, and then you get summarily excluded with a formulaic email template. There isn’t any feasible way to make a rejection feel great, but you can minimize how crappy it feels. Thank the candidates for their time/effort, and emphasize that you are rejecting this application for this role rather than rejecting this person in general. Don’t reject people immediately after their submission; wait a couple of days. If Alice submits a work trial task and less than 24 hours later you reject her, it feels to her like you barely glanced at her work, even if you spent several hours diligently going over it.
Improving. People want actionable feedback. If they lack a particular skill, they would like to know how to get better so that they can go learn that skill and then be a stronger candidate for this type of role in the future. If the main differentiator between candidates Alice and Bob is that Alice scored 50 points better on an IQ test or that Alice attended Impressive School while Bob attended No Name School, maybe don’t tell Bob that.[3] But if the main differentiator is that Alice has spent a year being a volunteer for the EA Virtual Program or that Alice is really good with spreadsheets or that Bob didn’t format his documents well, that is actionable, and gives the candidate a signal regarding how they can improve. Now the candidate knows something they can do to make become a more competitive candidate. They will practice their Excel skills and look up spreadsheet tutorials, they can get some volunteering experience with a relevant organization, and they look up how to use headers and to adjust line spacing. Think of this like a company investing in the local community college and sponsoring a professorship at the college: they are building a pipeline of potential future employees.
Here is a rough hierarchy of what, in an ideal world, I’d like to receive when I am rejected from a job application:
“Thanks for applying. We won’t be moving forward with your application. Although it is never fun to receive an email like this, we want to express appreciation for the time you spent on this selection process. Regarding why we choose to not move forward with your application, it looks like you don’t have as much experience directly related to X as the candidates we are moving forward with, and we also want someone who is able to Y. Getting experience with Y is challenging, but some ideas are here: [LINK].”
“Thanks for applying. We won’t be moving forward with your application. It looks like you don’t have as much experience directly related to X as the most competitive candidates, and we also want someone who is able to Y.”
“Thanks for applying. We won’t be moving forward with your application.”
That last bullet point is what most EA organizations send (according to conversations I’ve had with candidates, as well as my own experiences in EA hiring rounds). I have seen two or three that sometimes send rejections that are similar to the first or similar to the second.[4] If the first bullet point looks too challenging and you think that it would take too much staff time, then see if you can do the second bullet point: simply telling people why (although this will dependent on the context) can make rejections a lot less hurtful, and also points them in the right direction for how to get better.
I haven’t seen any EA orgs being mean in their rejections, but I have seen and heard of most of them being neutral.
I still remember how bad it felt being told that I couldn’t join a feminist reading group because they didn’t want any men there. I think that was totally understandable, but it still felt bad to be excluded. I remember not being able to join a professional networking group because I was older than the cutoff age (they required new members to be under 30, and I was 31 when I learned about it). These things happened years ago, and were not particularly influential in my life. But people remember being excluded.
Things that people cannot change with a reasonable amount of time and effort (or things that would require a time machine, such as what university someone attended) are generally not good pieces of feedback to give people. These things aren’t actionable.
Last I saw, the Centre for Effective Altruism and Animal Advocacy Careers both had systems in place helping them to do better than average. It has been a while since I’ve interaction with the internals of either of their hiring systems, but last I checked they both send useful and actionable feedback for at least some of their rejections.
I’m on board with a lot of your emotional care advice, but,,,
...I feel like your mileage may vary on this one. I don’t like being in suspense, and moreover it’s helpful from a planning perspective to know what’s up sooner rather than later. I’d say instead that if you want to signal that you spent time with someone’s application, do it by making sure your rejection is conspicuously specific (i.e. mentions features of the applicant or their submissions, even if only superficially).
I also think you missed an entire third category of reason to want feedback, which is that if I stand no hope of getting job X, no matter how much I improve, I do really want to know that, so I can make choices about how much time to spend trying to get that job or jobs like it. It feels like a kindness to tell me I can do anything I put my mind to, but if it’s not true then you’re just setting me up for more pain in the future. (Similarly, saying “everyone should apply, even if you’re not sure you’re qualified” sounds like a kindness but does have a downside in terms of increasing the number of unsuccessful applicants; sometimes it’s worth it anyway, but the downside should be acknowledged.)
There is a sort of a trade-off to notifying people immediately or notifying them after a couple of days. My best guess is that it generally won’t make a difference for someone’s planning to be rejected from a job application in less than 24 hours or to be rejected within a few days. But there is probably a lot of variation in preferences from one person to another; maybe I am impacted by this more than average. I’m probably heavily influences by a typical mind fallacy here as well, as I am very sloppily generalizing from my own internal state.
I’ve had a few job applications that I submitted and then got rejected for an hour or two later, and emotionally that felt so much worse. But at the end of the day I think you are right that “your mileage may vary.”
Good point! I hadn’t thought of that, but that would be very helpful feedback to have.
I’m been mulling over the idea of proportional reciprocity for a while. I’ve had some musings sitting a a Google Doc for several months, and I think that I either share a rough/sloppy version of this, or it will never get shared. So here is my idea. Note that this is in relation to job applications within EA, and I felt nudged to share this after seeing Thank You For Your Time: Understanding the Experiences of Job Seekers in Effective Altruism.
- - - -
Proportional reciprocity
I made this concept up.[1] The general idea is that relationships tend to be somewhat reciprocal, but in proportion to the maturity/growth of the relationship: the level of care and effort that I express toward you should be roughly proportional to the level of effort and care that you express toward me. When that is violated (either upward or downward) people feel that something is wrong.[2] The general idea (as far as it relates to job applications and hiring rounds) is that the more of a relationship the two parties have, the more care and consideration the rejection should involve. How does this relate to hiring in the context of EA? If Alice puts in 3 hours of work, and then Alice perceives that Bob puts in 3 minutes of work, Alice feels bad. That the simplistic model.
As a person running a hiring round, you might not view yourself as having a relationship with these people, but there is a sort of psychological contract which exists, especially after an interview; the candidate expects you to behave in certain ways.
One particularly frustrating experience I had was with an EA organization that had a role with a title, skills, and responsibilities that matched my experience fairly well. That organization reached out to me and requested that I answer multiple short essay-type questions as a part of the job application.[3] I did so, and I ended up receiving a template email from a noreply email address that stated “we have made the decision to move forward with other candidates whose experience and skills are a closer match to the position.” In my mind, this is a situation in which a reasonable candidate (say, someone not in the bottom 10%) who spent a decent chunk of time thoughtfully responding to multiple questions and who actually does meet the stated requirements for the role, is blandly rejected. This kind of scenario appears to be fairly common. And I wouldn’t have felt so bitter about it if they hadn’t specifically reached out to me and asked me to apply. Of course, I don’t know how competitive I was or wasn’t; maybe my writing was so poor that I was literally the worst-ranked candidate.
What would I have liked to see instead? I certainly don’t think that I am owed an interview, nor a job offer, and in reality I don’t know how competitive the other candidates were.[4] But I would have liked to have been given a bit more information beyond the implication of merely “other candidates are a better match.” I would love to be told in what way I fell short, and what I should do instead. If they specifically contacted me to invite me to apply, something along the lines of “Hey Joseph, sorry for wasting your time. We genuinely thought that you would have been among the stronger candidates, and we are sorry that we invited you to apply only to reject you at the very first stage.” That would have felt more human and personal, and I wouldn’t hold it against them. But instead I got a very boilerplate email template.
Of course, I’m describing my own experience, but lots of other people in EA and adjacent to EA go through this. It isn’t unusual for candidate to be asked to do 3-hour work trials without compensation, to be invited to interview and then rejected without information, or to meet 100% of the requirements of a job posting and then get rejected 24 hours after submitting an application.[5]
If this is an example of the applicant putting in effort and not getting reciprocity, the other failure mode that I’ve seen is the applicant being asked for more and more effort. A hiring round from one EA adjacent organization involved a short application form, and then a three-hour unpaid trial task. I understand the need to deal with a large volume of applicants; interviewing 5-10 people is feasible, interviewing 80 is less so. What would I have liked to see instead? Perhaps a 30-minute trial task instead of a three-hour trial task. Perhaps a 10-minute screening interview. Perhaps an additional form with some knockout questions and non-negotiables. Perhaps a three hour task that is paid.
Although some social psychologist has probably thought of it before me and in much more depth.
There are plenty of exceptions, of course. I can’t obligate you to form a friendship with me by doing favors or by giving you gifts. The genuineness matters also: a sycophant who only engages in a relationship in order to extract value isn’t covered by proportionally reciprocity. And there are plenty of misperceptions regarding what level a relationship has reached; I’ve seen many interpersonal conflicts arise from two people having different perceptions of the current level of reciprocity. I think that this is particularly common in romantic relationships among young people.
I don’t remember exactly how much time I spent on the short essays. I know that it wasn’t a five-hour effort, but I also know that I didn’t just type a sentence or two and click ‘submit.’ I put a bit of thought into them, and I provided context and justification. Maybe it was between 30 and 90 minutes? One question was about DEI and the relevance it has to the work that organization did. I have actually read multiple books on DEI and I’ve been exploring that area quite a bit, so I was able to elaborate and give nuance on that.
Maybe they had twice as much relevant work experience as me, and membership in prestigious professional institutions, and experience volunteering with the organization. Or maybe I had something noticeably bad about my application, such as a blatant typo that I didn’t notice.
None of these are made up scenarios. Each of these has happened either to me or to people I know.
maybe a version of this that is more durable to the considerations in your footnote is: the level of care and effort that I ask from you should be roughly proportional to the level that I express towards you
if I ask for not much care and effort and get a lot, that perhaps should be a prompt to figure out if I should have done more to protect my counterpart from overinvesting, if I accidentally overpromised or miscommunicated, but ultimately there’s only so much responsibility you can take for other people’s decisions
For anyone who is interested in tech policy, I thought I’d share this list of books from the University of Washington’s Gallagher Law Library: https://lib.law.uw.edu/c.php?g=1239460&p=9071046
The collection ranging from court-focused, to privacy-focused books, to ethics, to criminal justice. There is an excellence breadth of material.
(not well thought-out musings. I’ve only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn’t want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven’t we seen any indications of it?
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don’t see signs of extraterrestrial life because civilizations tend to create unaligned AI, which destroys them. But I suspect that the AI-relevant variation would actually be something more like this:
Like many things, I suppose the details matter immensely. Depending on the morality of the creators, an aligned AI might reach spend resources expanding civilization throughout the galaxy, or it might happily putt along maintaining a globe’s agricultural system. Depending on how an unaligned AI is unaligned, it might be focused on turning the whole universe into paperclips, or it might simply kill its creators to prevent them from enduring suffering. So on a very simplistic level it seems that the claim of “civilizations tend to make AI eventually, and it really is a superintelligent and world-changing technology” is consistent with reality of “we don’t observe any signs of extraterrestrial intelligence.”
This is a random musings of cultural norms, mainstream culture, and how/where we choose to spend our time and attention.
Barring the period when I was roughly 16-20 and interested in classic rock, I’ve never really been invested in music culture. By ‘music culture’ I mean things like knowing the names of the most popular bands of the time, knowing the difference between [subgenre A] and [subgenre A] off the top of my head, caring about the lives of famous musicians, etc.[1] Celebrity culture in general is something I’ve never gotten into, but avoiding TV, radio, and advertisements has meant that the messaging which most people are inundated with passes me by.
A YouTube video called 5 Songs You’ve Never Heard That You’ve Heard 1000 Times reminded me this morning of what a HUGE difference there is between the level of care/attention I have for music and the level of care/attention that I perceive as normal. I don’t think I have ever heard any of these songs before, and half of the musicians I’ve never heard of either.[2] Which is a little curious/odd/funny from a cultural perspective, since apparently the target audience has heard these songs so many times.
I suppose we all have our our areas of focus and specialization.
I’ve heard of the names of a variety of famous bands or musicians from the past few decades, but for most of them I’ve never bothered to spend time exploring what they really are.
If mainstream/pop music is a category in a pub quiz, I am probably not going to be of any help.
This is just for my own purposes. I want to save this info somewhere so I don’t lose it. This has practically nothing to do with effective altruism, and should be viewed as my own personal blog post/ramblings.
I read the blog post What Trait Affects Income the Most?, written by Blair Fix, a few years ago, I really enjoyed seeing some data on it. At some point later I wanted to find it and I couldn’t find it, and today I stumbled upon it again. The very short and simplistic summary is that hierarchy (a fuzzy concept that I understand to be roughly “class,” including how wealthy your parents were, were you were born, and other factors) is the biggest influence on lifetime earnings[1]. This isn’t a huge surprise, but it is nice to see some references to research comparing class, education, occupation, race, and other factors.
Opportunity, equity, justice/fairness… these are topics that I probably think about too much for my own good.[2]
Of course, like most research, this isn’t rock solid, and lacking the breadth of knowledge I’m not able to make a sound critique of the research. I also want to be wary of confirmation bias, since this is basically a blog post telling me that what I want to be true it true, so there is another grain of salt I should keep in mind.
I would probably think about them less if I had been born into an upper-middle class family, or if I suddenly inherited $500,000. Just like a well-fed person doesn’t think about food, or a person with career stability isn’t anxious about their job. However, I think that if write about or talk about what leads to success in life then I will be perceived as angry/bitter/envious (especially since I don’t have any solutions or actions, other than a vague “fortunate people be more humble”), and that isn’t how I want people to perceive me. Thus, I generally try to avoid bringing up these topics.
I vaguely remember reading something about buying property with a longtermism perspective, but I can’t remember the justification against doing it. This is basically using people’s inclination to choose immediate rewards over rewards that come later in the future. The scenario was (very roughly) something like this:
This feels like a very naïve question, but if I had enough money to support myself and I also had excess funds outside of that, why not do something like this as a step toward building an enormous pool of resources for the future? Could anyone link me to the original post?
That’s like what is known as a “life estate” except for a fixed term of years. It has similiarities to offering a long-term lease for an upfront payment . . and many of the same problems. The temporary possessor doesn’t care about the value of the property in year 51, so has every incentive to defer maintenance and otherwise maximize their cost/benefit ratio. Just ask anyone in an old condo association about the tendency to defer major costs until someone else owns their unit . . .
If you handle the maintenance, then this isn’t much different than a lease . . . better to get a bank loan and be an ordinary lessor, because the 50-year term and upfront cash requirement are going to depress how much you make. If you plan on enforcing maintenance requirements for the other person, that will be a headache and could be costly.
I’m grappling with an idea of how to schedule tasks/projects, how to prioritize, and how to set deadlines. I’m looking for advice, recommending readings, thoughts, etc.
The core question here is “how should we schedule and prioritize tasks whose result becomes gradually less valuable over time?” The rest of this post is just exploring that idea, explaining context, and sharing examples.
Here is a simple model of the world: many tasks that we do at work (or maybe also in other parts of life?) fall into either sharp decrease to zero or sharp reduction in value.
The sharp decrease to zero category. These have a particular deadline beyond which they offer no value, so you should really do the task before that point.
If you want to put me in touch with a great landlord to rent from, you need to do that before I sign a 12-month lease for a different apartment; at that point the value of the connection is zero.
If you want to book a hotel room prior to a convention, you need to do it before the hotel is fully booked; if you wait until the hotel is fully booked, calling to make that reservation is useless.
If you want to sharing the meeting agenda to allow attendees to prepare for a meeting, you have to share it prior to the meeting starting.
The sharp reduction in value category. You should do these tasks before the sharp reduction in value. Thus, the deadline is when value is about to sharply decrease.
Giving me food falls into the sharp sharp reduction category, because if you wait until I’ve I’m already satiated by eating a full meal, the additional food that you give me has far less value than if you had given it to me before my meal.
Setting deadlines for these kinds of tasks is, in a certain sense, simple: do it at some point before the decrease in value. But what about tasks that decrease gradually in value over time?
We can label these as the gradual reduction category.
Examples include an advertisement for a product that launched today and will be sold for the next 100 days. If I do this task today I will get 100% of it’s value, or if I do it tomorrow I will get 99% of it’s value, and so on, all the way to last day that will add any value.
I could start funding my retirement savings today or tomorrow, and the difference is negligible. In fact, the difference between any two days is tiny. But if I delay for years, then the difference will be massive. This is kind of a “drops of water in a bucket” issue: a single drop doesn’t matter, but all together they add up to a lot.
Should you start exercising today or tomorrow? Doesn’t really matter. Or start next week? No problem. Start 15 years from now? That is probably a lot worse.
If you want to stop smoking, what difference does a day make?
Which sort of leads us back to the core question. If the value decreases gradually rather than decreasing sharply, then when do you do the task?
I suppose one answer is to do the task immediately, before it has any reduction in value. But that also seems like it isn’t what we actually do. In terms of prioritizing, instead of doing everything immediately, people seem to push tasks back to the point just before they would cause problems. If I am prioritizing, I will probably try hard to to the sharp reduction in value task (orange in the below graph) before it has the reduction in value, and then I’ll prioritize the sharp decrease to zero task (blue in the graph), finally starting on my lowest priority task once the other two are finished. But that doesn’t seem optimal, right?
I’ve been reading a few academic papers on my “to-read” list, and The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else? has a section that made me think about epistemics, knowledge, and how we try to make the world a better place. I’ll include the exact quote below, but my rough summary of it would be that multiple studies found no relationship between the presence or absence of highway shoulders and accidents/deaths, and thus they weren’t built. Unfortunately, none of the studies had sufficient statistical power, and thus the conclusions drawn were inaccurate. I suppose that absence of evidence is not evidence of absence might be somewhat relevant here. Lo and behold, later on a meta-analysis was done, finding that having highway shoulders reduced accidents/deaths. So my understanding is that inaccurate knowledge (shoulders don’t help) led to choices (don’t build shoulders) that led to accidents/deaths that wouldn’t otherwise have happened.
I’m wondering if there are other areas of life that we can find facing similar issues. These wouldn’t necessarily be new cause areas, but the general idea of identify an area that involves life/death decisions, and then either make sure the knowledge is accurate or attempt to bring accurate knowledge to the decision-makers would be incredibly helpful. Hard though. Probably not very tractable.
For anyone curious, here is the relevant excerpt that prompted my musings:
Evidence-Based Management
What? Isn’t it all evidence-based? Who would take actions without evidence? Well, often people make decisions based on an idea they got from a pop-business book (I am guilty of this), off of gut feelings (I am guilty of this), or off of what worked in a different context (I am definitely guilty of this).
Rank-and-yank (I’ve also heard it called forced distribution and forced ranking, and Wikipedia describes it as vitality curve) is an easy example to pick on, but we could easily look at some other management practice in hiring, marketing, people management, etc.
I like one-on-ones. I think that one-on-ones are a great way to build a relationship with the people on your team, and they also provide a venue for people to bring you issues. But where is the evidence? I’ve never seen any research or data to suggest that one-on-ones lead to particular outcomes. I’ve heard other people describe how they are good, and I’ve read blog posts about why they are a best practice, but I’ve never seen anything stronger than anecdote and people recommending them from their own experience.
It was an HBR article from 2006 (which I found as a result of a paper titled Evidence-Based I–O Psychology: Not There Yet) that I recently read that got me thinking about this more, but I’m considering reading into the area more and writing a more in-depth post about it. It lines up nicely with two different areas of interest of mine: how we often make poor decisions even when we have plenty of opportunities to make better decisions, and learning how to run organizations well.
I’m curious if you have evidence-based answers to Ben West’s question here.
I haven’t read any research or evidence demonstrating one leadership style is better than another. My intuitions and other people’s anecdotes that I’ve heard tell me that certain behaviors are more likely or less likely to lead to success, but I haven’t got anything more solid to go on that that at the moment.
Similarly, I haven’t read any research showing (in a fairly statistically rigorous way) that lean, or agile, or the Toyota Production System, or other similar concepts are effective. Anecdote tells me that they are, and the reasoning for why they work makes sense to me, but I haven’t seen anything more rigorous.
Nicholas Bloom’s research is great, and I am glad to see his study of consulting in India referenced on the EA forum. I would love to see more research measuring impacts of particular management practices, and if I was filthy rich that is probably one of the things that I would fund.
I’m assuming that there are studies about smaller-level actions/behaviors, but it is a lot easier to A-B test what color a button on a homepage should be than to A-B test having a cooperative work culture or a competitive work culture.
I think of the the tricky things is how context matters to much. Just because practice A is more effective than practice B in a particular culture/industry/function, doesn’t mean it will apply to all situations. As a very simplistic example, rapid iteration is great for a website’s design, but imagine how horrible it would be for payroll policy.