An interesting data point is that the current Director of Operations at Open Philanthropy, Beth Jones, was previously the Chief Operating Officer of the Hillary Clinton 2016 campaign.
On the other hand, the four operations associates most recently hired by OpenPhil have impressive but not overwhelmingly intimidating backgrounds. I’d like to know how many applied for those four positions.
Could you clarify a bit what you mean by “who”? As in, are you looking for organizations, names of individuals, personality types, or backgrounds of people who’d be more interested in management, or something else?
Wouldn’t the prosecutor drop the charge?
I wouldn’t say I’m opposed to the idea of sentientism; I agree with basically all of its claims and conclusions. But I don’t think it’d be good to strongly associate EA with sentientism, and I don’t think it adds much to discussions of ethics.
On the first, I agree pretty strongly with the framing that effective altruism is a question, not an ideology, so I don’t want to prescribe the ethics that someone must agree with in order to care about effective altruism.
Second, as I currently understand it (which is not super well), sentientism seems only to take one ethical stance: conscious experience is the source of all moral value. This is definitely different from a stance that gods or humans or carbon-based life are the only sources of moral value, so kudos for having a position. But it takes no stance on most of the most important ethical questions: deontology vs consequentialism vs others, realism vs non-realism, internalism vs externalism, moral uncertainty. Even assuming a utilitarian starting point, it takes no stance on person-affecting views, time discounting, preference vs hedonic utilitarianism, etc. Sentientism is my favorite answer to the question it’s trying to answer, but it’s hardly a comprehensive moral system.
[Meta: I’m still glad you posted this. We need people to think about new ideas, even though we’re not going to agree with most of them.]
Thank you for this; there are plenty of others who feel the same way. While I never experienced these feelings in an overwhelming or depressing way, I’ve felt this same guilt for taking care of myself before engaging in altruism.
This SlateStarCodex post convinced me that my view was simply incorrect. To be an effective altruist is to do the most good possible, and to feel guilty or to shame others for only doing some good and not all of the good is counterproductive to EA goals—it hurts you, it hurts EA as a movement, and ultimately that will hurt the people you’re trying to help in the first place. There is no “correct” line of how much to give, so to help us help others without feeling guilty, EA/GWWC has decided to draw that line at 10%. Feel free to go above, but it’s absolutely not an obligation.
Of course, knowing you shouldn’t feel guilty is easier than escaping the emotion of guilt, and nobody can blame you for the feeling. But I genuinely believe on an intellectual level that I ought not feel guilty for most of the good I don’t do, and it helps.
Good point, I hadn’t considered that. If I were to try to fit this to my model, I would say that there’s nobody really looking to produce the best military technology/tactics in between wars. But if you look at a period of sustained effort in staying on the military cutting edge, e.g., the Cold War, you won’t see as many of these mistakes; instead you’ll find fairly continuous progress, with both sides continuously using the best available military technology. I’m not sure if this is actually a good interpretation, but it seems possible. (I’d be interested in where you think we’re failing today!)
But even if this is right, your original claim still holds: if it takes Cold War levels of vigilance to stay on the cutting edge, then terrorists probably aren’t deploying the best available weaponry, simply because they don’t know about it.
So maybe an exceptional effort can keep you on the cutting edge, but terrorist groups aren’t at that cutting edge?
The clearest explanation seems to be that extremely few people, terrorists included, are seriously trying to figure out the most effective ways to kill strangers—if they were, they’d be doing a better job of it.
AI Impacts’ discontinuous progress investigation finds that it’s really hard to make sudden progress on metrics that anyone cares about, because the low-hanging fruit will already be gone. I doubt national militaries routinely miss effective ways to conduct war: when they make a serious effort, they find the best weapons.
If terrorists aren’t noticing the most effective ways to maximize their damage, it could be good evidence that they’re not seriously trying. (So +1 to Gwern’s theory)
Hey, saw your other post so just wanted to give some feedback. FWIW I think this is a good idea and good post. It builds on a concept that’s already been somewhat discussed, does a good job brainstorming pros, cons, challenges, and ideas, and overall is a very good conversation starter and continuer.
As for the negative feedback, one possibility is that I could see people disliking your “hard to abandon” concept. There’s a fair bit of focus in EA on not causing harm when trying to do good, and one of the most advocated ways to avoid doing harm is to be cautious before taking irreversible actions. I could see someone arguing that a poor implementation of this idea is worse than none at all (because it would undermine possible future attempts, or hurt startup EA projects’ reputation for actually succeeding). I’d personally agree that a poor rollout could well be worse than none, and that the general mindset around this should probably be to do it right or not at all, though I don’t see that as reason enough to downvote.
Also, as another newcomer who feels self-conscious/nervous beginning to post on here, just my encouragement to stick with it. It seems very likely that our input is valuable and valued, even when it feels ignored.
Thanks for this! I think there should be a lot more introduction material to effective altruism, and this is a great step.
One stat I’d nitpick: I think GiveWell and other charity evaluators would pretty strongly disagree with the statement that someone can save a life with $586.
First, $586 is on the very low end of GiveWell’s estimates for the cost of saving a life. From their website: “As of November 2016, the median estimate of our top charities’ cost-effectiveness ranged from ~$900 to ~$7,000 per equivalent life saved.”
Second, that’s not literally saying $ per life saved, it’s saying $ per “equivalent life saved”. GiveWell does moral weight conversions, meaning, e.g., that if an intervention increases consumption by 25% for 100 people for one year, their moral weight system would count that as equivalent to saving 0.685 lives. It’s tough to make conversions like that, and in a world with unavoidable tradeoffs it’s essential, but we should be transparent about when we’re doing these conversions. (I’m actually not sure whether this is an important factor in the fistula case; it’s more just a general warning.)
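To make this kind of conversion concrete, here’s a toy sketch in Python. The weight `DOUBLING_PER_LIFE` and the log-doubling framing are invented for illustration; they are not GiveWell’s actual model or numbers.

```python
import math

# Hypothetical moral weight (NOT GiveWell's actual figure):
# the value of doubling one person's consumption for one year,
# as a fraction of the value of saving one life.
DOUBLING_PER_LIFE = 0.02  # assumed for illustration

def equivalent_lives(pct_increase, people, years, weight=DOUBLING_PER_LIFE):
    """Convert a consumption gain into 'equivalent lives saved'.

    A 25% increase is log2(1.25) ~ 0.32 of a doubling, so the total
    scales with the number of doublings, people, and years.
    """
    doublings = math.log2(1 + pct_increase)
    return doublings * people * years * weight

# 25% more consumption for 100 people for one year:
print(round(equivalent_lives(0.25, 100, 1), 3))
```

With these made-up weights the answer comes out near 0.64 “equivalent lives”; the point is just that any headline “lives saved” number quietly depends on a weight like `DOUBLING_PER_LIFE`.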
Third, GiveWell seems to strongly believe that “we can’t take expected value estimates literally, even when they’re unbiased”, because experience shows that exceptionally effective charities are simply rare. An example: if a high school physics student collects some experimental data that disproves F=ma, do you believe him? No, because this new evidence is much weaker than our prior belief. Similarly, if a new charity comes out with an estimate that says it can save a life for $1, do we believe it? Probably not—not because the study was flawed or biased or malicious or anything like that, but because there’s way better odds that the study was somehow wrong than that they can actually save lives for $1.
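The reasoning above is just Bayes’ rule, and a three-line sketch shows how little a surprising study should move us. Every probability here is a made-up number for illustration:

```python
# How much should a "$1 per life saved" study move us?
# All three probabilities are assumptions for illustration.
prior = 0.001            # prior: a charity is truly this effective
p_study_if_true = 0.8    # the study would find this if it were true
p_study_if_false = 0.1   # the study finds it anyway (noise, bias, error)

posterior = (p_study_if_true * prior) / (
    p_study_if_true * prior + p_study_if_false * (1 - prior)
)
print(round(posterior, 4))
```

Even with an 8-to-1 likelihood ratio in the study’s favor, the posterior stays under 1%, because the prior against $1-per-life charities was so strong.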
One of the toughest parts about intros to EA is dealing with numbers like these. It’s been debated with Giving What We Can and Will MacAskill’s Doing Good Better. It’s tempting and effective to give a jarring headline like “This campaign saved #x lives today”, but all in all, I think it’s the right move not to oversell and to be honest about our uncertainty.
(But seriously, really cool project)
Good point, I think you could reframe it to still work: If the goal is to treat mental health issues in EA, the subset of people you could actually reach with treatment is probably fairly similar to the subset that would answer this poll: people who use the Forum.
It probably can’t deliver accurate numbers on prevalence, but it can profile the people it’s targeting on their demographics and desires.
My 2 cents: Nobody’s going to solve the question of social justice here. The path forward is to agree on whatever common ground is possible, and to make sure that disagreements are (a) clearly defined, avoiding big vague words, (b) narrow enough to have a thorough discussion, and (c) relevant to EA. Otherwise, it’s too easy to disagree on the overall “thumbs up or down to social justice” question, and not notice that you in fact do agree on most of the important operational questions of what EA should do.
So “When introducing EA to newcomers, we generally shouldn’t discuss income and IQ, because it’s unnecessary and could make people feel unwelcome at first” would be a good claim to disagree on, because it’s important to EA, and because the disagreement is narrow enough to actually sort out.
Other examples of narrow and EA-relevant claims that therefore could be useful to discuss: “EA orgs should actively encourage minority applicants to apply to positions”; “On the EA Forum, no claim or topic should be forbidden for diversity reasons, as long as it’s relevant to EA”; or “In public discussions, EAs should make minority voices welcome, but not single out members of minority groups and explicitly ask for their opinions/experiences, because this puts them in a potentially stressful situation.”
On the other hand, I think this conversation has lots of claims that are (a) too vague to be true or false, (b) too broad to be effectively discussed, or (c) not relevant to EA goals. Questions like these include “Are women oppressed?”, “Is truth more important than inclusivity?”, or “Is EA exclusionary?” It’s not obvious what it would really mean for these to be true or false, you’re unlikely to change anyone’s mind in a reasonable amount of time, and their significance to EA is unclear.
My guess is that we all probably agree a lot on specific operationalized questions relevant to EA, and disagree much more when we abstract to overarching social justice debates. If we stick to specific, EA-relevant questions, there’s probably a lot more common ground here than there seems to be.
Strongly agreed. I really like Raemon’s analysis of why it’s so hard to get EA careers: we’re network-constrained. [This isn’t exactly how he frames it; it’s more my take on his idea.]
Right now, EA operates very informally, relying heavily on the fact that the several hundred people working at explicitly EA orgs are all socially networked together to some degree. This social group was significantly inherited from LessWrong and Bay Area rationalism, and EA has had great success in co-opting it for EA goals.
But as EA grows beyond its roots, more people want in, and you can’t have a social network of ten thousand, let alone a million. So we have two options: (a) increase the bandwidth of the social network, or (b) stop relying so much on the social network.
(a) increasing bandwidth looks like exactly what you’re talking about: create ways for newcomers to EA to make EA friends, develop professional relationships with EAs, etc., by creating better online platforms and in person groups.
(b) not relying on personal relationships looks like becoming more corporate, relying on traditional credentials, scaling up until people actually stand a strong chance of landing jobs via open application, etc.
(a) seems to have clear benefits with no obvious harms, as long as it can be done, so it seems very much worth it for us to try.
I think part of what might be driving the difference of opinion here is that the type of EAs who need a 45-minute chat are not the type of EAs that 80k meets. If you work at 80k, you and most of the EAs you know probably have dozens of EA friends, have casual conversations about EA, pick up informal knowledge easily, and can talk out your EA ideas with people who can engage. But the majority of people who call themselves EA probably don’t have many, if any, friends who work at EA organizations, donate lots, provide informal knowledge of EA, or can seriously help them figure out how to have a high-impact career.
A 45-minute discussion can therefore do a lot more good for someone outside the EA social circle than for someone who has friends who can have this conversation with them.
Good point, I wasn’t fully considering that. I think Michael Plant’s recent investigation into mental health as a cause area is a perfect example of the value of independent research: mental health isn’t an area GiveWell has focused on. While I still think it’s going to be extremely difficult to beat GiveWell at, e.g., evaluating which deworming charity is most effective, or which health intervention tends to be most effective, I do think independent researchers can make important contributions by identifying GiveWell’s “blind spots”.
Mental health and education both could be good examples. At this point, GiveWell doesn’t recommend either. But they’re not areas that GiveWell has spent years building expertise in. So it’s reasonable to expect that, in these areas, a dedicated newcomer can produce research that rivals GiveWell’s in quality.
So I’d revise my stance to: Do your own research if there’s an upstream question (like the moral value of mental suffering, the validity of life satisfaction surveys, or the intrinsic value of education) that you think GiveWell might be wrong about. Often, you’ll conclude that they were right, but the value of uncovering their occasional mistakes is high. Still, trust GiveWell if you agree with their initial assumptions on what matters.
Just a thank you for sharing, it can be scary to share your personal background like this but it’s extremely helpful for people looking into EA careers.
I really like the education review, it seems like a great introduction to the literature on effective education interventions. And it’s even better that you’ll be reviewing health interventions soon, given that they seem generally more effective than education, both in terms of certainty and overall impact.
But I would still have strong confidence that GiveWell’s top charities all have significantly higher expected value than the results of this investigation, for two reasons.
First, GiveWell has access to the internal workings of charities, allowing them to recommend charities that do a better job of implementing their interventions. This goes as far as GiveWell making almost a dozen site visits over the past five years to directly observe these charities in action. There’s just no way to replicate this without close, prolonged contact with all the relevant charities.
Second, GiveWell simply has more experience and expertise in development evaluations than someone doing this in their free time. It’s fantastic that you all are working with these donors, and your actions seem likely to have a strong impact. But GiveWell has 25 staff, a decade of experience in the area, and access to any relevant experts and insider information. It’s very difficult to replicate the quality of recommendations that come from that process. Doing the research yourself has other benefits: it increases engagement with the cause, it teaches a valuable skill, etc. But when there’s a million dollars to be donated, it might be best to trust GiveWell.
If the donors want an intervention that’s both certain and transformative, GiveDirectly seems like an obvious choice.
Really cool thought, this is persuasive to me.
If I can try to rephrase your beliefs: economic rationality tells us that tradeoffs do in fact exist, and therefore rational agents must be able to make a comparison in every case. There has to be some amount of every value that you’d trade for some amount of every other value; otherwise you’ll end up paralyzed and decisionless.
You’re saying that, although we’d like to have this coherent total utility function, realistically it’s impossible. We run into the theoretical problems you mention, and more fundamentally, some of our goals simply are not maximizing goals, and there is no rule that can accurately describe the relationship between them. Do we end up paralyzed and decisionless, with no principled way to trade off between the different goals? Yes, that’s unavoidable.
And one clarification: would you say that this non-comparability is a feature more of human preferences, where we biologically have desires that aren’t integrated into a single utility function, or of morality, where there are independent goals with independent moral worth?
As a college student, I volunteer a few hours a week at Faunalytics, an EA-aligned animal welfare advocacy/research group. I think volunteering with Faunalytics is a good candidate for a small-scale Task Y.
I started off by editing their old article archives and updating them to fit their new article formatting. It was pretty boring, but it was useful for Faunalytics because it let them publish their archived research summaries, and it let me show Faunalytics that I was committed and could be trusted with responsibility.
Sometimes I’d rewrite old articles that seemed poorly done, and after a few months my supervisor liked my writing enough to move me up to doing my own research summaries. Each week, I’d be assigned a paper about something relevant to animal or environmental advocacy. I’d write an 800-word summary in the style of a blog post, and Faunalytics would publish it to their library. Here’s some of what I wrote (the tagging system is buggy and doesn’t list a lot of my articles).
I recently stopped doing research summaries for time reasons, but I’m now working with their research team on analyzing data from their annual Animal Tracker survey poll.
The parts I’ve really enjoyed about the work are:
The papers could be interesting, and I learned a bit about animal topics
I think most of what I wrote was informative and would be useful to e.g. animal activists who wanted to better understand a particular question. Examples: Does ecotourism help or harm local wildlife? What’s the relationship between domestic violence and animal abuse? (But, see below: informative and useful to some people is not necessarily the same as effective in doing good)
Writing research summaries is very engaging work at just the right level of difficulty, and my writing skills markedly improved
It can lead to other opportunities: They now trust me enough to let me do their data analysis project, which is really fun, educational, and (given that I’m a student) will probably be the most legitimate thing I’ve published once it’s done. I’d also be comfortable asking my supervisor for a recommendation letter for a job, and if I wanted to get more involved in EA animal rights, I think I’d be able to make connections through Faunalytics.
The parts that weren’t so great are:
On the whole, I’m not sure I’ve had much impact. If I were convinced that the majority of causes within animal welfare are effective, then I would probably think I’ve had a good positive impact. But I don’t think e.g. the environmental impacts of ecotourism are very important from an altruistic standpoint, which really decreases the value of my work.
Being a low-commitment volunteer is simply a bad arrangement in a lot of ways. At least for me, doing something a few hours a week often leads to doing it zero hours a week, especially when it’s a volunteer relationship where you’ve made very little firm commitment and there are no consequences for being late or failing to deliver. I think I combated this pretty well by forcing myself to stick to deadlines, but I totally understand the GiveWell position of not accepting volunteers because they’re not committed enough.
On the whole, for anyone looking to explore working in EA more broadly, I think volunteering at Faunalytics is a great idea: the possibility of direct impact, mostly engaging work, and a strong opportunity to prove yourself and make connections that can lead to future opportunities. Check it out here if you’re interested, and feel free to message me with questions.
(Anybody have input on whether I should write a full post about my experience/advertising the opportunity?)
The prize definitely seems useful for encouraging deeper, better content. One question: would a smaller, more frequent set of prizes be more effective? Maybe a prize every two weeks?
My intuition says a $1000 top prize won’t generate twice as much impact as a $500 top prize every two weeks. I’m thinking along the lines of prospect theory, where a win is a win and winning $500 is worth a lot more than half of winning $1000, and of the prison reform literature, where a higher chance of a smaller punishment deters crime more effectively than a small chance of a big punishment.
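As a rough illustration, a power-law value function with diminishing sensitivity captures this. The exponent 0.88 is Tversky and Kahneman’s classic estimate for monetary gains; applying it to prize motivation here is my own assumption:

```python
# Prospect-theory-style value of a gain: v(x) = x ** alpha, with alpha < 1.
# alpha = 0.88 is Tversky & Kahneman's estimate for monetary gains.
def value(x, alpha=0.88):
    return x ** alpha

one_big = value(1000)       # one $1000 prize
two_small = 2 * value(500)  # two $500 prizes over the same period
print(one_big < two_small)  # diminishing sensitivity favors frequent prizes
```

Under these assumptions, two $500 wins carry more total subjective value than one $1000 win, which is the intuition above in miniature.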
These prize posts probably create buzz and motivate people to begin, improve, and finish their posts; doubling their frequency and halving their payout could be more effective at the same cost.
(Counterargument: the biggest cost isn’t money, it’s time, and a two week turnaround is a lot for moderators. Not sure how to handle that.)
I agree that LW has been a big part of keeping EA epistemically strong, but I think most of that is selection rather than education. It’s not that reading LW makes you much clearer-thinking or more focused on truth; it’s that only people who are already that way decide to read LW, and they then get channeled to EA.
If that’s true, it doesn’t necessarily discredit rationality as an EA cause area, it just changes the mechanism and the focus: maybe the goal shouldn’t be making everybody LW-rational, it should be finding the people that already fit the mold, hopefully teaching them some LW-rationality, and then channeling them to EA.