Anti-aging seems like a plausible area for effective altruists to consider giving to, so thank you for raising this thought. It looks like GiveWell briefly looked into this area before deciding to focus its efforts elsewhere.
I’ve seen a few videos of Aubrey de Grey speaking about how SENS could make use of $100 million per year to fund research on rejuvenation therapies, so presumably SENS has plenty of room for more funding. SENS’s Form 990 tax filings show that the organization’s assets jumped by quite a lot in 2012, though this was because of de Grey’s own donations during that year. I can’t find SENS’s Form 990 for 2013, but I would naively guess that they’ve been able to start spending the money donated in 2012 during the last couple of years. I still think that it would be worthwhile to ask someone at SENS where a marginal donation to the foundation would go in the short term; maybe a certain threshold of donations needs to be reached before rejuvenation research can properly begin in the most cost-effective way.
I agree with Aubrey that too much money is spent researching cures to specific diseases, relative to the amount spent researching rejuvenation and healthspan-extension technology. I’ve focused this response on SENS because, as a person with a decent science background, I feel like Aubrey’s assertion that (paraphrased from memory) “academic research is constrained in a way that rewards low expected value projects which are likely to yield results quickly over longer term, high expected value projects” is broadly true, and that extra research into rejuvenation technologies is, on the margin, more valuable than extra research into possible treatments for particular diseases.
Thank you for sharing this! I hadn’t known that Bronies for Good had switched to fundraising for organizations recommended by GiveWell—given the variety of organizations that Bronies for Good has supported in the past, I certainly hope that they continue to support EA-approved organizations in the future, rather than moving on to another cause.
I haven’t found any such breakdown, even after looking around for a while. The 80,000 Hours interview with Aubrey, as well as a number of YouTube interviews featuring him (I don’t remember which ones, sorry), notes that Aubrey thinks SENS could make good use of $1 billion over the next ten years, but none of these sources justify why this much money is needed.
I agree with everything in your two replies to my post.
You know, I’m probably more susceptible to being dazzled by de Grey than most—he’s a techno-optimist, he’s an eloquent speaker, he’s involved in Alcor, and I personally have a stake in life-extension tech being developed. I’m not sure how much these factors have influenced me in subtle ways while I was writing up my thoughts on SENS.
Anyhow, doing cost-effectiveness estimates is one of my favorite ways of thinking about and better understanding problems, even when I end up throwing out the cost-effectiveness estimates at the end of the day.
I’m an emotivist—I believe that “x is immoral” isn’t a proposition, but, rather, is just another way of saying “boo for x”. This didn’t keep me from becoming an EA, though; I would feel hugely guilty if I didn’t end up supporting GiveWell and other similar organizations once I have an income, and being charitable just feels nice anyways.
I’ve been tentatively considering a career in the actuarial sciences recently. It seems like the field compensates people pretty well, is primarily merit-based, doesn’t require much, if any, programming ability (which I don’t really have), and doesn’t have many prerequisites to get into, other than strong mathematical ability and a commitment to taking the actuarial exams.
Also, actuarial work seems much slower-paced than the work done in many careers that are frequently discussed on 80K Hours, which would make me super happy. I’m a bit burnt out on life right now, and I really don’t want to go into a high-stress job, or a job with unusually long hours, after I graduate at the end of this semester. I guess that if I weren’t a failure, I would have figured out what I was doing after graduation by now.
Are there any actuaries in the EA movement, or does anyone have any insights about this field that I might not have? My main concern about potentially becoming a trainee actuary is that the field is somewhat prone to automation. Page 71 of this paper, which was linked to in 80K Hours’ report on career automation, suggests that there’s a 21% chance that actuarial work can be automated. The automation of certain tasks done by actuaries is frequently discussed on the actuary subreddit as well.
Thanks for reading, and for any advice or thoughts that you might have for me!
Thanks for the encouragement, Ryan!
Does anyone have any thoughts on how much we should value leading other people to donate? I mean this in a very narrow sense, and my thoughts on this topic are quite muddled, so I’ll try to illustrate what I mean with a simplified example. I apologize if my confusion ends up making my writing unclear.
If I talk with a close friend of mine about EA for a bit, and she donates $100 to, say, GiveWell, and then she disengages from EA for the rest of her life, how much should I value her donation to GiveWell? In this scenario, it seems like I’ve put some time and effort into getting my friend to donate, and she presumably wouldn’t have donated $100 if I hadn’t chatted with her, so it feels like maybe I did a few dollars worth of good by chatting with her. At the same time, she’s the one who donated the money, so it feels like she should get credit for all of the good that was done because of her donation. But wait—if I did a few dollars of good, then does that mean that she did less than $100 worth of good?
At this point, my moral intuitions on this issue are all over the place. Positing that the story above actually has a problem implies that the good done by my friend and me should sum to $100, but the only reason I’ve tacitly assumed that is that it intuitively feels true. I previously wrote a comment on LessWrong on this topic that wasn’t any clearer than this comment, and this response was quite clear, but I’m still confused.
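One way to see where the tension comes from: if each person’s impact is computed counterfactually, and both of us were individually necessary for the donation, then the two figures can legitimately sum to more than the $100 that was given. A toy sketch with entirely hypothetical numbers (not from the scenario above, which left the counterfactuals unspecified):

```python
# Toy illustration: counterfactual "credit" computed separately for
# each necessary party need not sum to the size of the donation.
donation_with_both = 100     # my friend donates after our chat
donation_without_me = 0      # she never hears about EA, so no donation
donation_without_friend = 0  # no donor, so no donation

# Each person's counterfactual impact: what happened minus what would
# have happened without them.
my_counterfactual = donation_with_both - donation_without_me          # 100
friend_counterfactual = donation_with_both - donation_without_friend  # 100

# Both counterfactual impacts are $100, so they sum to $200, not $100.
print(my_counterfactual + friend_counterfactual)  # 200
```

So the intuition that the credits "should sum to $100" is an extra assumption, not something counterfactual reasoning gives you for free.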
You mention that far meta concerns with high expected value deserve lots of scrutiny, and this seems correct. I guess that you could use a multi-level model to penalize the most meta of concerns, and calculate new expected values for different things that you might fund, but maybe even that wouldn’t be sufficient.
It seems like funding a given meta activity on the margin should be given less consideration (i.e. your calculated expected value for funding that thing should be revised further downwards) if x% of charitable funds being spent by EAs are already going to meta causes, and more consideration if only, e.g., 0.5x% of charitable funds being spent by EAs are going to meta causes. This makes sense because of reputational effects: it looks weird to new EAs if too much is being spent on meta projects.
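The downward revision could be sketched numerically. This is purely an illustrative toy model, not anything from the comment or the literature: the function name, the linear penalty shape, and the 10% "comfort share" are all my own assumptions.

```python
# Toy sketch (all numbers hypothetical): discount the expected value of
# a marginal meta donation as the community-wide share of funds already
# going to meta causes grows, to reflect reputational costs.

def discounted_meta_ev(raw_ev, meta_share, comfort_share=0.10):
    """Revise a meta project's expected value downward.

    raw_ev        -- naive expected value of the marginal donation
    meta_share    -- fraction of EA funds currently going to meta causes
    comfort_share -- assumed share above which reputational costs bite
    """
    # Linear penalty: full value at 0% meta spending, zero value once
    # meta spending reaches twice the comfort level.
    penalty = max(0.0, 1.0 - meta_share / (2 * comfort_share))
    return raw_ev * penalty

# A meta project looks half as attractive when meta spending sits at
# the comfort level, and worthless once it reaches double that level.
print(discounted_meta_ev(100.0, 0.10))  # 50.0
print(discounted_meta_ev(100.0, 0.05))  # 75.0
print(discounted_meta_ev(100.0, 0.20))  # 0.0
```

A multi-level model of the kind mentioned above would estimate something like this penalty from data rather than assuming its shape, but the direction of the adjustment is the same.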
Epistemic status: low confidence on both parts of this comment.
On life extension research:
See here and here, and be sure to read Owen’s comments after clicking on the latter link. It’s especially hard to do proper cost-effectiveness estimates on SENS, though, because Aubrey de Grey seems quite overconfident (credence-wise) most of the time. Still, SENS is the best organization I know of that works on anti-aging.
On cryonics:
I suspect that most of the expected value from cryonics comes from the outcomes in which cryonics becomes widely enough available that cryonics organizations are able to lower costs (especially storage costs) substantially. Popularity would also help on the legal side of things: being able to start cooling and perfusion just before legal death could be a huge boon, and earlier cooling is probably the easiest change that could increase the probability of successful cryonics outcomes in general.
Thanks! I’ve never looked into the Brain Preservation Foundation, but since RomeoStevens’ essay, which is linked to in the post you linked to above, mentions it as being potentially a better target of funding than SENS, I’ll have to look into it sometime.
Nice post. Spending resources on self-improvement is generally something EAs shouldn’t feel bad about.
One solution may be different classes of risk aversion. One low-risk class might be dedicated to GiveWell- or ACE-recommended charities, a medium-risk class to metacharities or endeavors of the sort Open Phil might evaluate, and a high-risk class to investing in yourself, the sort of intervention 80,000 Hours might evaluate.
I do intuit that the best high-risk interventions ought to be more cost-effective than the best medium-risk interventions, which ought to be more cost-effective than the best low-risk interventions, such that someone with a given level of risk tolerance might want to mainly fund the best known interventions at a certain level of riskiness. However, since effective philanthropy isn’t an efficient market yet, this needn’t be true.
It seems like there’s a disconnect between EA supposedly being awash in funds on the one hand, and stories like yours on the other.
This line is spot-on. When I look around, I see depressingly many opportunities that look under-funded, and a surplus of talented people. But I suspect that most EAs see a different picture—say, one of nearly adequate funding, and a severe lack of talented people.
This is ok, and should be expected to happen if we’re all honestly reporting what we observe! In the same way that one can end up with only Facebook friends who are more liberal than 50% of the population, so too can one end up knowing many talented people who could be much more effective with funding, since people’s social circles are often surprisingly homogeneous.
Thank you for posting this, Ian; I very much approve of what you’ve written here.
In general, people’s ape-y human needs are important, and the EA movement could become more pleasant (and more effective!) by recognizing this. Your involvement with EA is commendable, and your involvement with the arts doesn’t diminish this.
Ideally, I wouldn’t have to justify the statement that people’s human needs are important on utilitarian grounds, but maybe I should: I’d estimate that I’ve lost a minimum of $1k worth of productivity over the last 6 months that could have trivially been recouped if several less-nice-than-average EAs had shown an average level of kindness to me.
I would be more comfortable with you calling yourself an effective altruist than I would be with you not doing so; if you’re interested in calling yourself an EA, but hesitate because of your interests and past work, that means that we’re the ones doing something wrong.
Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
This is an exceptionally good idea! I suspect that such a panel would be taken the most seriously if you (or other notable EAs) were involved in its creation and/or maintenance, or at least endorsed it publicly.
I agree that the potential for people to harm EA by conducting harmful-to-EA behavior under the EA brand will increase as the movement continues to grow. I also think that the damage caused by such behavior is easy to underestimate, because it is hard to keep track of all of the different ways in which such behavior causes harm.
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don’t think the world is ready for it yet… Another thing is that there could be some unexpected obstacle or Chesterton’s fence we don’t know about yet.
Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the “Status” chapter of Keith Johnstone’s Impro, which contains this quote:
We soon discovered the ‘see-saw’ principle: ‘I go up and you go down’. Walk into a dressing-room and say ‘I got the part’ and everyone will congratulate you, but will feel lowered [in status]. Say ‘They said I was old’ and people commiserate, but cheer up perceptibly… The exception to this see-saw principle comes when you identify with the person being raised or lowered, when you sit on his end of the see-saw, so to speak. If you claim status because you know some famous person, then you’ll feel raised when they are: similarly, an ardent royalist won’t want to see the Queen fall off her horse. When we tell people nice things about ourselves this is usually a little like kicking them. People really want to be told things to our discredit in such a way that they don’t have to feel sympathy. Low-status players save up little tit-bits involving their own discomfiture with which to amuse and placate other people.
Emphasis mine. Of course, a large fraction of EA folks and rationalists I’ve met claim to not be bothered by others bragging about their accomplishments, so I think you’re right that promoting these sorts of discussions about accomplishments among other EAs can be a good idea.
This post was incredibly well done. The fact that no similarly detailed comparison of AI risk charities had been done before you published this makes your work many times more valuable. Good job!
At the risk of distracting from the main point of this article, I’d like to notice the quote:
Xrisk organisations should consider having policies in place to prevent senior employees from espousing controversial political opinions on facebook or otherwise publishing materials that might bring their organisation into disrepute.
This seems entirely right, considering society’s take on these sorts of things. I’d suggest that this should be the case for EA-aligned organizations more widely, since PR incidents caused by one EA-related organization can generate fallout which affects both other EA-related organizations, and the EA brand in general.
Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.
I found 80,000 Hours in early-to-mid 2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do something in a STEM field was a big change from my previous plans. At that point I didn’t yet know which STEM field I wanted to major in; I had only realized that STEM majors generally have better earning potential than philosophy majors.
This ties back into 80,000 Hours because I think I would have liked someone to help me decide which STEM field to go into. As it stands, I can’t find any discussion of choosing a college major on the 80,000 Hours site, though there are a couple of threads on this topic posted to LessWrong. I would like to see an in-depth discussion of major choice as one of the core posts on 80,000 Hours.
Anyhow, I ended up majoring in chemistry because it seemed like one of the toughest things I could major in; I made this decision under the rule of thumb that doing hard things makes you stronger. I probably should have majored in mathematics, because I actually really enjoy math and have gotten good grades in most of my math classes; neither of those things is true of the chemistry classes I have taken. My biggest misconception about major choice was that all STEM majors were roughly equal in how well they prepared you for the job market. Looking back, I feel that CS and math are two of the best choices for earning to give, followed by engineering and then biology, with chemistry and physics the two worst options for students interested in earning to give. Of course, YMMV, and people with physics degrees do go into quantitative finance, but I do think that not all STEM majors are equally useful for earning to give.
The second thing I would like to mention is that, from my point of view, 80,000 Hours seems very elitist. I don’t mean this in a bad way, really, I don’t, but it is hard to be in the top third of mathematics graduates from an Ivy League university. The first time I had a face-to-face conversation with an effective altruist who had been inspired by 80,000 Hours, I told them that I was planning on doing important scientific research, and they just gave me a look and asked why I wasn’t planning on going into one of the more lucrative earning-to-give careers.
I am sure that this person is a good person, but the episode makes me wonder whether 80,000 Hours’ top careers page should include more jobs for very smart people who aren’t quite ready to go into quantitative finance or strategic consulting. Specifically, mechanical, chemical, and electrical engineering, as well as the actuarial sciences, could be acceptable fields to go into for earning to give.