LW is a rape cult
If you wouldn’t bail out a bank then would you bail out EA?
It updates me in the direction of thinking that the right queries can produce a significant amount of valuable material if we can reduce the friction involved in answering them (especially perfectionism) and thus get dialogues going.
Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?
It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.
Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.
It makes some sense that there could be gaps which Open Phil isn’t able to fill, even if Open Phil thinks those opportunities are no less effective than the ones it’s funding instead. Was that what was meant here, or am I missing something? Either way, I wonder what such a funding gap for a cost-effective opportunity might look like in practice (an example would help).
Part of me keeps insisting that it’s counterintuitive for Open Phil to have trouble making as many grants as it would like while also employing people who will manage an EA Fund. Naively, I’d expect at least some tradeoff between producing new suggestions for things the EA Fund might fund and new things that Open Phil might fund. I suspect you’re already thinking closely about this, and I’d be happy to hear everyone’s thoughts.
Edit: I’d meant to express general confidence in those who had been selected as fund managers. Also, I have strong positive feelings about epistemic humility in general, which also seems highly relevant to this project.
Thank you! I really admired how compassionate your tone was throughout all of your comments on Sarah’s original post, even when I felt that you were under attack. That was really cool. <3
I’m from Berkeley, so the community here is big enough that different people have definitely had different experiences than me. :)
I should add that I’m grateful for the many EAs who don’t engage in dishonest behavior, and that I’m equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they’d previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.
This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that’s indistinguishable from how a dedicated EA might act—but it’s not a part of my identity anymore.
I’ve also met plenty of great EAs, and it’s a shame that the poor interactions I’ve had overshadow the many good ones.
Part of what disturbs me about Sarah’s post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromise on honesty and act non-cooperatively more in person than online. I’m sure that others have had better experiences, so if this isn’t as prevalent in your experience, I’m glad! It’s just that I could have used stronger examples if I had written the post, instead of Sarah.
I’m not comfortable sharing examples that might make people identifiable. I’m too scared of social backlash to even think about whether outing specific people and organizations would even be a utilitarian thing for me to do right now. But being laughed at for being an “Effective Kantian” because you’re the only one in your friend group who wasn’t willing to do something illegal? That isn’t fun. Listening to hardcore EAs approvingly talk about how other EAs have manipulated non-EAs for their own gain, because doing so might conceivably lead them to donate more if they had more resources at their disposal? That isn’t inspiring.
Since there are so many separate discussions surrounding this blog post, I’ll copy my response from the original discussion:
I’m grateful for this post. Honesty seems undervalued in EA.
An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s easier to think about (and quantify!), say, the utility the movement might get from having more peripheral-to-EA donors, than it is to think about the utility the movement would get from not pushing away would-be EAs who care about honesty.
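As a rough illustration of the kind of comparison I have in mind (a minimal sketch; every number below is a made-up placeholder, not an estimate I’d defend):

```python
# Toy expected-value comparison for strategic dishonesty vs. honesty.
# All numbers are invented placeholders, purely for illustration.

# The easy-to-quantify, direct upside of a dishonest pitch:
p_extra_donor = 0.10           # chance the pitch lands an extra peripheral donor
value_extra_donor = 5_000      # donations gained if it does, in dollars

# The harder-to-quantify downside: pushing away honesty-valuing would-be EAs.
p_alienate = 0.02              # chance the dishonesty visibly backfires
value_lost_supporter = 50_000  # long-run value of a serious would-be EA who walks away

ev_dishonesty = p_extra_donor * value_extra_donor - p_alienate * value_lost_supporter
print(ev_dishonesty)  # -500.0: negative even before counting harder-to-model costs
```

The point isn’t the particular numbers; it’s that the first two are much easier to think about than the last two, which is exactly where the bias toward dishonesty creeps in.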
I’ve [rarely] been confident enough to publicly say anything when I’ve seen EAs and ostensibly-EA-related organizations acting in a way that I suspect is dishonest enough to cause significant net harm. I think that I’d be happy if you linked to this post from LW and the EA forum, since I’d like for it to be more socially acceptable to kindly nudge EAs to be more honest.
Good Ventures recently announced that it plans to increase its grantmaking budget substantially (yay!). Does this affect anyone’s view on how valuable it is to encourage people to take the GWWC pledge on the margin?
It’s worth pointing out past discussions of similar concerns with similar individuals.
I’d definitely be happy for you to expand on how any of your points apply to AMF in particular, rather than aid more generally; constructive criticism is good. However, as someone who’s been around since the last time we had this discussion, I’m failing to find any new evidence in your writing—even qualitative evidence—that what AMF is doing is any less effective than I’d previously believed. Maybe you can show me more, though?
Thanks for the post.
This post was incredibly well done. The fact that no similarly detailed comparison of AI risk charities had been done before you published this makes your work many times more valuable. Good job!
At the risk of distracting from the main point of this article, I’d like to notice the quote:
Xrisk organisations should consider having policies in place to prevent senior employees from espousing controversial political opinions on facebook or otherwise publishing materials that might bring their organisation into disrepute.
This seems entirely right, considering society’s take on these sorts of things. I’d suggest that this should be the case for EA-aligned organizations more widely, since PR incidents caused by one EA-related organization can generate fallout which affects both other EA-related organizations, and the EA brand in general.
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don’t think the world is ready for it yet… Another thing is that there could be some unexpected obstacle or Chesterton’s fence we don’t know about yet.
Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the “Status” chapter of Keith Johnstone’s Impro, which contains this quote:
We soon discovered the ‘see-saw’ principle: ‘I go up and you go down’. Walk into a dressing-room and say ‘I got the part’ and everyone will congratulate you, but will feel lowered [in status]. Say ‘They said I was old’ and people commiserate, but cheer up perceptibly… The exception to this see-saw principle comes when you identify with the person being raised or lowered, when you sit on his end of the see-saw, so to speak. If you claim status because you know some famous person, then you’ll feel raised when they are: similarly, an ardent royalist won’t want to see the Queen fall off her horse. When we tell people nice things about ourselves this is usually a little like kicking them. People really want to be told things to our discredit in such a way that they don’t have to feel sympathy. Low-status players save up little tit-bits involving their own discomfiture with which to amuse and placate other people.
Emphasis mine. Of course, a large fraction of EA folks and rationalists I’ve met claim to not be bothered by others bragging about their accomplishments, so I think you’re right that promoting these sorts of discussions about accomplishments among other EAs can be a good idea.
Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.
This is an exceptionally good idea! I suspect that such a panel would be taken the most seriously if you (or other notable EAs) were involved in its creation and/or maintenance, or at least endorsed it publicly.
I agree that the potential for people to harm EA by behaving badly under the EA brand will increase as the movement continues to grow. I also think that the damage caused by such behavior is easy to underestimate, because it’s hard to keep track of all the different ways in which it causes harm.
Thank you for posting this, Ian; I very much approve of what you’ve written here.
In general, people’s ape-y human needs are important, and the EA movement could become more pleasant (and more effective!) by recognizing this. Your involvement with EA is commendable, and your involvement with the arts doesn’t diminish this.
Ideally, I wouldn’t have to justify the statement that people’s human needs are important on utilitarian grounds, but maybe I should: I’d estimate that I’ve lost a minimum of $1k worth of productivity over the last 6 months that could have trivially been recouped if several less-nice-than-average EAs had shown an average level of kindness to me.
I would be more comfortable with you calling yourself an effective altruist than I would be with you not doing so; if you’re interested in calling yourself an EA, but hesitate because of your interests and past work, that means that we’re the ones doing something wrong.
It seems like there’s a disconnect between EA supposedly being awash in funds on the one hand, and stories like yours on the other.
This line is spot-on. When I look around, I see depressingly many opportunities that look under-funded, and a surplus of talented people. But I suspect that most EAs see a different picture—say, one of nearly adequate funding, and a severe lack of talented people.
This is ok, and should be expected to happen if we’re all honestly reporting what we observe! In the same way that one can end up with only Facebook friends who are more liberal than 50% of the population, so too can one end up knowing many talented people who could be much more effective with funding, since people’s social circles are often surprisingly homogeneous.
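As a toy illustration of how strong this effect can be (a sketch only; the homophily parameter and population split are invented), two honest observers embedded in different circles can end up reporting very different funding/talent pictures:

```python
import random

random.seed(0)

# Toy population: half of the people one could meet mostly see 'funding'-constrained
# opportunities locally, half mostly see 'talent'-constrained ones (invented 50/50 split).
population = ['funding'] * 500 + ['talent'] * 500

def sample_circle(my_type, homophily=0.8, size=50):
    """Draw a social circle where each contact matches my own type with
    probability `homophily` (an invented parameter), else is drawn at random."""
    circle = []
    for _ in range(size):
        if random.random() < homophily:
            circle.append(my_type)
        else:
            circle.append(random.choice(population))
    return circle

# Two honest observers, two very different pictures of the same world:
funding_observer = sample_circle('funding')
talent_observer = sample_circle('talent')
print(funding_observer.count('funding') / len(funding_observer))  # roughly 0.9
print(talent_observer.count('funding') / len(talent_observer))    # roughly 0.1
```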
Nice post. Spending resources on self-improvement is generally something EAs shouldn’t feel bad about.
One solution may be different classes of risk aversion. One low-risk class might be dedicated to GiveWell- or ACE-recommended charities, another to metacharities or endeavors of the kind Open Phil might evaluate, and another, high-risk class to yourself, the kind of intervention 80,000 Hours might evaluate.
I do intuit that the best high-risk interventions ought to be more cost-effective than the best medium-risk interventions, which ought to be more cost-effective than the best low-risk interventions, such that someone with a given level of risk tolerance might want to mainly fund the best known interventions at a certain level of riskiness. However, since effective philanthropy isn’t an efficient market yet, this needn’t be true.
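To make the idea concrete, here’s a minimal sketch of what splitting a donation budget across such risk classes might look like; the budget and the percentages are arbitrary placeholders, not recommendations:

```python
# Toy split of an annual donation budget across the risk classes described above.
# Both the budget and the shares are placeholders; adjust them to your own risk tolerance.

budget = 10_000  # annual donation budget in dollars (placeholder)

risk_classes = {
    'low risk (GiveWell / ACE recommendations)':         0.60,
    'medium risk (metacharities, Open Phil-style)':      0.30,
    'high risk (yourself / 80,000 Hours-style options)': 0.10,
}

for name, share in risk_classes.items():
    print(f'{name}: ${budget * share:,.0f}')
```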
Thanks! I’ve never looked into the Brain Preservation Foundation, but since RomeoStevens’ essay (linked from the post you mention above) mentions it as potentially a better target of funding than SENS, I’ll have to look into it sometime.
Epistemic status: low confidence on both parts of this comment.
On life extension research:
See here and here, and be sure to read Owen’s comments after clicking on the latter link. It’s especially hard to do proper cost effectiveness estimates on SENS, though, because Aubrey de Grey seems quite overconfident (credence-wise) most of the time. SENS is still the best organization I know of that works on anti-aging, though.
On cryonics:
I suspect that most of the expected value from cryonics comes from the outcomes in which cryonics becomes widely enough available that cryonics organizations are able to lower costs (especially storage costs) substantially. Popularity would also help on the legal side of things: being able to start cooling and perfusion just before legal death could be a huge boon, and earlier cooling is probably the easiest thing that could be done to increase the probability of successful cryonics outcomes in general.
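A minimal sketch of why the ‘widely available’ outcomes might dominate the expected value, with every probability and value below invented purely for illustration:

```python
# Toy expected-value decomposition for cryonics outcomes.
# All probabilities and values are made-up placeholders.

scenarios = {
    # scenario: (probability of scenario, probability of revival given scenario, value of success)
    'niche: high storage costs, cooling only after legal death': (0.90, 0.02, 1_000_000),
    'widespread: cheap storage, cooling starts just before legal death': (0.10, 0.20, 1_000_000),
}

for name, (p_scenario, p_revival, value) in scenarios.items():
    ev = p_scenario * p_revival * value
    print(f'{name}: expected value ~ ${ev:,.0f}')
# The 'widespread' scenario contributes about as much expected value as the
# 'niche' one ($20,000 vs. $18,000 here) despite being ten times less likely.
```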
You mention that far-meta concerns with high expected value deserve lots of scrutiny, and this seems correct. I guess that you could use a multi-level model to penalize the most meta of concerns and calculate new expected values for the different things you might fund, but maybe even that wouldn’t be sufficient.
It seems like funding a given meta activity on the margin should be given less consideration (i.e. your calculated expected value for funding that thing should be revised further downwards) if x% of the charitable funds being spent by EAs are already going to meta causes, and more consideration if only, say, 0.5x% of those funds are going to meta causes. This makes sense because of reputational effects: it looks weird to new EAs if too much is being spent on meta projects.
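One crude way to write down the adjustment I’m gesturing at (a sketch only; the reference fraction and the simple linear form are both arbitrary modelling choices):

```python
# Toy downward revision of a meta project's expected value based on how much of the
# community's charitable spending already goes to meta causes. The reference fraction
# x and the functional form are placeholders, not a worked-out model.

def adjusted_ev(raw_ev, current_meta_fraction, reference_fraction=0.10):
    """Scale the raw expected value down when meta spending exceeds the reference
    fraction x, and up when it sits well below it."""
    return raw_ev / (current_meta_fraction / reference_fraction)

print(adjusted_ev(100, current_meta_fraction=0.10))  # 100.0 (meta spending at x)
print(adjusted_ev(100, current_meta_fraction=0.20))  # 50.0  (meta spending at 2x)
print(adjusted_ev(100, current_meta_fraction=0.05))  # 200.0 (meta spending at 0.5x)
```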
I’d be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I’ve thought a lot about how we can support each other. I’m not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though.
Speculation ahoy:
1) I wonder if, say, Bay Area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off things that typically work best for us differ in important ways from the things that work for most people.
2) Also, something about the way status works in the social climate of the EA/LW Bay Area community is unusual, and more toxic than how status works in more typical social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked).
3) I wonder how much good work could be done on anyone’s mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I’ve never tried something like this before, but I’d eventually like to.
Well, writing that comment was a journey. I doubt I’ll stand by all of what I’ve written here tomorrow morning, but I do think that I’m correct on some points, and that I’m pointing in a few valuable directions.