Nice work, and looks like a good group of advisors!
Daniel_Dewey
Re: donation: I’d personally feel best about donating to the Long-Term Future EA Fund (not yet ready, I think?) or the EA Giving Group, both managed by Nick Beckstead.
Thanks for recommending a concrete change in behavior here!
I also appreciate the discussion of your emotional engagement / other EAs’ possible emotional engagement with cause prioritization—my EA emotional life is complicated, I’m guessing others have a different set of feelings and struggles, and this kind of post seems like a good direction for understanding and supporting one another.
ETA: personally, it feels correct when the opportunity arises to emotionally remind myself of the gravity of the ER-triage-like decisions that humans have to make when allocating resources. I can do this by celebrating wins (e.g. donations / grants others make, actual outcomes) as well as by thinking about how far we have to go in most areas. It’s slightly scary, but makes me more confident that I’m even-handedly examining the world and its problems to the best of my abilities and making the best calls I can, and I hope it keeps my ability to switch cause areas healthy. I’d guess this works for me partially because those emotions don’t interfere with my ability to be happy / productive, and I expect there are people whose feelings work differently and who shouldn’t regularly dwell on that kind of thing :)
I agree that if engagement with the critique doesn’t follow those words, they’re not helpful :) Editing my post to clarify that.
The pledge is really important to me as a part of my EA life and (I think) as a part of our community infrastructure, and I find your critiques worrying. I’m not sure what to do, but I appreciate you taking the critic’s risk to help the community. Thank you!
This is a great point—thanks, Jacob!
I think I tend to expect more from people when they are critical—i.e. I’m fine with a compliment/agreement that someone spent 2 minutes on, but I expect critics to “do their homework”, and if a complimenter and a critic were equally underinformed and unthoughtful, I’d judge the critic more harshly. This seems bad!
One response is “poorly thought-through criticism can spread through networks; even if it’s responded to in one place, people cache and repeat it other places where it’s not responded to, and that’s harmful.” This applies equally well to poorly thought-through compliments; maybe the unchallenged-compliment problem is even worse, because I have warm feelings about this community and its people and orgs!
Proposed responses (for me, though others could adopt them if they thought they’re good ideas):
For now, assume that all critics are acting in good faith. (If we have or end up with a bad-critic problem, these responses will need to be revised; I’ll assume for now that the asymmetry of critique is the bigger problem.)
When responding to critiques, thank the critic in a sincere, non-fake way, especially when I disagree with the critique (e.g. “Though I’m about to respond with how I disagree, I appreciate you taking the critic’s risk to help the community. Thank you! [response to critique]”)
Agree or disagree with critiques in a straightforward way, instead of saying e.g. “you should have thought about this harder”.
Couch compliments the way I would couch critiques.
Try to notice my disagreements with compliments, and comment on them if I disagree.
Thoughts?
Thanks!
I think parts of academia do this well (although other parts do it poorly, and I think it’s been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it’s still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.
One guess is that ritualization in academia helps with this—if you say something in a talk or paper, you ritually invite criticism, whereas I’d be surprised to see people apply the same norms to e.g. a prominent researcher posting on facebook. (Maybe they should apply those norms, but I’d guess they don’t.)
Unfortunately, it’s not obvious how to get the same benefits in EA.
Prediction-making in my Open Phil work does feel like progress to me, because I find making predictions and writing them down difficult and scary, indicating that I wasn’t doing that mental work as seriously before :) I’m quite excited to see what comes of it.
I have very mixed feelings about Sarah’s post; the title seems inaccurate to me, and I’m not sure about how the quotes were interpreted, but it’s raised some interesting and useful-seeming discussion. Two brief points:
I understand what causes people to write comments like “lying seems bad but maybe it’s the best thing to do in some cases”, but I don’t think those comments usually make useful points (they typically seem pedantic at best and edgy at worst), and I hope people aren’t actually guided by considerations like those. Most EAs I work with, AFAICT, strive to be honest about their work and believe that this is the best policy even when there are prima facie reasons to be dishonest. Maybe it’s worth articulating some kind of “community-utilitarian” norms, probably drawing on rule utilitarianism, to explain why I think honesty is the best policy?
I think the discussion of what “pledge” means to different people is interesting; a friend pointed out to me that blurring the meaning of “pledge” into something softer than an absolute commitment could hurt my ability to make absolute commitments in the future, and I’m now considering ways to be more articulate about the strength of different commitment-like statements I make. Maybe it’s worth picking apart and naming some different concepts, like game-theoretic cooperation commitments, game-theoretic precommitments (e.g. virtues adopted before a series of games is entered), and self-motivating public statements (where nobody else’s decisions lose value if I later reverse my statement, but I want to participate in a social support structure for shared values)?
I’m really glad you posted this! I’ve found it helpful food for thought, and I think it’s a great conversation for the community to be having.
For many Americans, income taxes might go down; probably worth thinking about what to do with that “extra” money.
You’re welcome :) Glad you liked it!
Thanks for mentioning this—I totally see what you’re pointing at here, and I think you make valid points re: there always being more excuses later.
I just meant to emphasize that “giving now feels good” wasn’t something I was prepared to justify in terms of its actual impact on the world; if I found out that this good feeling was justified in terms of impact, that’d be great, but if it turned out that I could give up that good feeling in order to have a better impact, I’d try my best to do so.
Thanks Milan!
I haven’t thought a lot about that, and might be making the wrong call. Off the top of my head:
There’s a community norm toward donating 10%, and I’m following that without thinking too hard.
I expect donation effectiveness on the scale of my donations to get worse over time, so giving earlier at the cost of giving a little (?) less over my career seems like it might be better.
Giving feels good in a way that paying debt doesn’t. This isn’t an EA reason :)
I guess I could put my 10% toward debt reduction instead—if you or anyone else has pointers to info that might cause me to decide to do that, I’d be interested in seeing it, and in promoting it so that other debt-saddled EAs can make better decisions!
Thanks! I’ll check it out.
How I missed my pledge and how I’m fixing it
I was glad to see this article—I think it’s a very interesting issue, and generally want to encourage people to bring up this kind of thing so that we can continue to look for more effective causes and beneficiary groups. Nice work!
I didn’t find the presentation unpleasant, personally, but I have a high tolerance for being opinionated, and it’s been helpful to see others’ reactions in the comments.
That’s great, thanks for letting me know! Score one for posting on fora :)
Since the groups above seem to exhaust the space of beneficiaries (if what we care about is well-being), we can’t expect to get more effectiveness improvements in this way. In future, such improvements will have to come from finding new interventions, or intervention types.
Though I think the conclusion may well be correct, this argument doesn’t seem valid to me. Thinking about it more produced some ideas I found interesting.
Imagine that we instead had only one group of beneficiaries: all conscious beings. We could run the same argument—this group exhausts all possible beneficiaries, etc.—and conclude that discovering new beneficiary groups isn’t helpful. However, breaking down “conscious beings” into present and future groups, and breaking down further into humans and animals, has in fact been very helpful, so we would have been wrong to stop looking for beneficiary groups.
From where we stand now, I can imagine discovering more useful beneficiary groups by breaking down the three you highlight further. Arguably, this is what happened with people in extreme poverty: they are a very help-able subgroup. Similarly, factory-farmed animals seem to be a very help-able subgroup of non-human animals, and maybe chickens are the most-help-able. Maybe the discovery of more very help-able subgroups, e.g. subgroups of future conscious beings (artificial beings? future animals?) or subgroups of wild animals (species that suffer a lot in the wild?), will lead to big EA breakthroughs in the future.
Of course, which groups are help-able basically depends on the interventions available, so splitting EA research into “find new beneficiary groups” and “find new interventions” is a blurry distinction.
Thanks for putting StrongMinds on my radar!