Thank you for writing this—a lot of what you say here resonates strongly with me, and captures well my experience of going from very involved in EA back in 2012-14 or so, to much more actively distancing myself from the community for the last few years. I’ve tried to write about my perspective on this multiple times (I have so many half written Google docs) but never felt quite able to get to the point where I had the energy/clarity to post something and actually engage with EA responses to it. I appreciate this post and expect to point people to it sometimes when trying to explain why I’m not that involved in or positive about EA anymore.
Jess_Whittlestone
I also interpreted this comment as quite dismissive but I think most of that comes from the fact Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).
I think I naturally interpret a downvote as meaning “I think this post/comment isn’t helpful and I generally want to discourage posts/comments like it.” That seems pretty harsh in this case, and at odds with the fact Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit concerned about the CEO of CEA seeming to discourage posts which suggest EA should be doing things differently, especially where they are reasonable and constructive like this one.
This is a minor point in some ways but I think explicitly stating “I downvoted this post” can say quite a lot (especially when coming from someone with a senior position in the community). I haven’t spent a lot of time on this forum recently so I’m wondering if other people think the norms around up/downvoting are different to my interpretation, and in particular whether Max you meant to use it differently?
[EDIT: I checked the norms on up/downvoting, which say to downvote if either “There’s an error”, or “The comment or post didn’t add to the conversation, and maybe actually distracted.” I personally think this post added something useful to the conversation about the scope and focus of EA, and it seems harsh to downvote it because it conflated a few different dimensions—and that’s why Max’s comment seemed a bit harsh/dismissive to me]
Firstly, I very much appreciate the grant made by the LTF Fund! On the discussion of the paper by Stephen Cave & Seán Ó hÉigeartaigh in the addenda, I just wanted to briefly say that I’d be happy to talk further about both: (a) the specific ideas/approaches in the paper mentioned, and also (b) broader questions about CFI and CSER’s work. While there are probably some fundamental differences in approach here, I also think a lot may come down to misunderstanding/lack of communication. I recognise that both CFI and CSER could probably do more to explain their goals and priorities to the EA community, and I think several others beyond myself would also be happy to engage in discussion.
I don’t think this is the right place to get into that discussion (since this is a writeup of many grants beyond my own), but I do think it could be productive to discuss elsewhere. I may well end up posting something separate on the question of how useful it is to try and “bridge” near-term and long-term AI policy issues, responding to some of Oli’s critique—I think engaging with more sceptical perspectives on this could help clarify my thinking. Anyone who would like to talk/ask questions about the goals and priorities of CFI/CSER more broadly is welcome to reach out to me about that. I think those conversations may be better had offline, but if there’s enough interest maybe we could do an AMA or something.
I’d be keen to hear a bit more about the general process used for reviewing these grants. What did the overall process look like? Were applicants interviewed? Were references collected? Were there general criteria used for all applications? Reasoning behind specific decisions is great, but it also risks giving the impression that the grants were made just based on the opinions of one person, and that different applications might have gone through somewhat different processes.
Thanks for your detailed response Ollie. I appreciate there are tradeoffs here, but based on what you’ve said I do think that more time needs to be going into these grant reviews.
I don’t think it’s unreasonable to suggest that it should require 2 people full time for a month to distribute nearly $1,000,000 in grant funding, especially if the aim is to find the most effective ways of doing good/influencing the long-term future. (Though I recognise that this decision isn’t your responsibility personally!) Maybe it is very difficult for CEA to find people with the relevant expertise who can do that job. But if that’s the case, then I think there’s a bigger problem (the job isn’t being paid well enough, or being valued highly enough by the community), and maybe we should question the case for EA Funds distributing so much money.
The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I had not available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)
I’m pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA funds should not be hiring fund managers who don’t have sufficient time to vet applications from people they don’t already know—being able to do this should be a requirement of the job, IMO. Seconding Peter’s question below, I’d be keen to hear if there are any plans to make progress on this.
If you really don’t have time to vet applicants, then maybe grant decisions should be made blind, purely on the basis of the quality of the proposal. Another option would be to have a more structured/systematic approach to vetting applicants themselves, which could be anonymous-ish: based on past achievements and some answers to questions that seem relevant and important.
This may be a bit late, but: I’d like to see a bit more explanation/justification of why the particular grants were chosen, and how you decided how much to fund—especially when some of the amounts are pretty big, and there’s a lot of variation among the grants. e.g. £60,000 to revamp LessWrong sounds like a really large amount to me, and I’m struggling to imagine what that’s being spent on.
Did SlateStarCodex even exist before 2009? I’m sceptical—the post archives only go back to 2013: http://slatestarcodex.com/archives/. Maybe not a big deal but does suggest at least some of your sample were just choosing options randomly/dishonestly.
If I could wave a magic wand it would be for everyone to gain the knowledge that learning and implementing new analytical techniques cost spoons, and when a person is bleeding spoons in front of you you need a different strategy.
I strongly agree with this, and I hadn’t heard anyone articulate it quite this explicitly—thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn’t always “use this CFAR technique.”
(I think CFAR are great and a lot of their techniques are really useful. But I’ve also spent a bunch of time feeling bad about the fact that I don’t seem able to learn and implement these techniques in the way many other people seem to, and it’s taken me a long time to realise that trying to ‘figure out’ how to fix my problems in a very analytical way is very often not what I need.)
Thanks for writing this Roxanne, I agree that this is a risk—and I’ve also cringed sometimes when I’ve heard EAs say they “don’t care” about certain things. I think it’s good to highlight this as a thing we should be wary of.
It reminds me a bit of how in academia people often say, “I’m interested in x”, where x is some very specific, niche subfield, implying that they’re not interested in anything else—whereas what they really mean is, “x is the focus of my research.” I’ve found myself saying this wrt my own research, and then often caveating, “actually, I’m interested in a tonne of wider stuff, this is just what I’m thinking about at the moment!” So I’d like it if the norm in EA were more towards saying things like, “I’m currently focusing on/working on/thinking about x” rather than, “I care about x”.
If you haven’t tried just avoiding eggs, it seems worth at least trying.
Yeah, that seems right!
I don’t understand the “completely trivial difference” line. How do you think it compares to the quality of life lost by eating somewhat cheaper food? For me, the cheaper food is much more cost-effective, in terms of world-bettering per unit of foregone joy.
I think this is probably just a personal thing—for me I think eating somewhat cheaper food would be worse in terms of enjoyment than cutting out dairy. The reason I say it’s a basically trivial difference is that, while I enjoy dairy products, I don’t think I enjoy them more than I enjoy various other foods—they’re just another thing that I enjoy. So given that I can basically replace all the non-vegan meals I would normally have with vegan meals that I like as much (which requires some planning, of course), then I don’t think there will be much, if any, difference in my enjoyment of food over time. I also think that even a very small difference in the pleasure I get from eating dairy vs vegan food would be trivial in terms of my happiness/enjoyment over my life as a whole, or even any day as a whole—I don’t think I’d ever look back on a day and think “Oh, my enjoyment of that day would have been so much greater if I’d eaten cheese.” I enjoy food, but it’s not that big a deal relative to a lot of other more important things.
Regarding willpower: If you maintain a vegan diet for a few months, it will probably stop requiring willpower since you will stop thinking of animal products as an option that you have available. This has been my experience and the experience of lots of other vegans, although it’s probably not universal.
Yeah, my experience previously has been that the willpower required mostly decreases over time—there was definitely a time a while ago when the thought of buying and eating eggs seemed kind of absurd to me. This was slightly counterbalanced by sometimes getting odd cravings for animal products, though. I think that if I put conscious effort into developing negative associations around animal products, I could probably end up in a situation where it took zero willpower. That would obviously take effort though.
Does it take willpower for you to be vegetarian? If not, then it probably won’t take willpower for you to be vegan either once you get used to it.
No, being vegetarian takes zero willpower for me, but I was raised vegetarian, so I have hardly eaten any meat in my entire life, so I have very little desire to eat it—and even an aversive reaction to a lot of meat. (Which I’m very grateful to my parents for!)
I like the idea of counting non-vegan meals, that sounds great. Maybe I’ll beemind it… then I’d have an incentive to keep it low, but I don’t have to be absolute about it. Diana told me that whenever she eats something non-vegan she makes a donation to an animal welfare charity—I like that idea too.
The way I see this is that getting from 85% to 100% is probably the most costly part for me (most inconvenience, most social cost), and I am getting the vast majority of the benefit with very little of the cost. I do feel uncomfortable with that 15%, though. I think I will continue until September, and then reassess after a year, maybe getting closer to 100% with new rules.
Yeah, I think that’s right. It’s quite possible that the main downside of not going 100% vegan is just the discomfort that you end up feeling about it! (And that in particular this is larger than any actual consequences, especially if you’re mostly eating dairy.)
Yeah, I think lacto-vegetarianism is probably 95% of the way in terms of impact on animal suffering anyway (or even more.) As I said above, for me the main reason for cutting out dairy too is that I think if I eat dairy I might be more likely to slip into eating eggs too down the line. But it’s possible I could just protect against that by setting more solid rules in place etc.
Yeah, good point. I’m definitely a lot less concerned about eating dairy than I am eggs. The main reason for lumping them together is that I think I’d find it quite a bit easier psychologically to be “vegan” than to be someone who “doesn’t eat eggs”, and I think I’d be more likely to keep it up, but it’s possible that’s more malleable than I think.
I’m not totally convinced that not eating dairy will make my life worse in any nontrivial way, though. I enjoy eating cheese, sure, but it’s not an experience that’s unlike any other. I’m pretty sure that the difference in enjoyment in a life in which I eat dairy products and one in which I don’t will basically be completely trivial.
Should I be vegan?
Ah, thanks for pointing these things out! I didn’t realise either of these things—admittedly, I didn’t have as much time as I would have liked to research the historical facts for this. A lot of these points were taken from some top posts on a Quora thread about progress over the past few centuries, and I was (perhaps naively) hoping that crowdsourcing would give me fairly accurate info. Anyway, I was thinking of writing a more detailed article about human progress at some point, so I’ll definitely try to do a bit more research and take these points into account—thanks for flagging my errors/sloppiness!
Hope: How Far Humanity Has Come
Yeah, I think it was a really good thing to prompt discussion of, the post just could have been framed a little better to make it clear you just wanted to prompt discussion. Please don’t take this as a reason to stop posting though! I’d just take it as a reason to think a little more about your tone and whether it might appear overconfident, and try and hedge or explain your claims a bit more. It’s a difficult thing to get exactly right though and I think something all of us can work on.
Thanks Peter—I continue to feel unsure whether it’s worth the effort for me to do this, and am probably holding myself to an unnecessarily high standard, but it’s hard to get past that. At the same time, I also haven’t been able to totally give up on the idea of writing something—I do have a recent draft I’ve been working on that I’d be happy to share with you.
I thought about the criticism contest, but I think trying to enter that creates the wrong incentives for me. It makes me feel like I need to write a super well-reasoned and evidenced critique which feels too high a bar and if I’m going to write anything, something that I can frame more as my own subjective experience feels better. Also, if I entered and didn’t win a prize I might feel more bitter about EA, which I’d rather avoid—I think if I’m going to write something it needs to be with very low expectations about how EAs are going to respond to it.