I’m a freelance writer and editor for the EA community. I can help you edit drafts and write up your unwritten ideas. If you’d like to work with me, book a short Calendly meeting or email me at ambace@gmail.com. Website with more info: https://amber-dawn-ace.com/
Amber Dawn
This is a fascinating idea! I have a question though. I’m not exactly sure why (2) (women having more chronic pain, lower pain tolerance, etc.) is evidence for this. Is the idea that women in the ancestral environment were more in need of assistance (e.g. because they were physically weaker, or made more vulnerable by bearing/raising children), and therefore evolved more capacity to feel (and thus express) pain?
I really enjoyed this post, thank you! As a non-STEM-y person in EA, I relate to lots of this. I’ve picked up a lot of the ‘language of EA’ (and indeed one of the things I like about EA is that I’ve learnt lots of STEM-y concepts from it!), but I did in fact initially ‘bounce off’ EA, and might never have got involved if I hadn’t continued to hear about it. I’ve also worried about people unfairly dismissing me because the ‘point’ of my field (ancient philosophy) is not obvious to STEM-y EAs.
A note on ‘assessing promisingness’: a recent Forum post on Introductory Fellowships mentioned that at some universities, organizers sort fellows into cohorts according to perceived ‘promisingness’. This bothered me. I think part of what bothered me was egalitarian intuitions, but part of it was an awareness that I might be unfairly assessed as ‘unpromising’ because my capacities and background are less legibly useful to EAs than other people’s.
[this is a comment about the post/project, not an answer to the question about moral discounting]
I’m curious—when talking to people new to EA, have you heard that question a lot, in those words and terms?
I’m asking because—and I might be typical-minding here—I’d be surprised if most people who are new to longtermism have the explicit belief ‘people in the future have less moral value than people in the present’. In particular, the language of moral discounting sounds very EA-ish to me. I imagine that if you asked most people who are sceptical of longtermism ‘so do future people have less moral value than present people?’, they’d be like ‘of course not, but [insert other argument for why it nonetheless makes more sense to focus on the present]’.
(Analogously, imagine an EA having a debate with someone who thinks that we should focus on helping people in our local communities. At one point the EA says ‘so, do you think that people in other countries have less moral value than people in your community?’
I find it hard to imagine that the local-communitarian would say ‘yeah! Screw people in other countries!’ [even if from an EA perspective, their actions and beliefs would seem to entail this attitude]
I find it more likely that they would say something like ‘of course people everywhere have moral value, but it’s my job to help people in my community, and people in other countries should be helped by people in their own communities’. And they might give further reasons for why they think this.)
I’m really glad that you want to support EA-adjacent writers and spread EA ideas to a wider audience. I think this is crucially important work and I’m really happy that you’re taking it seriously. This prize has given me a nudge to take my own EA-adjacent blogging more seriously!
Like many others, I have concerns about the amount. I think it’s overkill and, as others have said, it may be easier for the privileged to take a gamble on winning the prize, while great writers who don’t have the option of cutting down their working hours will still be neglected.
Another concern that others haven’t mentioned is PR. I don’t think EAs always need to be super ‘image-focussed’ and paranoid about PR, and indeed sometimes we skew too far in that direction. But some concern seems appropriate here, because part of the aim of the project is to spread EA ideas to people who are not already in the movement. I think if one of the first things I heard about EA was ‘this is a movement whose stated aim is to spend money super efficiently to do the most good, and they just spent $500,000 paying people in/adjacent to their community to write blogs that are vaguely supportive of their community’, that would seem suss to me. It seems cronyish. Of course, *I* can easily believe that good blogs could create way more than $500,000 of value by bringing people into the movement, improving decision-making, etc. But that involves *already* thinking in very EA ways and trusting the community to be acting in good faith and not just trying to enrich their friends.

As an alternative way of incentivizing good writing: a thought I’ve often had is making a Google Doc of all the blog posts that “live rent free” in my head—blogs whose main idea has seeped into my consciousness, blogs that I constantly recommend when certain topics come up. I bet many EAs, if they introspect, have an internal list of blog posts like this. You could ask a large-ish number of trusted people which specific blog posts have been most influential for them, and grant awards for blogs that are cited by many people (or offer to pay those bloggers to blog full-time for a while, if they want). If you are interested in funding more popularizing writing, you could choose people who are newer to the movement or more ‘adjacent’, rather than hardcore EAs who will choose something niche.
Thank you so much for posting this! I really appreciate it when EAs talk about their mental health and emotional wellbeing struggles. What we are doing is super intense and a lot of us go through stuff like this. I missed most of my Sunday conference plans because of my mental health, and I think this was a good decision since I organized one of the afterparties and I wouldn’t have made it through that if I hadn’t rested. I’ve been pretty tired this whole week.
I’ve had lots of situations where, like you, I felt bad enough that I needed to cancel my plans, but, because I felt so emotionally distressed, cancelling those plans felt like the worst thing in the world. Over the years I’ve become better at realising that lots of the time, missing things is either completely fine, or (at most) an inconvenience to others.
Take care of yourself and get lots of rest! I hope you feel better soon.
As a non-technical person struggling to wrap my head around AI developments, I really appreciated this post! I thought it was a good length and level of technicality, and would love to read more things like it!
I think 1-on-1s have their uses, but at the EA conferences I went to this Spring, I did find myself wishing that there was more space for unstructured group conversations (e.g., possibly physical spaces where you could go and sit if you were open to conversations with strangers). 1-on-1s can be very intense, and since my aims were somewhat vague, I think I could have gotten value out of meeting and chatting to more people casually.
Strong agree.
I’ve seen some discourse on Twitter along the lines of “EA’s critics never seem to actually understand what we actually believe!” In some ways, this is better than critics understanding EA well and strongly opposing the movement anyway! But it does suggest to me that EA has a problem with messaging, and part of this might be that some EAs are more concerned with making technically defensible and reasonable statements (which, to be clear, is important!) than with meeting non-EAs (or not-yet-EAs) where they’re at and empathizing with how weird some EA ideas seem at first glance.
+1 to the point that it doesn’t really make sense to compare FGM and male circumcision.
I support bodily autonomy and lean towards believing that parents should not circumcise male infants. I’m also not claiming that there are no negative effects to male circumcision. And as Henry said, some forms of FGM are indeed quite minor (a symbolic ‘nicking’ or small cut).
That said, other forms of FGM are...horrifying, and just seem way worse than male circumcision. I’m going to drop the Wikipedia article here—consider yourself content-warned. https://en.wikipedia.org/wiki/Female_genital_mutilation#Types
Some types involve cutting out the clitoris (which is more equivalent to the whole penis than to the foreskin); other types involve sewing up the vagina. Because of its relative rarity, I’m not sure it qualifies as a sensible EA cause area, but the horror and outcry against it seem very merited, and it makes sense that more countries have outlawed it than have outlawed male circumcision (though, as I say, I’d tentatively support making male circumcision illegal too, and don’t want to ignore the fact that it’s also a harm).
On a meta level, I’m surprised by how unpopular Sjlver and DukeGartzea’s comments are in this discussion relative to others’. It doesn’t seem that controversial to argue that women face more violence, particularly of certain types, than men (though it’s fair to argue the other side, of course).
I also think IFS is a great paradigm and could be really helpful for lots of people, and I know lots of other EAs who are into it—maybe we should have an “EA IFS fans” Facebook group or Discord or something? (If you’d be interested in such a thing, reply to this comment)
I’m not sure what to suggest about how to use your abilities to promote IFS. You could train as a counsellor (if you’re not one already). You could write popular books about IFS. Or you could try to get involved in mental health policy and promote it in health systems. I don’t know where you’re from, but in the UK where I am, the ‘go-to’ psychotherapeutic treatment offered by the health service is CBT. I’m not against CBT and I think it’s very helpful for some people, but it’s not useful for everyone and for all issues, so I think a person could have a big positive impact if they (for example) successfully persuaded the NHS to be more willing to fund and offer different therapy modalities, including IFS.
This is so cool! I’m definitely going to order something :)
Just FYI, the link to the site seems to be broken—it just links back to this post!
fwiw I disagree with this. People often ‘advertise’ or argue for things on the Forum—e.g. promoting some new EA project, saying ‘come work for us at X org!’, or arguing strongly that certain cause areas should be considered. The main difference with this post is that the language is more ‘advertising-esque’ than normal—but this seems to me an aesthetic consideration. I’m not sure what would be gained by OP rewriting it with more caveats.
Re “one of the most effective charities”: OP does immediately justify this in the bullet points below—it’s recommended by The Life You Can Save, and GiveWell says it ‘may be in the range of cost-effectiveness of our top charities’.
This is a really interesting idea! I’m very fond of charity shops, so I love the idea of setting them up for EA charities. I have no idea how easy or hard this would be, or how it compares to other fundraising tactics, but it seems like it could have a big impact, both from profits and from raising awareness. It could be a good project for people with experience starting or running shops.
Yes, agree 100%! In general, I think EA neglects humanities skills and humanistic ways of solving problems.
This is such a good post + I agree so much! I’m sorry you feel like you don’t fit in :( and I’m also worried about the alienating effect EA can have on people. Fwiw, I’ve also had worries like this in the past—not so much that I wasn’t smart enough, but that there wasn’t a place for me in EA because I didn’t have a research background in any of the major cause areas (happy to DM about this).
A couple of points, some echoing what others have said:
-there’s a difference between ‘smart’ and ‘has fancy credentials’
-some stuff that’s posted on the Forum is written for a niche audience of experts and is incomprehensible to pretty much everyone
-imo a lot of EA stuff is written in an unnecessarily complicated/maths-y/technical way (and the actual ideas are less complicated than they seem)
-maybe you have strengths other than “intellectual” intelligence, e.g. emotional intelligence, people skills, being organized, conscientiousness...
I really do think this is a problem with EA, not with you—EAs should offer more resources to people who are excited to contribute but don’t fit into the extremely narrow demographic of nerdy booksmart STEM graduates.
Thank you for framing this in terms of supporting women to have the children they desire—often when people talk about wanting to ‘increase the birth rate’, they don’t disentangle ‘helping people have kids that they want to have’ from more coercive measures, which makes me nervous.
‘The primary interventions I think a funder could make to support women achieving their fertility goals are through political advocacy and research. I don’t think any philanthropic funder, no matter how rich, is capable of directly moving this issue by, for example, offering financial support to families.‘
-why wouldn’t offering financial support be effective?
Does the research on ‘missing children’ ask why the respondents didn’t have as many children as they wanted? This would be useful to know, and would help determine which interventions might be most effective. For example, if most people say that they didn’t have as many children as they wanted because they couldn’t afford it, then financial support would be the best intervention; if they say that they didn’t find the right partner in time, maybe the best intervention is something like trying to make dating sites better; if they say they waited too long and were then unable to conceive, then the fertility education you suggested might be very effective. Other reasons I can think of: lack of maternity leave, lack of social support, or a partner who didn’t want more kids.
Yeah absolutely! And it’s not always worth experts’ time to optimize for accessibility to all possible readers (if it’s most important that other experts read it). But this does mean that sometimes things can seem more “advanced” or complex than they are.
What are the implications you disagree with?
I might be up for doing something like this! I might DM you about it.
Thank you for asking this! I’m afraid I don’t have any answers, but I also think that it would be great if EAs researched this question (and I’m happy Open Phil seems to be doing some of this). I also think that how ‘fighting racism’ or ‘US criminal justice reform’ compare against other cause areas on neglectedness, tractability and impact is somewhat beside the point. There is a huge amount of enthusiasm to tackle these problems at the moment, and people are eager to donate to organizations that combat them, but I’ve not seen much discussion or reflection on which are most effective. Most of these people would never be persuaded to donate to (e.g.) AI risk prevention or animal rights orgs, but they might be persuaded to donate to more-effective anti-racism/criminal-justice-reform organizations. If EAs can find out which orgs are more effective in this area, and promote them, that could create a lot of impact compared to the counterfactual.