Master in Public Policy student at Georgetown University. Previously worked in operations at Rethink Charity et al. and co-founded EA Anywhere.
Marisa
[Question] What’s the evidence for and against modern psychiatry?
Deadline Extended: Announcing the Legal Priorities Summer Institute! (Apply by June 24)
Very valid! I guess I’m thinking of this as “approaches EA values” [verb] rather than “values” [noun]. I think most if not all of the most abstract values EA holds are still in place, but the distinction between core and secondary values is important.
Good catch, thank you!
The EA movement’s values are drifting. You’re allowed to stay put.
I made this so I could easily link all the posts on this but then I realized they’re almost all here: https://forum.effectivealtruism.org/posts/4mWxEixs5RZ8DfCKe/annotated-list-of-project-ideas-and-volunteering-resources Feel free to delete!
99% Invisible had a podcast on this that I found really interesting. The scale of the problem must have gone completely over my head. Great write-up!
That makes sense though I feel like this still applies. It’s still not great optics to pay lots of money to people working on global poverty, but it’s far from unheard of and, if there’s concrete evidence that those people are having an impact then I think a lot of people would consider it justified.
I think the reason it’s acceptable for AI researchers to bring in large sums of money is more because of the market rate for their skillset and less because of the cause directly. I think if someone were paid a high salary to build complex software that solved poverty (if such a thing existed) I would guess that that would be viewed roughly equally. On the other hand if you pay longtermist and/or global poverty community-builders lots of money, this looks much worse.
Maybe I’m misunderstanding this but I disagree. I think the average person thinks spending tons of money on global health and poverty is good, particularly because it has concrete, visible outcomes that show whether or not the work is worthwhile (and these quick feedback loops mean the money can usually be spent on projects we have stronger confidence in).
But I think that spending lots of money on people who might have a .000001% chance of saving the world (in ways that are often seen as absurd to the average person) is pretty bad optics. A lot of non-EAs don’t think we can realistically make traction on existential risk because they haven’t seen any evidence of traction. Plus, longtermists/x-risk people can come across as having an unfounded sense of grandiosity—because there are a whole bunch of people out there who think their various projects will drastically transform the world, and most people won’t assume that the longtermist approach is the only one that’ll actually work.
Is the source of this graphic public? This affected my perspective a lot and it’d be great to have a clean copy. :)
This is fantastic, thanks for writing this up! I’ve been hearing a lot about federal consulting (it seems to be one of the most common careers people pursue after an MPP) so it’s helpful to see an analysis from an EA perspective. :)
Very cool! A few companies have taken the Giving What We Can pledge.
Just found this now, thank you for putting it together! I’d never thought about this before.
In case anyone else is curious, I dug into Polish citizenship by descent. It looks like the main requirements are (a) someone in your family had citizenship after 1920, (b) the chain of citizenship isn’t broken, i.e. you can’t become a citizen if neither of your parents is one, and they can’t become citizens if neither of their parents was one, etc., and (c) no one in that chain was adopted, served in a foreign military (including signing up for the draft), or held a publicly funded job (including teaching), plus maybe a few more exceptions.
This is great! I’ll add a personal endorsement for GregMat. I actually found it quite enjoyable and it bumped my verbal score up 8 points.
I totally understand your concerns. FWIW as a former group organizer, as the Torres pieces were coming out, I had a lot of members express serious concerns about longtermism as a result of the articles and ask for my thoughts about them, so I appreciate having something to point them to that (in my opinion) summarizes the counterpoints well.
There is! Linked it in the last point now too, thanks!
Actually in the very early days of EA Anywhere, I toyed with the idea of having a separate student sub-group in part for this purpose (and for university students without EA groups). I dropped it partially for capacity reasons and partially because there didn’t seem like much demand for it, but I’d be excited about this being part of our expansion with our new organizer.
I see EA Anywhere as a good supplement to small groups. While we advertise as “a local group for people without local groups”, I think it makes a lot of sense to also work with group organizers and members from groups that are too small to warrant larger events, or with organizers that are too time-poor to run events often.
I also think this could fit well into the local group incubation pipeline we’ve considered. There’s a cycle that’s hard to break out of with small groups—if an event is so small that it’s not valuable, then fewer people come, then it gets smaller, then it’s even less valuable, etc. (Of course small events can be valuable if the chemistry is right with the group, but that can take a long time to facilitate.) A virtual group like EA Anywhere could potentially break groups out of that cycle by bringing in more people and ideally creating more interesting discussions from that.
Having graduated from university just before the pandemic, I don’t have a sense of how interested students will be in Zoom meetings and the like in future years. That’s one uncertainty I have, but I think it’s unlikely to be a major issue.
Agree with Sami’s comment below. Virtual events are certainly a good way to get people from more isolated parts of the region engaged, but if 90% of the attendees already know each other from in-person events, that may be even more isolating. I suspect this is fairly easy to mitigate though if the organizer is conscientious about it.
It might be worth connecting them with other virtual communities too. Besides us, there are lots of virtual groups popping up (Giving What We Can, EA for Christians, EA for Jews, the EA Hispanic group, EA Consulting, Effective Animal Advocacy, etc.) which might be good for getting people engaged if your group doesn’t run virtual events very often. (FWIW they are also very welcome to get involved with EA Anywhere—we have some members who live in the metro areas of local groups but are just too far away to come to most in-person events.)
I think a lot of this will also be case-by-case depending on where the person is in their EA involvement, and a lot of those rules won’t be that much different from engaging someone who’s not in an isolated area. It’s mostly a matter of making sure the usual pathways through “the funnel” are accessible to them, even if they aren’t able to attend in-person events.
Update: We’ve extended the deadline to apply for LPSI to June 24. If you think you might be a good fit, we’d love to see your application!