I’m Aaron, I’ve done Uni group organizing at the Claremont Colleges for a bit. Current cause prioritization is AI Alignment.
A Simpler Version of Pascal's Mugging

Background: I found Bostrom's original piece (https://www.nickbostrom.com/papers/pascal.pdf) unnecessarily confusing, and numerous Fellows in the EA VP Intro Fellowship have also been confused by it. I think we can be more accessible in our ideas. I wrote this in about 30 minutes though, so it's probably not very good. I would greatly appreciate feedback on how to improve it. I also can't decide if it would be useful to have a "possible solutions" section at the end, because as far as I can tell, these solutions are all subject to complicated philosophical debate that goes over my head. So including it might just add confusion. Might be easiest to provide comments on the Google Doc itself (https://docs.google.com/document/d/1NLfDK7YqPGdYocxBsTX1QMldLNB4B-BvbT7sevPmzMk/edit)
Pascal is going about his day when he is approached by a mugger demanding his wallet. Pascal refuses to hand over his wallet, at which point the mugger offers the following deal:

Mugger: "Give me your wallet now and tomorrow I will give you twice as much money as is in it."

Pascal: "I have $100 in my wallet, but I don't think it's very likely you're going to keep your promise."

Mugger: "What do you think is the probability that I keep my promise and give you the money?"

Pascal: "Hm, maybe 1 in a million, because you might be some elaborate YouTube prankster."

Mugger: "Okay, then you give me your $100 now, and tomorrow I will give you $200 million."

Let's do the math. We can calculate expected value by multiplying the value of an outcome by the probability of that outcome. Based on Pascal's stated belief that the mugger will keep their word, the expected value of taking the deal is $200,000,000 * 1/1,000,000 = $200, whereas the expected value of not taking the deal is $100 * 1 (certainty) = $100. If Pascal is an expected value maximizer, he should take the deal.

Maybe at this point Pascal realizes that the chance of the mugger actually having $200 million is extremely low. But this doesn't change the conundrum, because the mugger will simply offer more money to account for the lower probability of following through. For example, suppose that factoring in the probability of the mugger having the money lowers Pascal's estimate of the mugger following through to one in a trillion. Then the mugger offers $200 trillion. The mugger is capitalizing on the fact that everything we know, we know with probability less than one. We cannot be 100% certain that the mugger won't follow through on their promise, even though we intuitively know they won't. Extremely unlikely outcomes are still possible.
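The expected value comparison above can be sketched in a few lines of Python (the dollar amounts and probabilities are the ones from the dialogue):

```python
def expected_value(payoff, probability):
    """Expected value: an outcome's payoff weighted by its probability."""
    return payoff * probability

# The mugger's scaled-up offer: $200 million with a 1-in-a-million chance.
ev_take = expected_value(200_000_000, 1 / 1_000_000)

# Keeping the wallet: $100 with certainty.
ev_keep = expected_value(100, 1.0)

print(ev_take, ev_keep)        # 200.0 100.0
print(ev_take > ev_keep)       # True: a naive EV maximizer hands over the wallet
```

Note that no matter how small Pascal makes the probability, the mugger can always scale up the promised payoff so that taking the deal keeps a higher expected value than refusing.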
Pascal: "200 trillion dollars is too much money; in fact, I don't think I would benefit from having any more than 10 million dollars." Pascal is drawing a distinction between expected value (measured in units of money) and expected utility (measured in units of happiness, satisfaction, and other things we find intrinsically valuable), but the mugger is unfazed.
Mugger: "Okay, but you do value happy days of life such that more happy days are always better than fewer. It turns out that I'm a wizard and I can grant you 200 trillion happy days of life in exchange for your wallet." Pascal: "It seems extremely unlikely that you're a wizard, but the amount I value 200 trillion happy days of life is so high that the expected utility is still positive, and greater than what I get from just keeping my $100." Pascal hands his wallet to the mugger but doesn't feel very good about doing so.
So what's the moral of this story?
- Expected value is not a perfect system for making decisions, because we all know Pascal is getting duped.
- We should be curious and careful about how we deal with low-probability events that have extremely high or low expected value (like extinction risks). Relatedly, common sense seems to suggest that spending effort on extremely unlikely scenarios is irrational.
Thanks for posting this! One worry I have, particularly relevant to a Project Based Fellowship, is that it would not involve sufficiently learning key ideas. Mauricio discussed this, but I think there's even more to it than is obvious. This critique of EA (https://www.lesswrong.com/posts/CZmkPvzkMdQJxXy54/another-critique-of-effective-altruism) brings up that we frequently "Over-focus on 'tried and true' and 'default' options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities." The less content presented in a fellowship, the more likely we are to go down that route, I think. EA is really, really complex, and one thing I like about the Intro Fellowship is that you can end it thinking "I have the basics, but there is so much more to know"; I worry that with a shorter fellowship, participants may not realize how little they've scratched the surface. They may come to identify EA with just RCT-backed global poverty work; it almost feels better if people think of EA as global poverty + animal welfare + AI + longtermism + pandemics and climate change, even though these are cause areas and not principles. Anecdotally, I've found that many folks just learning about EA are turned off by what feels like armchair cause prioritization that is too theoretical; giving them specific causes makes more sense for many folks, and if you give them enough causes, they will internalize that EA is actually about the principles which lead to such diversity in causes.
While I share your worry of EA becoming defined by cause areas rather than principles, it feels much more likely that we would get a situation like Mauricio mentioned of "vaguely EA-related project ideas" and people who walk away from the fellowship without actually understanding EA very well. On this note, conversations with students not involved in EA often go like so:
Them: "What does your club do?"
Me: "We discuss ways of improving the world most effectively and prepare students to do something really valuable with their lives"
Them: "Do you do anything besides talking?!"
Me: "Do career workshops count?..."
At least at the Claremont Colleges, students are really excited about actually doing stuff. And this can be difficult to reconcile with EA. This semester, we decided to do Effectively Altruistic projects limited to the scope of our school (e.g., what can we do to improve student wellness the most? Decrease the school's carbon footprint? etc.). We've been working on cause prioritization for this, narrowing down a large list into a small one. And we're going to have small groups of students tackle these projects in the spring. I will follow up with forum posts afterward to report on how it went.
However, I don’t think doing this alone is a good idea; it doesn’t actually give folks a sense of what EA is all about unless they already had good background knowledge. So, this Winter Break, we’re doing a bunch of programming that we are pushing super hard. Mainly, taking the 8 week Intro Fellowship and squishing it into 3.5-4 weeks; this is the main program we want people to do. The idea is, folks learn about EA ideas during break when they’re not stressed about class, then we come back to school and the post-fellowship engagement is Project Based Fellowship (I expect for most people this will be good), and Career Planning. I’m optimistic about this plan for a bunch of reasons, and it potentially presents one solution to the problem.
Pros of doing this: students don’t have fellowship overlapping with school, fairly intense and fast which has the benefits you discuss, keep students connected to one another and mentally engaged during break (very good in my opinion/experience cuz I get lonely and lazy).
This is similar to the 3 week fellowship sprint you suggest, except that I do not think of this as at all about identifying promising fellows. I need to write up my thoughts on this more thoroughly in a shortform post, but pretty much I think the content of the Intro Fellowship would be useful to like 50-80% of students, even if only 20% continue engaging with EA afterward. EA has really good ideas that are useful to almost everybody, and the emphasis on highly promising people seems elitist and holds us back from impacting more students in a smaller way.
Yes. Will do an end of the year assessment of what worked and what didn’t. Focus will likely be on Winter Break Programming and Project Fellowships.
Good points. We should have explained our approach in a separate post that we could link to, since I didn't explain it well in my comment. We are trying to frame the project like so: this is not the end goal. It is practice at what this process looks like, and a way to improve our community in a small but meaningful way. Put another way, the primary goals are skill building and building our club's reputation on campus. Another goal is to just try more stuff to help meta-EA community building; even though we have a ton of resources on community building, we don't seem to have all that many trials or examples of groups doing weird stuff and seeing what happens.
Some of the projects we are considering are related to global problems (e.g., carbon labeling on food in dining hall). I like the project ideas you suggest and we will consider them.
One reason we're focusing on local projects is that the "international charity is colonialism" sentiment is really strong here. I think it would be really bad for the club if we got strongly associated with that sentiment. Attempting to dispel this idea is also on my to-do list, but it's low priority.
Another point of note is that some of what the EA community does is only good in expectation. For instance, decreasing extinction risk by 0.5% per century is considered a huge gain by most EAs. But imagine tabling at a club fair and saying "Oh, what did we actually accomplish last year? We trained up students to spend their careers working on AI safety in the hopes of decreasing the chance of humanity ending from robots by 0.02%." Working on low-probability, high-impact causes and interventions is super important, but I think it makes for crappy advertising because most people don't think about the world in terms of expected value.
Side point to the side point: I agree that a dollar would go much further on extreme poverty than on college students, but I'm less sure about an hour of time. I am in this college community; I know what its needs are. I would spend 5 minutes of the hour figuring out what needs to be done and the rest of the time actually helping folks. If I spent an hour on global poverty, it's unclear I would actually "do" anything. I would spend most of the time either researching or explaining to my community why it is morally acceptable to do international charity work at all. But, again, we are considering some relevant projects.
Again, thank you for some amazing thoughts. I’ll only respond to one piece:
"But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:
- Taking people who are already into weird EA stuff and connecting them with one another
- And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs"
I obviously can't disagree with your anecdotal experience, but I think what you're talking about here is closely related to what I see as one of EA's biggest flaws: lack of diversity. I'm not convinced that weird people know how to do good better than anybody else, but by not creating a way for other people to be involved in this awesome movement, we lose the value they would create for us and the value we would create for them. There also seems to be a suspicious correlation between these kinds of "receptive to EA ideas" people and white men, which appears worrisome. That is, even if our goal is to target marketing to weird EAs or people receptive to EA, it seems like the way we're doing that might have some bias that has led our community to be disproportionately white and male relative to most general populations.
On that note, I think learning about EA has made my life significantly better, and I think this will be the case for many other people. I think everybody who does an Intro Fellowship (and isn't familiar with EA) learns something that could be useful to their life, even if they don't join the community or become more involved. I don't want to miss out on these people, even if it's a more efficient allocation of time/resources to only focus on people we expect will become highly engaged.
Shortform post coming soon about this ‘projects idea’ where I’ll lay out the pros and cons.
Great post, I totally agree that we need more work in this area. Also agree with other commenters that volunteering isn’t a main focus of EA advice, but it probably should be – given the points Mauricio made.
Nitpicky, but it would have been nice to have a summary at the start of the post.
I want to second Bonus #2, I think EA is significantly about a toolkit for helping others effectively, and using examples of tools seems helpful for an engaging pitch. Is anybody familiar with a post or article listing the main EA tools? One of my side-projects is developing a workshop on these, because I think it could be a really good first introduction to EA for newcomers; even if they don’t want to get further involved, they’ve learned something (we’ve added value to their life) and therefore (hopefully) have a positive attitude toward EA.
The phrasing “helping others” will turn off some progressives. I’m not sure how to deal with this, but it is worth being aware of. This might help explain why (tho I only skimmed it): https://sojo.net/articles/mutual-aid-changing-way-we-help-each-other
Progressives might be turned off by the phrasing of EA as “helping others.” Here’s my understanding of why. Speaking anecdotally from my ongoing experience as a college student in the US, mutual aid is getting tons of support among progressives these days. Mutual aid involves members of a community asking for assistance (often monetary) from their community, and the community helping out. This is viewed as a reciprocal relationship in which different people will need help with different things and at different times from one another, so you help out when you can and you ask for assistance when you need it; it is also reciprocal because benefiting the community is inherently benefiting oneself. This model implies a level field of power among everybody in the community. Unlike charity, mutual aid relies on social relations and being in community to fight institutional and societal structures of oppression (https://ssw.uga.edu/news/article/what-is-mutual-aid-by-joel-izlar/).
“[Mutual Aid Funds] aim to create permanent systems of support and self-determination, whereas charity creates a relationship of dependency that fails to solve more permanent structural problems. Through mutual aid networks, everyone in a community can contribute their strengths, even the most vulnerable. Charity maintains the same relationships of power, while mutual aid is a system of reciprocal support.” (https://williamsrecord.com/376583/opinions/mutual-aid-solidarity-not-charity/).
Within this framework, the idea of "helping people" often relies on people with power aiding the helpless, but doing so in a way that reinforces power differences. To help somebody is to imply that they are lesser and in need of help, rather than an equal community member who is particularly hurt by the system right now. This idea also reminds people of the White Man's Burden and other examples of people claiming to help others but really making things worse.
I could ask my more progressive friends if they think it is good to help people, and they would probably say yes – or at least I could demonstrate that they agree with me given a few minutes of conversation – but that doesn't mean they wouldn't be peeved at hearing "Effective Altruism is about using evidence and careful reasoning to help others the best we can."
I would briefly note that mutual aid is not incompatible with EA to the extent that EA is a question; however, requiring that we be in community with people in order to help them means that we are neglecting the world’s poorest people who do not have access to (for example) the communities in expensive private universities.
Thank you for looking into this! This strikes me as really important!! Your post is long so I didn’t read it – sorry – but this made me think of an article that I didn’t see you cite which might be relevant: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/136610C8C040C3D92F041BB2EFC3034C/S000305542000009Xa.pdf/agenda_seeding_how_1960s_black_protests_moved_elites_public_opinion_and_voting.pdf
Pilot study results: Cost-effectiveness information did not increase interest in EA
Having more types of content on the forum is appealing to me. There’s probably discussion of this elsewhere, but would it be difficult to have audio versions of all posts? Like a built in text to speech component option.
I really like Ajeya Cotra’s Intro EA talk (https://youtu.be/48VAQtGmfWY) (35 mins 1x speed). I also like this article on longtermism (https://80000hours.org/articles/future-generations/) although it took me about 25 mins to read. This is a really important question, I’m glad you’re asking it, and I would really like to see more empirical work on it rather than simply “I like this article” or “a few people I talked to like this video” which seems to be the current state. I’m considering spending the second semester of my undergrad thesis on trying to figure out the best ways to introduce longtermism.
Also worth considering MacAskill’s video What we Owe the Future (https://youtu.be/vCpFsvYI-7Y) 40 mins at 1x speed.
Thanks for your thorough comment! Yeah, I was shooting for about 60 participants, but due to time constraints and this being a pilot study I only ended up with 44, so the study was even more underpowered.
Intuitively I would expect a larger effect size, given that I don’t consider the manipulation to be particularly subtle; but yes, it was much subtler than it could have been. This is something I will definitely explore more if I continue this project; for example, adding visuals and a manipulation check might do a better job of making the manipulation salient. I would like to have a manipulation check like “What is the difference between average and highly cost-effective charities?” And then set it up so that participants who get it wrong have to try again.
The fact that Donation Change differed significantly between Info groups does support the second main hypothesis, suggesting that CE info affects effective donations. This result, however, is not novel. So yes, the effect you picked up on is probably real, but this study was underpowered to detect it at a level of p<.05 (or even marginal significance).
In terms of CE info being ineffective, I'm thinking mainly about interest in EA, where there really seems to be nothing going on: "There was no significant difference between the Info (M = 32.52, SD = 5.92) and No Info (M = 33.12, SD = 4.01) conditions, F(1, 40) = .118, p = .733, ηp2 = .003." There isn't even a trend in the expected direction. This was most important to me because, as far as I know, there is no previous empirical evidence to suggest that CE info affects interest in EA. It's also more relevant to me as somebody running an EA group and trying to generate interest from people outside the group.
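As a quick sanity check on those numbers (a sketch, not part of the original analysis): for a one-way ANOVA, partial eta squared can be recovered from the F statistic and its degrees of freedom as ηp² = (F · df1) / (F · df1 + df2). Plugging in the reported F(1, 40) = .118 reproduces the reported effect size:

```python
def partial_eta_squared(f_stat, df1, df2):
    """Partial eta squared recovered from a one-way ANOVA F statistic."""
    return (f_stat * df1) / (f_stat * df1 + df2)

# Reported: F(1, 40) = .118 for the Info vs. No Info comparison.
eta = partial_eta_squared(0.118, 1, 40)
print(round(eta, 3))  # 0.003, matching the reported eta-p^2
```

An effect that small, with n = 44, is consistent with the "nothing going on" reading: even a much larger sample would be unlikely to push it to significance.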
Thanks again for your comment! Edit: Here’s the previous study suggesting CE info influences effective donations: http://journal.sjdm.org/20/200504/jdm200504.pdf
We should be paying Intro Fellows
EA Claremont Winter 21/22 Intro Fellowship Retrospective
Random journaling and my predictions: Pre-Retrospective on the Campus Specialist role.
Applications for the Campus Specialist role at CEA close in like 5 days. Joan Gass's talk at EAG about this was really good, and it has led to many awesome, talented people believing they should do Uni group community building full time. 20-50 people are going to apply for this role, of which at least 20 would do an awesome job. Because the role is new, CEA is going to hire like 8-12 people for it; these people are going to do great things for community building and likely have large impacts on the EA community in the next 10 years. Many of the other people who apply will feel extremely discouraged and led on. I'm not sure what they will do, but the ~10 (or more) who were great fits for the Campus Specialist program but didn't get it will do something much less impactful in the next 2 years.
I have no idea what the effects longer-term will be, but definitely not good. Probably some of these people will leave the EA community temporarily because they are confused, discouraged, and don’t think their skill set fits well with what employers in the EA community care about right now.
This is avoidable if CEA expands the number of people they hire and the system for organizing this role. I think the strongest argument against doing so is that the role is fairly experimental and we don’t know how it will work out. I think that the upside of having more people in this role totally overshadows the downsides. The downsides seem to mainly be money (as long as you hire competent, agentic people). The role description suggests an impact of counterfactually moving ~10 people per year into high impact careers. I think even if the number were only 5, this role would be well worth it, and my guess is that the next 10 best applicants would still have such an effect (even at less prestigious universities).
Disclaimer: I have no insider knowledge. I am applying for the Campus Specialist role (and therefore have a personal preference for more people getting the job). I think there is about a 2⁄3 chance of most of the above problem occurring, and I’m least confident about paragraph 3 (what the people who don’t get the role do instead).
Yes, I agree that this is unclear. Depending on AI timelines, the long-term might not matter too much. To add to your list:
- What do you or others view as talent/skill gaps in the EA community; how can you build those skills/talents in a job that you’re more likely to get? (I’m thinking person/project management, good mentoring, marketing skills, as a couple examples)
Thanks for your response! I don't think I disagree with anything you're saying, but I definitely think it's hard. That is, the burden of proof for 1, 2, and 3 is really high in progressive circles, because the starting assumption is that charity does not do 1, 2, or 3. To this end, simplified messages are easily misinterpreted.
I really like this: “The reason being that they redistribute power, not just resources.”
Hey Ed, thanks for your response. I have no disagreement on 1 because I have no clue what the upper end of people applying is – simply that it’s much higher than the number who will be accepted and the number of people (I think) will do a good job.
2. I think we do disagree here. I think these qualities are relatively common in the community builders and group organizers I know (small sample). I agree that the short application timeline will decrease the number of great applicants; I'm also unsure about b, and c seems like the biggest factor to me.
Probably the crux here is what proportion of applicants have the skills you mention, and my guess is ⅓ to ⅔, but this is based on the people I know which may be higher than in reality.
Great post Mauricio! I'm a senior undergrad this year, and this is the first semester I have deliberately taken fewer classes and focused on things I find more important/interesting (mostly EA organizing). Best decision I've made in a while, and I'm getting much more out of my college experience now than before.
In regard to caveat 3 and people who benefit from structure/oversight, I would suggest the following:
Participate in or facilitate fellowships/reading groups for EA if EA is something you want to do. Having other people depend on you or expect things from you can be really motivating.