You can send me a message anonymously here: https://www.admonymous.co/will
WilliamKiely
The challenge for us is this: How can we ensure that, when we try to help others, we do so as effectively as possible?
-- William MacAskill, Doing Good Better
“It is more difficult to give money away intelligently than to earn it.”—Andrew Carnegie
“Not any more.”—Fake GiveWell quote
When it comes to doing good, fat-tailed distributions seem to be everywhere. It’s not always true that exactly 80 percent of the value comes from the top 20 percent of activities—sometimes things are even more extreme than that, and sometimes less. But the general rule that most of the value generated comes from the very best activities is very common.
-- William MacAskill, Doing Good Better
I just met several EAs in person for the first time last night after following the community online for over a year. Here’s the process:
I was travelling and knew I’d be in the Chicago area for two weeks. Upon arrival I searched for Chicago EA groups on Facebook, found one, and asked if there were any meetups happening. There weren’t, but people were interested and we set one up (Facebook event). 8–10 people showed up and we had some good discussions. Pretty simple.
Takeaways:
(1) Create EA Facebook group for your area if there isn’t one already
(2) Join the EA Facebook group for your area
(3) Attend meetups when travelers or new people express interest in meeting up
The 10 karma requirement to make your own posts on this forum makes it difficult for new people to share their own ideas here. Perhaps reducing it to 2 would be better.
I’ve been aware of this forum for a few months and have checked back a couple dozen times, but I still don’t have 10 karma, because searching for posts where you can make insightful comments that earn 10 upvotes isn’t actually that easy or quick.
Cool, thanks. Some people just brought me up to 10 Karma, so I’m going to write a post on one idea tonight and publish it here.
Awesome, I’m glad to know someone is working on this. I’m definitely going to check it out and see if it makes sense for me to get involved.
Do you have any idea why it isn’t widely believed in the EA movement that donating to Charity Science is better than donating to GiveWell’s recommendations directly? (Or maybe most EAs do know this, and I just for some reason never heard people emphasizing this.)
Thanks.
Okay, point taken. I don’t mean to criticize GiveWell. Rather, I meant to point out that it seemed to me that they were more focused on the function of identifying top giving opportunities than the function of directing as much money as possible to said charities. Is this not true? Is GiveWell the best at both, or just the former?
Thanks, this is helpful.
I read your organization breakdown page (as well as several of the linked documents) and will be submitting an application in the next week for an internship. Hopefully I can do something to help out.
Great post, Peter.
You helped change my perspective from my post yesterday.
I hadn’t considered your point:
Meta Trap #5: Sometimes, Well Executed Object-Level Action is What Best Grows the Movement
It makes sense. For example, if you set an example as someone who thoughtfully and altruistically donates a significant portion of their income to charity, many others may follow.
Also:
The more steps away from impact an EA plan is, the more additional scrutiny it should get.
This is a very important point. While I think it’s quite possible to identify effective meta-level work that avoids your Meta Traps #1-#4, I think it’s probably harder than most people (including myself) would initially think, due to many initial ideas falling into one or more of the meta traps.
Where has the list of speakers been released? Thanks.
Seconded. I just recommended the same thing to Kyle before reading your comment.
Thank you! This just tremendously changed where I intend to donate.
Specifically, I intend to give (100% or nearly 100%) to existential risk reduction rather than (mostly) poverty alleviation, due to how much I value future lives (a lot) relative to the quality of currently existing lives.
Upon trying to think of counter-arguments to change my view back in favor of donating to poverty alleviation charities, the best I can come up with right now:
Maybe the best “poverty alleviation” charities are also the best “existential risk” charities. That is, maybe they are more effective at reducing existential risk than the charities typically thought of as (the best) existential risk charities. How likely is this to be true? Less than 1%?
That is a great question you posted on Reddit!
There are so many important unanswered questions relevant to EA charitable giving. Maybe an effective meta-EA charity idea would be a place where EAs could pose research questions they want answered, offering money based on how much they would be willing to pay for an answer of a certain quality.
I wasn’t thinking that the money would go towards hiring experts. Rather, something like: “I’ll donate $X to GiveDirectly if someone changes my view on this important question that will decide whether I want to donate my money to Org 1 or Org 2.”
1. I don’t have a specific charity in mind yet. 2. I’m not very confident in my answer.
I should also mention that I probably won’t be donating much more for at least a couple years, so it probably shouldn’t be my highest priority to try to answer all of these questions. They are good questions though, so thanks.
Thanks for posting this here. I hadn’t heard of your organization Intentional Insights and am glad to learn of it since I believe intentionality is critical to effective altruism and the mission of doing the most good possible.
Speaking of the possibility that individuals can have a much bigger impact than is often assumed, and of how exciting that is:
Consider a story that very well might be true: “Vlad the Astrophysicist”: https://www.youtube.com/watch?v=l9bCFNN67wg
The thought that each of our actions now might be able to change whether our civilization is a common psht![1] or a rare psshht!!![2] or even conceivably a psssssssssssssssssshhhh...[3] is incredibly exciting.
[1] Psht! = A civilization that quickly (say, within another 5,000, 50,000, or 5 million years after reaching the level of development of our present civilization) causes itself to go extinct.
[2] Psshht! = A civilization that continues to thrive for many millions of years, long enough to spread across an entire galaxy or even meet another civilization arising elsewhere in the universe.
[3] Psssssssssssssssssshhhh… = A civilization that continues to thrive indefinitely: until the end of time if there is an end, or literally forever if there is no end.
And don’t forget the possibility that our actions may be able not only to vastly increase the amount of time our civilization flourishes, but also to vastly increase the quality of that flourishing. That’s incredibly exciting too.