It’s almost like you’re actually allowed to live your life!
I’ve been struck by the number of times I’ve found myself and/or others surprised to see the EA community do something that nearly all other communities do (e.g., infighting, unfairly excluding an outgroup, unfairly preferring something or someone high status). I think better awareness of this could be valuable, and we may be able to learn a good deal more from the successes and failures of other communities.
Going vegan seems much harder again, though—maybe as hard as giving 20% or 25%?
I agree; this feels about the same to me (I give ~20% but I’m not vegan). Though, again, it probably feels very different to other people.
This post tickled my brain in a funny/good way, so thanks for that. I’m still left a little dumbfounded. My thought is that if you think of EA actions from a units-of-good-accomplished-per-unit-of-personal-sacrifice perspective, donating 10% of my income and giving up meat feel about equally hard to me (and I do both). I imagine this feels very different for other people, though, and I don’t know what norms are appropriate to make or enforce around how much and what kinds of sacrifices people should make.
Right now everything I mentioned is in https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019
We’re working on writing up an update.
Speaking as one of the judges, I read a lot of the forum anyway because I find a broad selection of content to be relevant/interesting, and I find judging to be a trivial additional time burden (maybe ~10min a month).
I’d refer you to the comments of https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#Jp9J9fKkJKsWkjmcj
We’re also working on understanding invertebrate sentience and wild animal welfare—maybe not “cause X” because other EAs are aware of this cause already, but I think it will help unlock important new interventions.
Additionally, we’re doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not “cause X” because EAs are already aware of it.
Lastly, we’re also working on examining ballot initiatives and other political methods of achieving EA aims—maybe not cause X because it isn’t a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.
It’s a rather weak consideration, though. I think I’d rather invest in more research to figure out these comparisons.
Could we host the annual survey on the Hub? Or base some of the responses on the Hub index? That could both lead more people to visit and register on the Hub and also reduce the effort needed to fill out the survey.
Historically it has flowed in the opposite direction: the Survey has been an exceptional way to get people to populate data on the Hub.
I actually think that as long as you communicate potential downside risks, there is a lot of value in having independent granting bodies look over the same pool of applications.
Yes, this is a great idea to help reduce bias in grantmaking.
Thanks for the transparent answers.
The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above).
This in particular strikes me as understandable but very unfortunate. I’d strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?
In some cases the applicant asked for less than our minimum grant amount of $10,000
This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I can understand why the logistics behind this may require it. It still feels intuitively weird that it is easier to get $10K than it is to get $1K.
Thanks Habryka for raising the bar on the amount of detail given in grant explanations.
This comment strikes me as quite uncharitable, but it asks really good questions that I do think would be worth seeing more detail on.
Your opinions might change as you take into account the full range of possible estimates, relative robustness, and longer-term effects. I’m pretty uncertain about the relative value of global poverty work vs. animal work, even given a non-speciesist account. See “Global poverty could be more cost-effective than animal advocacy (even for non-speciesists)” for a sketch of what I’m talking about.
You may also enjoy “Can we apply start-up investing principles to non-profits?” (Answer: not really)
To be clear, I’m quite glad you attempted the model and I agree there’s no need to apologize for it.