Pete Rowlett
I do community building with Effective Altruism at Georgia Tech. My primary focus areas are animal welfare and artificial intelligence.
I think the website is already quite good. It includes almost everything that somebody new to the community might find useful without overcrowding. If I had to come up with a couple of comments:
“For the first couple of weeks, I’ll be testing how the current site performs against these goals, then move on to the redesign, which I’ll user-test against the same goals.” For the testing methodology, it sounds like you’re planning to gather metrics on this version, switch to V2, and gather metrics again. I think A/B testing might be a better option if it’s not too inconvenient, since that might get you more similarity between the groups on which you gather data.
You could add a section on stories of people in effective altruism in video or text form. Learning about how other people got involved, their pasts, and their motivations, might inspire people to join in-person groups and EAVP more than reading or listening to podcasts. Ideally the people would be diverse (country of origin, gender, race, primary cause area, type of contribution, etc.).
Hope that helps!
Hello Altar! As far as I know, there is no Seattle area EA-focused charity evaluator. Generally speaking, EA organizations do not engage in such work for a couple reasons.
1. EAs focus on impartial altruism, meaning that they try to give equal priority to everyone’s interests, regardless of their location.
2. The difference in impact between the least and most cost-effective organizations in Seattle is small relative to the difference in impact between the least and most cost-effective organizations globally. This means that getting local-only donors to switch between local charities is significantly less valuable than getting people to switch from local to international charities. It would have to be vastly easier to get local-only donors to switch for that work to end up being cost-effective. More info here.

There have been some smaller efforts to do local priorities research from local or national groups. Effective Altruism Israel ran their “Maximum Impact” program (details here and here). This post discusses in more detail how local research is useful and links to a few other efforts in Singapore, Brazil, and the Philippines.
Sometimes local efforts from wealthier countries can identify globally cost-effective charities, particularly in cause areas besides global health, but I think another key reason they are created is to develop members’ evaluation skills, which can later be applied on a broader scale. Local prioritization efforts in low-income countries may also have success in identifying top global health organizations.
I hope this was helpful. Let me know if you have any more questions!
Thanks Nathan!
These are interesting ideas. It seems like there’s still a lack of clarity about the magnitude of the effects of each issue on the nonhuman animal side, and therefore their relative cost-effectiveness. But as more research is done, say on ITNs in later stages of their lifecycle and the effects of tapeworms on pigs, maybe trades could be made based on these issues!
Moral Trade Proposal with 95-100% Surplus
Developing Counterfactual Trust in Moral Trade
Generating More Surplus in Moral Trades
Wow, this is amazing! Thank you for putting in the time and effort to write it. I just ordered a copy for the Effective Altruism at Georgia Tech library. Can’t wait to read it!
I think it would be really useful for someone with a mathematical background to develop this further. The flexibility/dedication tradeoff seems about the same as the explore/exploit tradeoff, which I understand to have been studied a fair amount. I’d imagine there’s a lot of theory that could be applied and would allow us to make better decisions as a community, especially now that lots of people are thinking about specializing or funding specialization. I bet we could avoid significant mistakes at a low cost by quantifying investments in each area and comparing them to theoretical ideals.
Effective Giving: Best Practices, Key Considerations, and Resources
Modeling Moral Trade in Antibiotic Resistance and Alternative Proteins
Congratulations on your first post! I think this is a really cool and interesting idea. The team at Basefund has started doing something similar, so you may want to reach out to them if you’re interested in working on it!
I quite like how you distinguish approaches at the individual level! I think focusing on which area they support makes sense. One lingering question I have is the relative value of a donor’s donations vs. the value of their contribution toward building a culture of effective giving. I also think it’s at least somewhat common for people to get into other areas of EA after starting out in effective giving.
Agreed on the intro fellowship point as well! Long-term it supports field-building since plenty of participants filter through, but it’s more directly movement support.
I’m a little less sure on the networking point. I notice that because I’m exploring lots of EA-related areas in relatively low depth, I haven’t hit diminishing returns from talking to people in the community. I do imagine that people who have committed more strongly to an area would get more value from exploring more. I do agree that lots of people outside the traditional EA geographical areas could do fantastic work. Enabling this doesn’t seem very resource-intensive though. I would guess that EA Virtual Programs is relatively cheap, and it allows anyone to get started in EA. Maybe you’d like to see more traditional local groups, though, which would be more costly but could make sense.
I think the uptake of practices category can be separated into two areas. Area one would be promoting the uptake of EA-style thinking in existing foundations and the other work you list under “How I would describe EA’s current approach to social change”. Area two would be pushing for the implementation of policies that have come out of EA research in existing organizations, which is what LEEP and lots of animal welfare orgs do (and I suppose more biosecurity and AI people are getting into the regulatory space as well now). I only question the tractability of area one work; area two work seems to be going quite well! The main challenge in that domain is making sure the policy recommendations are good.
Thank you for the detailed response!
Just messaged you!
It’s great that you’re doing what you can on this front, despite all the challenges! I don’t have specific nutritional advice, though maybe the writer of the first post you linked would.
You may have already considered this (some of your ideas hinted in this direction), but I think it’s important to focus on suffering intensity, which you could measure in terms of suffering per calorie or suffering per pound of food. Doing so will minimize your overall suffering footprint. My understanding is that the differences in capacity for suffering between large and small animals (such as cows and shrimp) aren’t large enough to outweigh the difference in the number of animals you have to eat to get the same number of calories. Additionally, cows seem to be kept in some of the least awful conditions of any factory-farmed animal.
This website, foodimpacts.org, shows this difference in a useful graphic. It also lets you weight the importance you place on welfare vs. climate impacts (I would set climate to 0%, but the weighting may be helpful if you prioritize differently).
Brian Tomasik’s How Much Direct Suffering Is Caused by Various Animal Foods? could also be a useful guide, and Meghan Barrett’s work on insect sentience is worth a read if you want to decide whether it’s better to eat insects or other animals.
Great post, thanks for writing it! Healthy and active vegans sharing their stories helps change the narrative, bit by bit.
Destroying viruses in at-risk labs
Thanks to Garrett Ehinger for feedback and for writing the last paragraph.
Military conflict in or around biological research laboratories could substantially increase the risk of releasing a dangerous pathogen into the environment. The fighting and mass movement of refugees combine with other risk factors to magnify the potential ramifications of this risk. Garrett Ehinger elaborates on this issue in his excellent Chicago Tribune piece, and proposes the creation of nonaggression treaties for biological labs in war zones as additional pillars to shore up biosecurity norms.
This seems like a great option, but I think there may be a more prompt technical solution as well. Viruses, bacteria, and other dangerous materials in at-risk labs could be stored in containers that have built-in methods to destroy their contents. A strong heating element could be integrated into the storage compartment of each virus and activated by scientists at the lab if a threat seems imminent. Vibration sensors could also automatically activate the system in case of a bombing or an earthquake. This solution would require funding and engineering expertise. I don’t know how much convincing labs would need to integrate it into their existing setups.
If labs might consider the purchase and implementation of entirely new heating elements with their existing containers to be too tall an order, there are other alternatives. For example, “autoclaves” (the chemist’s equivalent of a ceramic kiln or furnace) are already commonplace in many biological laboratories for purposes such as medium synthesis or equipment sterilization. There could be value for these labs in developing SOPs and recommendations for the safe disposal of risky pathogens via autoclaves. This solution would be quicker and easier to implement, but in an emergency situation, could require slightly more time to safely destroy all the lab’s pathogens.
Rawls’ veil of ignorance supports maximizing expected value
One common topic in effective altruism introductory seminars is expected value, specifically the idea that we should usually maximize it. It’s intuitive for some participants, but others are less sure. Here I will offer a simple justification for expected value maximization using a variation of the veil of ignorance thought experiment. This line of thinking has helped make my introductory seminar participants (and me) more confident in the legitimacy of expected value maximization.
The thought experiment begins with a group of rational agents in the “original position”. Here they have no knowledge of who or what they will be when they enter the world. They could be any race, gender, species, or thing. Because they don’t know who or what they will be, they have no unfair biases, and should be able to design a just society and make just decisions.
Now for two expected value thought experiments from the Cambridge EA introductory seminar discussion guide. Suppose that a disease, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
Version A…
1. Save 400 lives, with certainty [EV: +400]
2. Save 500 lives, with 90% probability; save no lives, with 10% probability [EV: +450]
Version B…
1. 100 people die, with certainty [EV: −100]
2. 90% chance no one dies; 10% chance 500 people die [EV: −50]
Now imagine that you’re an agent behind the veil of ignorance. You could enter the world as any of the 500 individuals. What do you want the decision-maker to choose? In both versions of the thought experiment, option 1 gives you an 80% chance of surviving, but option 2 gives you a 90% chance. The clear choice is option 2.
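The arithmetic behind these numbers can be checked with a short sketch. The 500-person population and the option framings come from the thought experiment above; the helper function itself is just illustrative:

```python
# Expected lives saved for each option in Version A, and the survival
# probability of a single agent behind the veil of ignorance.
def expected_value(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    return sum(p * lives for p, lives in outcomes)

option_1 = [(1.0, 400)]            # save 400 lives with certainty
option_2 = [(0.9, 500), (0.1, 0)]  # save 500 lives 90% of the time

ev_1 = expected_value(option_1)    # 400.0
ev_2 = expected_value(option_2)    # 450.0

# Behind the veil, you are equally likely to be any of the 500 at-risk
# individuals, so your survival probability is expected lives saved / 500.
p_survive_1 = ev_1 / 500           # 0.8
p_survive_2 = ev_2 / 500           # 0.9
```

Version B gives the same survival probabilities, since "100 people die" and "400 of 500 are saved" describe the same outcome: the option with the higher expected value is also the one each agent behind the veil would prefer.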
This framework bypasses the common objection that it’s wrong to take risks with others’ lives by turning both options into a risk. In my experience, part of this objection often has to do with understandable feelings of discomfort with risk-taking in high-stakes scenarios. But here risk-taking is the altruistic approach, so a refusal to accept risk would ultimately be for the emotional benefit of the decider. This topic can also lead to discussion about the meaning of altruism, which is a highly relevant idea for intro seminar participants.
This argument isn’t new (reviewers noted that John Harsanyi was the first to make this argument, and Holden Karnofsky discusses it in his post on one-dimensional ethics), but I hope you find this short explanation useful for your own thinking and for your communication of effective altruism.
I appreciate how this post adds dimension to community building, and I think the four examples you used are solid examples of each approach. I’m not sure what numbers I’d put on each area as current or ideal numbers, but I do have some other thoughts.
I think it’s a little hard to distinguish between movement support and field building in many community building cases. When someone in a university group decides to earn to give instead of researching global priorities, does that put them in movement support instead of the field? To what extent do they need to be involved in evaluating their giving to count as being part of the field? And when a group runs an intro fellowship, is that movement support or field building?
I’m still very excited about network development and wouldn’t change its fraction of the portfolio. I personally tend to get a lot of value out of meeting other people within EA and understanding EA orgs better. Networks facilitate field building and movement support. I’m also less excited about promoting the uptake of our practices by outside organizations. I think we’re at a pretty low percentage and should stay there. A project or two like this would be great, but I don’t think we need enough of it to round away from 5%, mostly because of tractability concerns. These projects are also supported by field building work.
Thanks for the post!
There are a few possible sources of funding that I’m aware of. These first two are managed funds that accept applications:
Effective Altruism Funds Long-Term Future Fund (Application)
Founders Pledge Global Catastrophic Risks Fund (Application)
Manifund may be a good fit since your request is small and urgent. You can list your project there, and anyone can fund it.
It doesn’t sound like you’re doing anything related to antimicrobial resistance, but if you are, there’s the AMR Funding Circle.
Do you already know what sort of power system you need and where to purchase it? If so, I might explain specific plans and expected costs in your forum post. That information will be helpful for your grant applications and for anyone trying to identify sources of support. If not, and you need help, I might reach out to someone at High Impact Engineers. There may be more support in the EA Anywhere Slack (perhaps cause-biosecurity).
I hope this was helpful. Let me know if you have any more questions!