Super late response, but it’s hella cool that you’re doing all that!
EA Johns Hopkins looks inactive based on its Facebook page, unfortunately. It doesn’t look like there’s a general EA Baltimore meetup group either. Still, there are always online places like Discord and Reddit.
It can be frustrating that social activities often revolve around spending money, especially if your friends aren’t accommodating about your frugality. There might be Meetup groups in your area about doing things for cheap, or you can suggest some free events to your friends. Overall though, I think that if you can afford it, it might be worthwhile spending a bit of money to hang out with your friends, if that’s something you’d enjoy – it should help with your mental health, and investing in relationships could plausibly boost your earnings enough to compensate for what you’re spending.
“it seems like [updating the EA Handbook] would be less work than updating DGB” → do you mean that updating the EA handbook would be more work? That would make more sense given the rest of your comment.
I think EAs focus on the survival of the human species above that of nonhumans because nonhumans can’t prevent astronomical waste the way a flourishing, advanced human (or posthuman) civilization can. (That, or they don’t think nonhumans lower than chimpanzees are sentient, though I think that is a minority view.)
I agree that biodiversity is good, but only in terms of its impact on the welfare of humans and animals. Although not many EAs seem to value biodiversity for its own sake in the way deep ecologists do, many EAs are concerned about wild animal welfare. Wild animals experience a lot of suffering, whether from nature or from human activity – starvation, predation, disease, and the high infant mortality that comes with r-selection. It’s very difficult to design interventions with an expected positive impact for much of this – for instance, if you eradicate a parasite or disease from a species, might that contribute to overpopulation, and consequently to starvation and infant mortality instead? As such, WAW organizations like to focus on things like humane insecticides.
Links (floating formatting bar doesn’t show up on iPad sorry):
I don’t have issues with the EA Handbook’s emphasis on the far future, but I do think Doing Good Better is much more beautifully written and emotionally compelling, so I’d probably still recommend it over the EA Handbook.
Here are some comments I have on individual articles in the EA Handbook:
Introduction to Effective Altruism:
Places too much emphasis on “tested solutions,” which seems to advocate against high-risk, high-reward interventions.
Overall, covers a lot of topics pretty decently.
Efficient Charity – Do Unto Others:
Written in 2010 (although the EA Handbook says 2013), so it has out-of-date cost-effectiveness figures. There is a note at the top which says that and recommends looking at GiveWell, but I think it’d be better to just directly edit the article, so that people who don’t look up GiveWell’s figures don’t walk away with the impression that $5,000 to save a life is ineffective, or with the impression that we can save a life from fatal tuberculosis with $100 if we can’t. In addition, while the article claims that it costs $5,000 to save a life from diarrheal disease, I haven’t seen any figures from GiveWell which could provide an updated view.
SHIC uses the following lines in their excerpt of “Efficient Charity”: “According to the World Bank’s analysis, immunising children for dengue fever saves one child’s life for $25,000, but we know that by donating to malaria prevention we could save about five lives for the same cost. If you want to save children, donating bed nets instead of immunising against dengue fever is the objectively right answer, the same way buying a nice car instead of a broken-down one for the same price is the right answer.”
I really like this article though and I think it does well in terms of emotional impact. It might be good to put this before Introduction to Effective Altruism to get readers hooked.
Prospecting for Gold:
Feels kind of long-winded, and at some points the gold metaphor is a slog to read through rather than an actually helpful metaphor. I feel like it’s a lot better for watching as a presentation than reading. We might be able to rewrite this to make it more succinct.
Cites data from DCP2, which has some pretty unreliable figures, and there’s a DCP3 now which we can use instead. I don’t think this point is too important though.
This does cover some important concepts like long-tailed distributions, marginal utility, and comparative advantage.
I’m not going to read/review the rest of the EA Handbook right now, but I think overall, lightly edited transcripts of talks don’t make for great reading material, and we’d want to edit them a lot more to be more succinct and easier to read.
I was a SHIC ambassador at my high school, which is fairly selective. In contrast to Jessica’s “[high schoolers] will usually just believe you and accept what you say,” I found the students at my school much more skeptical than I expected. Even some of the eighth graders asked things like: if you decrease your demand for factory-farmed products, won’t that just make them cheaper and have no net effect on supply? What about that fish farm I visited in Israel where the fish seemed to be doing pretty great? In my actual club, one person raised the issue of harms caused by farming plants, and I wasn’t able to navigate that very well. (I’m not a great presenter, fyi. Also, the Cognitive Quirks level didn’t work out very well, since for the 2-4-8 they were like “what about −3? π?”, and it turns out that they’re not actually scope-insensitive.)
Of course, it’s great to have a critically thinking audience, but it raises the risk of getting into thorny issues that you’re not fully prepared to explain well and so your presentation falls apart.
It’s by Jiwoon Hwang, and it’s also posted on his blog: http://jiwoonhwang.org/physical-punishment-of-children-the-neglected-3-6-trillion-year-problem/ . He hasn’t posted anything on the EA Forum since then. There are three usernames on the current EA Forum containing the word “Jiwoon”, but they seem to be deleted.
I like to make the following taxonomy of cause areas:
near-term humans: global poverty, mental health, child abuse?
near-term animals: factory farming, wild animals
long-term future: AI alignment, biosecurity, nuclear weapon security, alternative foods, s-risks
This was posted on the EA Forum about a year and a half ago: https://web.archive.org/web/20171105140307/http://effective-altruism.com/ea/19m/reduction_and_abolition_of_physical_punishment_of/ (not sure why non-archived link brings up “Sorry, we couldn’t find what you were looking for.” now).
I thought the “moral nonrealism, therefore egoism” part was purely satire. (I felt like the other points, besides the cultural value one, actually seemed quite serious.) I’m not really sure how moral nonrealism works, but I haven’t seen it used within EA to argue for maximizing your personal pleasure and for nothing else mattering. I think it’s very unlikely you’d be an EA if you believed that.
There are definitely a lot of figures (either empirical or subjective) which EAs disagree about, and so there’s a lot of variation in people’s beliefs of what’s most impactful to work on.
Founders Pledge did some research into effective climate change charities (see https://founderspledge.com/research/Cause Report—Climate Change.pdf), and it estimates that $100 to Coalition for Rainforest Nations averts “~857 tonnes of CO2e with a range of ~138 tonnes to ~4,600 tonnes”. For reference, apparently the average person in the US has a footprint of 16 tons, but I’m not sure if that’s CO2-equivalents or just CO2. Now, how valuable is averting a ton of CO2? The WHO had an old figure of 5,000 tons/DALY, but apparently that’s not reliable anymore (https://www.givingwhatwecan.org/report/modelling-climate-change-cost-effectiveness/#1-the-old-estimates). If anyone has a better figure, please post a comment.
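To make those figures concrete, here’s a rough back-of-envelope sketch using only the estimates quoted above (the central ~857 tonnes/$100 estimate, the 16-ton US footprint, and the WHO’s old and unreliable 5,000 tonnes/DALY conversion – all of these numbers are uncertain):

```python
# Back-of-envelope climate cost-effectiveness, using the figures quoted above.
donation = 100.0             # dollars to Coalition for Rainforest Nations
tonnes_averted = 857.0       # central estimate (range ~138 to ~4,600 tonnes CO2e)
us_footprint = 16.0          # tonnes per US person per year (CO2 vs CO2e unclear)
who_tonnes_per_daly = 5000.0 # WHO's old conversion, no longer considered reliable

cost_per_tonne = donation / tonnes_averted            # dollars per tonne averted
cost_per_us_year = cost_per_tonne * us_footprint      # offsetting one US person-year
cost_per_daly = donation / (tonnes_averted / who_tonnes_per_daly)

print(f"${cost_per_tonne:.2f} per tonne")      # ~$0.12
print(f"${cost_per_us_year:.2f} per US person-year")  # ~$1.87
print(f"${cost_per_daly:.0f} per DALY")        # ~$583
```

So on the central estimate, offsetting a US person’s annual footprint would cost under $2, and even using the discredited WHO conversion, the implied ~$583/DALY would not obviously beat GiveWell’s top charities – which is why a better tonnes-to-DALY figure matters so much here.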
Makes sense. FYI, I’m not currently interested in writing such a post, so if anyone else wants to, please do!
Doesn’t sharing personal experiences – which contributes to better guidelines about whether to pursue direct work, and counterbalances an excessive emphasis on working at an EA organization – further the objectives of EA? It’s certainly at a more meta level, but hey, meta EA is still one of the “four cause areas” and one of the EA Funds. I’m not saying it necessarily is more valuable than the winners of the prize, but I don’t think it should be disqualified on that basis.
I also don’t think we should shy away from incentivizing posts that reflect disagreements within EA or are critical of EA as it is. Shying away from them isn’t too far off from disincentivizing disagreement (something like “if you write about that topic, you have zero chance of winning the prize”), and that feels wrong on an open forum.
I’m confused by what the World Values Survey means when it says that secular-rational societies see suicide as relatively acceptable. Aside from the case of terminal illnesses that cause great suffering, saying that suicide is okay would definitely be a fringe view. My steelperson would be that it’s saying that while traditional societies see suicide as damaging to one’s own reputation and one’s family’s, secular-rational societies see it as a problem with society – but I’m not sure that’s what it’s actually saying.
I would encourage you to expand on your point “I feel that people whose attitudes fall below common Western baselines of tolerance are less deserving of wealth and prosperity.” It reads to me as something like ethnocentric or parochial, and it seems to run counter to the common EA principle that everyone is equally deserving of welfare, at least before we take into account instrumental effects. While we might want to incentivize greater tolerance, I wouldn’t phrase it as that people who are less tolerant are less deserving of prosperity.
Just donated! For others’ convenience, the link is https://go.johndelaney.com/page/content/this-is-about-america/.
I was going to link to EA Concepts: The Meat Eater Problem, as I thought it had been successfully argued that the meat eater problem was not much of an issue, but after re-reading those posts, it does seem that the meat eater problem is a reasonable concern as long as farm animals have net negative lives in expectation.
Having read much of Brian Tomasik’s work, I think the idea that wild animals have net negative lives is plausible, and I don’t think the case for habitat destruction would be ludicrous. However, that does seem to be a more extreme position than most wild animal welfare organizations are willing to commit to, and I suggest that the framework proposed here is not well-suited for answering those sorts of questions.
To clarify, are you asserting that wild rats, fish, and bugs have net negative lives, on the order of half of the suffering of a factory farmed animal? That seems like a fairly controversial point, since it suggests that, e.g., habitat destruction is a good thing wherever the damage to the ecosystem would not be catastrophic.
Although you’ve said that a score of 0 is supposed to represent uncertainty about whether the animal’s life is net positive or net negative, it doesn’t seem to me that the metrics are well-designed for that. Most of them seem best for capturing negative utility, rather than positive. For instance, when a score of “5 to 15” is assigned to a death with “quick or low pain,” I assume that doesn’t mean that the act of dying itself has positive utility, so where does the positive utility come from? It seems you’d have to implicitly weigh the suffering from death against the lifespan of the animal and its welfare over the course of its life, but it seems wrong to fold all of that into a quality-of-death metric. For instance, if we had two groups of animals that had the same scores on all of these metrics, including how painful their deaths were, but one had a much shorter lifespan than the other, then the shorter-lived group would experience much more pain in aggregate (more deaths over the same period), even though their scores under this system would be equal. (This might be captured by the death rate figure – if so, could you explain what a “10%” or “50%” death rate means?)
I’d add in a bit about browser extensions that automatically redirect you from Amazon to Amazon Smile, like Smile Always (Chrome) and Amazon Smiley (Firefox).
The impact is honestly depressingly low: over my past years of Amazon shopping, I’ve only generated $4.27 (apparently on $854 of purchases – AmazonSmile’s 0.5% donation rate).
Just a few comments on the website:
The “Feel better. Fast”, “Science-based”, and “Free & easy to use” buttons link to a “/undefined” page, which leads to a 404 error.
The “Science” link in the navigation bar scrolls down to “See how you’re doing, develop over time”, which isn’t really about science.
Overall, claiming to be “scientifically proven” without references to actual studies, together with the use of first-name-only testimonials, pattern-matches to the sample pseudoscientific websites that my Psych 101 textbook presents. If I had not read this post, I would be quite hesitant to try out the app. I think it would be helpful to have a page about the scientific support for meditation, progressive muscle relaxation, etc., and for Mind Ease itself, as you did here. It might be difficult to get quotes that are less anonymous (e.g., with full name, photo, and occupation), given the stigma surrounding anxiety, but if it’s feasible, I think it would increase the credibility.
If I were suffering intensely, it wouldn’t be comforting to me that there are other people who were just like me at one point but are now very happy – that feels like a completely different person to me. I’d rather there be someone completely happy than someone who had to undergo unnecessary suffering just to be more similar to me. Insofar as I care about personal identity, I care about whether it is a continuation of my brain, not whether it has similar experiences as me.
Also, “saving” people using this method and having “benevolent AIs [...] distribute parts of the task between each other using randomness” seems indistinguishable from randomly torturing people, and that’s very unappealing for me.
For what it’s worth, I felt a bit alienated by the other Discord, not because I don’t support far-future causes or because it was discussing the far future, but because I didn’t find the conversation interesting. I think this Discord might help me engage more with EAs, because I find the discourse more interesting, and I happen to like the way Thing of Things discusses things. I think it’s good to have a variety of groups with different cultures and conversation styles, to appeal to a broader base of people. That said, I do have some reservations about fragmenting EA along ideological lines.