I’d be in favor of creating subsections for more clarity. Presumably different cause areas could be subsections, there could be a philosophy subsection, and so on. And I think an “Introduce yourself” section could be good, as it would make the forum more approachable.
Daniel_Eth
Yeah, I’m potentially interested but would be curious what direction you’re thinking of going here.
I have no personal knowledge of these specific charities, nor strong opinions on the effectiveness of criminal justice reform. I do think, however, that there are good reasons to consider political issues in EA circles. Governments are huge, and the effects of their actions can be gigantic. Even a minor improvement in the functioning of the US government can have an impact far greater than what almost any other organization can accomplish.
In 2016, all of my donations were to Hillary’s election campaign (my logic can mostly be found here: http://thinkingofutils.com/2016/11/value-one-vote/). The GWWC pledge (which I’ve taken) states that giving should be “to the organisations that you think can do the most good with it.” Had I given anywhere else instead, I would have been breaking the pledge, since I thought her campaign was the most effective use of my money on the margin.
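The marginal-value reasoning behind this kind of donation decision can be sketched as a back-of-the-envelope calculation: the chance one extra vote swings the election, times the difference in value between outcomes, divided by the cost of generating that vote. All numbers below are made up for illustration; they are not taken from the linked post.

```python
# Hypothetical "value of one vote" sketch. Every number here is an
# illustrative assumption, not an actual estimate from the post above.

p_decisive = 1e-7        # assumed chance a single marginal vote decides the election
value_gap = 1e12         # assumed difference in total value between outcomes ($)
cost_per_vote = 100.0    # assumed campaign spend needed to generate one extra vote

# Expected value generated per dollar donated to the campaign
ev_per_dollar = p_decisive * value_gap / cost_per_vote
print(ev_per_dollar)  # → 1000.0
```

Under these (made-up) assumptions, each donated dollar would produce $1,000 of expected value, which is the kind of comparison the pledge logic above turns on.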
Thanks for this, I found it useful. In addition to funding, I think things like the Partnership on AI (https://www.partnershiponai.org), which includes Facebook, Google, and Apple, show that industry is taking this more seriously.
Agreed that we should be doing more exploration. I think one reason there hasn’t been as much is it’s a harder sell. “Give me money that I can use to save lives—I’ve already found a method that works” is a more convincing plea than “give me money so I can sit around and think of an altruistic way to spend other people’s money—I swear I’ll work effectively at this.” Of course, big established organizations like OPP can do this, but I think the hard sell creates a barrier to entry.
Yeah. Probably the fact that it carries an uncertain risk of failure, and that the level of success is also unknowable, makes it more off-putting. Especially since EA has a quantitative bent.
This. As a meat-eating EA who personally does think animal suffering is a big deal, I’ve found the attitude from some animal rights EAs to be quite annoying. I personally believe that the diet I eat is A) healthier than if I were vegan and B) allows me to be more focused and productive than if I were vegan, allowing me to do more good overall. I’m more than happy to debate that with anyone who disagrees (and most EAs who are vegan are civil and respect this view), but I have encountered some EAs who refuse to believe that there’s any possibility of either A) or B) being true, which feels quite dismissive.
Contrast that attitude to what happened recently at a Los Angeles EA meetup where we went for dinner. Before ordering, I asked around if anyone was vegan since if there was anyone who was, I didn’t want to eat meat in front of them and offend them. The person next to me said he was vegan, but that if I wanted meat I should order it since “we’re all adults and we want the community to be as inclusive as it can.” I decided to get a vegan dish anyway, but having him say that made me feel more welcome.
Perhaps one implication of this is that it’s better to target movement-growing efforts at students (particularly undergrads), since they’re less likely to have already made up their minds?
Another possibility is that most people in EA are still pretty young, so they might not feel like they’re really in a position to mentor anyone.
Where are all these crazy EA parties that I keep reading about? The only EA parties I’ve heard of were at EA Global.
Yes, that was my general impression of EA global. I feel like most of the people who do get upset about meat eaters in EA are only nominally in EA, and largely interact with the community via Facebook.
I like the idea!
Yeah, I can see how this could come off poorly. I’d recommend using the word “focus” instead (i.e. “I focus mostly on X-risks”).
I’m assuming people who donated to the fund would get periodic notifications about where the money’s being used.
Obviously different people have different motivations for their donations. I disagree that it’s a straw man, though, because I wasn’t trying to misrepresent any views, and I think risk aversion actually is one of the main reasons that people tend to support causes such as AMF that help people “one at a time” over causes that are larger scale but less likely to succeed. MIRI’s chance of success wasn’t central to my argument—if you think it has basically zero net positive value, then substitute in whatever cause you think actually is positive (in-vitro meat research, CRISPR research, politics, etc.). Perhaps you’ve already done that and think that AMF still has higher expected value, in which case I would say you’re not risk averse (per se), but then I’d also think that you’re in the minority.
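The distinction between risk neutrality and risk aversion here can be made concrete with a toy comparison: a risk-neutral donor ranks causes by expected value, while a risk-averse donor applies a concave utility to outcomes first. The charities, payoffs, and probabilities below are invented purely for illustration; the log utility is just one standard way to model risk aversion.

```python
# Toy sketch (illustrative numbers only): a "sure thing" charity vs. a
# long-shot, high-payoff cause. Payoffs are in arbitrary "good done" units.
import math

sure_thing = [(1.0, 3.0)]                      # (probability, payoff): 3 units for sure
long_shot = [(0.999, 0.0), (0.001, 10000.0)]   # tiny chance of a huge payoff

def expected_value(lottery):
    """Risk-neutral ranking: probability-weighted average payoff."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u=math.log1p):
    """Risk-averse ranking: apply a concave utility (here log(1+x)) to payoffs."""
    return sum(p * u(x) for p, x in lottery)

print(expected_value(sure_thing), expected_value(long_shot))      # → 3.0 10.0
print(expected_utility(sure_thing) > expected_utility(long_shot))  # → True
```

With these made-up numbers, the long shot wins on expected value (10 vs. 3), but the sure thing wins once risk-averse utility is applied, which matches the pattern described above of donors preferring “one at a time” causes.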
Yeah, I agree it doesn’t just apply to where to donate, but also how to get money to donate, founding non-profits, etc. Which, taken to its logical conclusion, means maybe I should angle to run for president?
Thanks for the link—hopefully 80000hours is able to convince some EAs to go into politics.
I think it’s true that many outsource their thinking to GW, but I think there could still be risk aversion in the thought process. Many of these people have also been exposed to arguments for higher-risk, higher-reward causes such as X-risk reduction or funding in-vitro meat research, and I think a common thought process is “I’d prefer to go with the safer and more established causes that GW recommends.” Even if they haven’t explicitly done the EV calculation themselves, qualitatively similar thought processes may still occur.
My 2 cents, as a scientist, currently in a PhD program: Scientists will largely resist this. They don’t want all their data to be out in the open, mostly from fear that they made a mistake that will be picked up on. “Imposter syndrome” is very common in science (especially for new scientists, who run most of the actual experiments—more established scientists spend more time writing grants for more funding). It’s also just a pain in the ass to gather all your data and format it, etc.
That said, I think this would be a very good thing (for scientific progress, not for scientists themselves). In particular, I think it would be very useful for building off other work. There have been tons of times where I’ve wanted to know exactly how some group gathered some data, and their paper didn’t quite specify.
Since this seems like something very good that vested interests will likely oppose, I agree it is a great cause to push for—it likely won’t happen on its own but if we can build the proper incentive structures then we could, in theory, alter how the game is played.
Regarding political activism, I’m not sure that the value is less when there’s not a Presidential election. Congress and state legislature races are way more neglected than the presidency, and fewer people vote in midterm elections. Activism might be more impactful on the margin there.
2018 congressional races will likely determine how much Trump can fulfill his agenda in his third and fourth years, and state legislature races will have a large effect on redistricting.