Very interesting, valuable, and thorough overview!
I notice you mentioned providing grants of 30k and 16k that were or are likely to be turned down. Do you think this might have been due to the amounts of funding? Might levels of funding an order of magnitude higher have caused a change in preferences?
Given the amount of funding in longtermist EA, if a project is valuable, I wonder if amounts closer to that level might be warranted. Obviously the project only had 300k in funding, so that level of funding might not have been practical here. However, from the perspective of EA longtermist funding as a whole, routinely giving away this level of funding for projects would be practical.
Michael_S
An Effective Altruist Message Test
I think this is rather weak and mostly arguing against a straw man. I don't see Effective Altruists arguing that you should refrain from investing in your human capital. It makes sense to cut down on consumption (e.g. eating out less), but I don't know of any EAs arguing that you should refrain from, say, buying books.
I understand the importance of reducing burnout, but I wonder if, as a movement, we aren't placing too much emphasis on reducing burnout compared to pushing ourselves to do more. Anecdotally, I see more EA articles about self-care than about pushing oneself harder. I can see the publicity benefits of emphasizing burnout reduction when attracting new people to the movement, but within the community, my guess is that the EV of pushing oneself to do more is positive for most people.
As an example of how we may be doing too little on average, only around 23% of those who revealed their donations in the 2014 EA Survey donated at least 10% of their income. Obviously there are more ways to be an EA than donating, and many of these individuals are students, but it does suggest that many people could push themselves a lot harder. I would be surprised if most people needed more than 90% of their salaries for adequate self-care. We need to strike a balance between self-care and pushing ourselves harder, but my suspicion is that we should move in the latter direction. I would love to find more concrete evidence either way, though.
In general, I’m a big fan of approaches that are optimized around Value of Information. Given EA/longtermism’s rapidly growing resources (people and $), I expect that acquiring information to make use of resources in the future is a particularly high EV use of resources today.
Congrats!
I think part of this is about EAs recalibrating what is “crazy” within the community. In general, I think the right assumption is that if you want $ to do basically anything, there’s a good chance (honestly >50%) you can get it.
Hey; I work in US politics (in Data Analytics for the Democratic Party). Would love to chat if you think it would be useful for you.
I'm not arguing for making false arguments; I'm just saying that if you have a point you can make about racial bias, you should make that argument, even if it's not an important point for EAs, because it is an important one for the audience.
On this topic, I similarly do still believe there’s a higher likelihood of creating hedonium; I just have more skepticism about it than I think is often assumed by EAs.
This is the main reason I think the far future is high EV. I think we should be focusing on p(Hedonium) and p(Dolorium) more than anything else. I'm skeptical that, from a hedonistic utilitarian perspective, byproducts of civilization could come close to matching the expected value of deliberately tiling the universe (potentially multiverse) with consciousness optimized for pleasure or pain. If p(H) > p(D), the future of humanity is very likely positive EV.
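To make the structure of this argument explicit, here is a back-of-the-envelope sketch. All of the numbers are invented purely for illustration; the point is only that if the tiling outcomes dwarf ordinary byproducts in magnitude, then p(H) > p(D) is what drives the sign of the EV.

```python
# Back-of-the-envelope sketch of the argument above.
# All numbers are hypothetical, chosen only to illustrate the structure.

p_hedonium = 0.02   # assumed probability of pleasure-optimized tiling
p_dolorium = 0.01   # assumed probability of pain-optimized tiling
magnitude = 1e9     # assumed |value| of either tiling outcome, arbitrary units
byproduct_ev = 1e6  # assumed EV of ordinary civilizational byproducts

ev_future = p_hedonium * magnitude - p_dolorium * magnitude + byproduct_ev
print(ev_future)  # the tiling terms dominate the byproduct term
```

With these assumptions the byproduct term barely moves the total, so the comparison between p(H) and p(D) determines whether the future is positive EV.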
If you don't want someone to do something, it makes sense not to offer a large amount of money. For the second case, I'm a bit confused by this statement:
“the uncertainty of what the people would do was the key cause in giving a relatively small amount of money”
What do you mean here? That you were uncertain in which path was best?
Choice situation 3: We can either save Al and four others each from a minor headache, or save Emma from one major headache. Here, I assume you would say that we should save Emma from the major headache.
I think you’re making a mistaken assumption here about your readers. Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people. I expect the majority of EAs would as well.
Thanks for the write up. I think you make a compelling case that this is more effective than canvassing, which can be over 1000 dollars for votes at the margin in a competitive election like 2016. I do think there are a few ways your estimate may be an overestimate though.
Of those who claimed they would follow through with vote trading, some may not have. You mention that there wouldn't have been much value to defecting. However, much of the value of a vote for an individual comes from tribal loyalties rather than from affecting the outcome. That's why turnout is higher in safe states in a presidential election than in midterm elections, even when the midterm election is competitive. Some individuals may still have defected because of this.
Secondly, many of the 3rd party folks who made the trade could have voted for Clinton anyway. People who sign up for these sites are necessarily strategic thinkers. If they wanted more total votes for Stein/Johnson but recognized that a vote for Clinton was more important in a swing state, they might have signed up for the site to gain the Stein/Johnson voter, but planned to vote for Clinton even if they didn't get a match. Additionally, even if they were acting in good faith when they signed up, they may have changed their minds as the election approached. 3rd parties are historically overestimated in polling compared to the election results, and 2016 was no exception: http://www.realclearpolitics.com/epolls/2016/president/us/general_election_trump_vs_clinton_vs_johnson_vs_stein-5952.html.
I don’t think these problems are enough to reduce the value by an order of magnitude, but it is worth keeping in mind.
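To make that adjustment concrete, here is a hypothetical discounting of a naive cost-per-vote estimate. The naive cost, the follow-through rate, and the counterfactual rate are all invented figures, used only to show how the two caveats above compound.

```python
# Hypothetical adjustment of a naive vote-trading cost-per-vote estimate,
# applying the two discounts discussed above. All figures are invented.

naive_cost_per_vote = 100.0   # assumed naive cost per matched swing-state vote
follow_through_rate = 0.85    # assumed share of traders who didn't defect
counterfactual_rate = 0.70    # assumed share who wouldn't have voted Clinton anyway

effective_votes_per_match = follow_through_rate * counterfactual_rate
adjusted_cost_per_vote = naive_cost_per_vote / effective_votes_per_match
print(round(adjusted_cost_per_vote, 2))  # ~168: inflated, but well under 10x the naive figure
```

Even with fairly pessimistic discounts, the adjusted cost stays well under an order of magnitude above the naive estimate, consistent with the claim above.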
Additionally, while vote trading may be high EV now, I am skeptical that it is easy to scale. It’s even more difficult to apply outside of presidential elections, so, unlike other potential political interventions, it will mostly be confined to every 4 years in one race. Furthermore, the individuals who signed up now may be lower cost to acquire than additional potential third party traders. They are likely substantially more strategic than the full population of 3rd party voters; in many years, the full population isn’t that large to begin with. The cost per additional vote may be larger than your current estimates.
Nevertheless, I agree that right now it’s probably more valuable than traditional canvassing and I’m glad people are putting resources into it.
I work in Democratic data analytics in the US and I agree that there’s potentially a lot of value to EAs getting involved in the partisan side rather than just the civil service side to advance EA causes. If anyone is interested in becoming more involved in US politics, I’d love to talk to them. You can shoot me a message.
Independent of the desirability of spending resources on Andrew Yang’s campaign, it’s worth mentioning that this overstates the gains to Steyer. Steyer is running ads with little competition (which makes ad effects stronger), but the reason there is little competition is because decay effects are large; voters will forget about the ads and see new messaging over time. Additionally, Morning Consult shows higher support than all other pollsters. The average for Steyer in early states is considerably less favorable.
It might make a lot of sense to test the risk vs. accidents framing on the next survey of AI researchers.
Yes. People aren’t spending much money yet because people will mostly forget about it by the election.
Hey; I made some comments on this on the doc, but I thought it was worth bringing them to the main thread and expanding.
First of all, I'm really happy to see other EAs looking at ballot measures. They're a potentially very high-EV method of passing policy and raising funding. They're particularly high value per dollar when spending on advertising is limited or nonexistent, since the increased probability of passage from getting a relatively popular measure on the ballot is far greater than the increased probability from spending the same amount advertising for it.
Also, am I correct in interpreting that your model assumes a 100% chance of passage conditional on good polling? Polling can help, but ballot measure polling has a lot of error (in both directions), so even a measure that polls well is hardly a guarantee of passage (http://themonkeycage.org/2011/10/when-can-you-trust-polling-about-ballot-measures/).
Finally, in your EV estimates, you seem to focus on the individual treatment cost of the intervention, which overwhelms the cost of the ballot measure. I don't think this gets at the right question when it comes to running a ballot measure. The gains from the ballot measure should be the estimated sum of the utility gains from people being able to purchase the drugs, multiplied by the probability of passage; the costs should be how much it would cost to run the campaign. On the doc, you made the point that GiveWell doesn't include leverage on other funding in its estimates, but when it comes to ballot measures, leverage is exactly what you're trying to produce, so I think an estimate is important.
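The comparison I have in mind can be sketched as follows. Every number here is hypothetical (the per-person gain, population, passage probability, and campaign cost are all invented), but it shows the shape of the estimate: expected gains scaled by passage probability, weighed against campaign cost.

```python
# Sketch of the ballot-measure EV comparison suggested above:
# (summed utility gains to beneficiaries) x (probability of passage) vs. campaign cost.
# All numbers are hypothetical, for illustration only.

utility_gain_per_person = 10.0   # assumed utility units per beneficiary
num_beneficiaries = 500_000      # assumed number of people affected
p_passage = 0.6                  # assumed probability the measure passes
campaign_cost = 1_000_000.0      # assumed campaign cost, same units

expected_gain = utility_gain_per_person * num_beneficiaries * p_passage
net_ev = expected_gain - campaign_cost
print(net_ev)  # positive here: the campaign would be worth running under these assumptions
```

Under these made-up numbers the expected gain (3,000,000) exceeds the campaign cost, so the measure would be worth running; the key inputs to pin down empirically are the passage probability and the summed beneficiary gains.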
I agree that the modal outcome of a Trump presidency is that he changes little and the Democrats come out stronger at the end of his presidency than they entered. However, I still think it would have been better that Clinton had won (even if we assume the same congress).
The most important reason is tail risk. As others have commented, the risk of nuclear war may be greater under Trump than it would have been under Clinton. So far, he seems to be pursuing a more conventional foreign policy than I feared, but I still believe the risk is higher than with Clinton. Additionally, I’m worried that the Trump presidency is increasing the salience of Russian hostility among Democrats and could increase the chance of conflict in the future even when a Democrat takes office.
Another area of concern is pandemics. Trump has expressed anti-vaccine sentiments and submitted budgets that cut pandemic preparedness. Furthermore, the overall level of incompetence in his administration and among many of his appointees leaves me worried that the US response to a major pandemic could be diminished.
None of the above is likely to happen, but I'd much rather play it safe with a Clinton presidency. Additionally, even the modal outcome of a Trump presidency isn't all good for liberals. Most notably, he'll almost certainly be able to move at least one conservative onto the Supreme Court and has a high chance of moving at least one more. If Trump replaces a liberal with a conservative on the court, the court will move to the right, and it will likely be quite a while until Democrats retake it. With a Clinton presidency, liberals would have been able to achieve a majority on the court that would itself likely have lasted a long time.
I don't reject the argument that the GWWC pledge may not make sense for every single person. There are always exceptions. But I think that group is quite small, and it's much more beneficial for us as a community to try to get as many people to pledge as possible.
In addition to what it might do for yourself, signing the pledge allows you to influence others. The more people sign the pledge and the more public they are, the more we spread giving a large portion of your income to effective charities as a cultural institution. I think that’s very valuable in itself.
Additionally, some of the items you listed as conflicting with donations, e.g. wanting a comfortable retirement, seem like items for which donation should take the higher priority from a utilitarian standpoint. I understand that's very difficult for people, and many EAs will not be able to do this. That's reality. However, if the pledge gets you to cut back on these luxuries in favor of utilitarian actions, even if only because you feel obligated to keep the pledge, I think that's a good thing. If you face a conundrum like the overjustification effect, it may be more productive to try to rethink 5) than to rethink 2).
“My view is that—for the most part—people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it.”
This seems unlikely to me. I think utilitarianism broadly encourages pro-social/cooperative behaviors, especially because utilitarianism encourages caring about collective success rather than individual success. Having a positive community and trust helps achieve these outcomes. If you have universalist moralities, it’s harder for defection to make sense.
Broadly, I think that worries that utilitarianism/consequentialism will lead to negative outcomes are often self-defeating, because the utilitarians/consequentialists see the negative outcomes themselves. If you went around killing people for their organs, the consequences would obviously be negative; it's the same for going around lying or being an asshole to people all the time.