In general, I’m a big fan of approaches that are optimized around Value of Information. Given EA/longtermism’s rapidly growing resources (people and $), I expect that acquiring information to make use of resources in the future is a particularly high EV use of resources today.
Congrats!
I think part of this is about EAs recalibrating what is “crazy” within the community. In general, I think the right assumption is that if you want $ to do basically anything, there’s a good chance (honestly >50%) you can get it.
If you don’t want someone to do something, it makes sense not to offer a large amount of $. On the second case, I’m a bit confused by this statement:
“the uncertainty of what the people would do was the key cause in giving a relatively small amount of money”
What do you mean here? That you were uncertain about which path was best?
Very interesting, valuable, and thorough overview!
I notice you mentioned providing grants of 30k and 16k that were or are likely to be turned down. Do you think this might have been due to the amounts of funding? Might levels of funding an order of magnitude higher have caused a change in preferences?
Given the amount of funding in longtermist EA, if a project is valuable, I wonder if amounts closer to that level might be warranted. Obviously the project only had 300k in funding, so that level of funding might not have been practical here. However, from the perspective of EA longtermist funding as a whole, routinely giving away this level of funding for projects would be practical.
I work in Democratic data analytics in the US and I agree that there’s potentially a lot of value to EAs getting involved in the partisan side rather than just the civil service side to advance EA causes. If anyone is interested in becoming more involved in US politics, I’d love to talk to them. You can shoot me a message.
Hey; I work in US politics (in Data Analytics for the Democratic Party). Would love to chat if you think it would be useful for you.
Yes. Campaigns aren’t spending much money yet because voters will mostly forget about it by the election.
Independent of the desirability of spending resources on Andrew Yang’s campaign, it’s worth mentioning that this overstates the gains to Steyer. Steyer is running ads with little competition (which makes ad effects stronger), but the reason there is little competition is that decay effects are large: voters will forget the ads and see new messaging over time. Additionally, Morning Consult shows higher support for Steyer than all other pollsters do. The polling average for Steyer in early states is considerably less favorable.
I’d be curious which initiatives CSER staff think would have the largest impact in expectation. The UNAIRO proposal in particular looks useful to me for making AI research less of an arms race and spreading values between countries, while being potentially tractable in the near term.
There are also other counterfactual matching opportunities that tend to arise around the same time, though.
Yeah, I don’t think filling the finite universe we know about is where the highest expected value is. It’s more likely some form of infinite value, since it’s not implausible that infinite value could exist. But ultimately, I agree that the implications of this are minor and our response should basically be the same as if we lived in a finite universe (keep humanity alive, move values towards total hedonic utilitarianism, and build safe AI).
I’m not arguing for making false arguments; I’m just saying that if you have a point you can make around racial bias, you should make that argument, even if it’s not an important point for EAs, because it is an important one for the audience.
I think this is rather weak and mostly arguing against a straw man. I don’t see Effective Altruists arguing that you should refrain from investments in your human capital. It makes sense to cut down on consumption (e.g., eat out less). But I don’t know of any EAs arguing that you should refrain from, say, buying books.
In general, I’m glad that it was included because it adds legitimacy to the overall argument with Vox’s center-left audience.
I found this really helpful; it gave me what I expect to be actionable information I can use in my own work (I work in Democratic politics). Much appreciated!
I agree that limitations on RCTs are a reason to devalue them relative to other methodologies. They still add value over our priors, but I think the best use cases for RCTs are when they’re cheap and can be done at scale (e.g., in the context of online surveys) or when you are randomizing an expensive intervention that would be provided anyway, such that the relative cost of the RCT is low.
When the costs of RCTs are large, I think there’s reason to favor other methodologies, such as regression discontinuity designs, which have fared quite well compared to RCTs (https://onlinelibrary.wiley.com/doi/abs/10.1002/pam.22051).
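To make the comparison concrete, here’s a minimal sketch of a sharp regression discontinuity estimate on simulated data (all variable names and parameter values below are illustrative assumptions, not taken from the linked paper): units whose running variable crosses a cutoff receive the intervention, and local linear fits on each side of the cutoff estimate the effect at the boundary.

```python
# Sketch of a sharp regression discontinuity design (RDD) on simulated
# data. Everything here (cutoff, bandwidth, effect size) is an
# illustrative assumption, not an estimate from any real study.
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, bandwidth = 2000, 0.0, 0.5

running = rng.uniform(-1, 1, n)      # e.g. a vote margin or test score
treated = running >= cutoff          # sharp assignment rule at the cutoff
true_effect = 0.3                    # assumed effect, for the simulation
outcome = 0.5 * running + true_effect * treated + rng.normal(0, 0.2, n)

# Restrict to a window around the cutoff and fit a line on each side.
window = np.abs(running - cutoff) <= bandwidth
left = window & ~treated
right = window & treated
b_left = np.polyfit(running[left], outcome[left], 1)
b_right = np.polyfit(running[right], outcome[right], 1)

# The RDD estimate is the jump between the two fits at the cutoff.
rdd_estimate = np.polyval(b_right, cutoff) - np.polyval(b_left, cutoff)
print(f"Estimated effect at cutoff: {rdd_estimate:.3f} (true: {true_effect})")
```

The bandwidth choice is the main design decision: a narrower window reduces bias from curvature in the outcome but uses fewer observations, which is part of why RDD only identifies the effect for units near the cutoff.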
FYI, I’m pretty busy over the next few days, but I’d like to get back to this conversation at some point. If I do, it may be a while, though.
To your first comment, I disagree. I think it’s the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain?
I think there is more total pain. I’m not counting the # of headaches. I’m talking about the total amount of pain.
Can you define S1?
We may not converge, as these discussions tend to go. I’m fine calling it here.
I think we have to get closer to defining a subject of experience (S1); I would need this to go forward. But here’s my position on the issue: I think moral personhood doesn’t make sense as a binary concept. The mind produced by a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury. The matter in the brain is also different over time (ship of Theseus). I don’t see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn’t be. The conscious experiences differ between different times and between different brains; I see this as a matter of degree of similarity.
“Of course, it is possible that within the cow’s physical system’s life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow’s physical system are felt by a single subject.”
That’s what I’m interested in a definition of. What makes it a “single subject”? How is this a binary term?
I am making a greater-than/less-than comparison. That comparison is with pain, which results from the neural chemical reactions. There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don’t see any reason to treat this differently than the underlying chemical reactions.
No problem on the caps.
“My view is that—for the most part—people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it.”
This seems unlikely to me. I think utilitarianism broadly encourages pro-social/cooperative behaviors, especially because utilitarianism encourages caring about collective success rather than individual success. A positive community and trust help achieve these outcomes. Under universalist moralities, it’s harder for defection to make sense.
Broadly, I think that worries that utilitarianism/consequentialism leads to negative outcomes are often self-defeating, because the utilitarians/consequentialists see the negative outcomes themselves. If you went around killing people for their organs, the consequences would obviously be negative; it’s the same for going around lying or being an asshole to people all the time.