I’m a theoretical CS grad student at Columbia specializing in mechanism design. I write a blog called Unexpected Values which you can find here: https://ericneyman.wordpress.com/. My academic website can be found here: https://sites.google.com/view/ericneyman/.
Hi Karthik,
Your comment inspired me to write my own quick take, which is here. Quoting the first paragraph as a preview:
I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo’s recent post arguing that some of GiveWell’s grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I’ll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.
I decided to spin off a quick take rather than replying here, because I think it would be interesting to have a discussion about non-dogmatism in a context that’s somewhat separated from this particular context, but I wanted to mention the quick take as a reply to your comment, since it’s relevant.
I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo’s recent post arguing that some of GiveWell’s grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I’ll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]
Let’s say that a moral decision process is dogmatic if it’s completely insensitive to the numbers on either side of the trade-off. Non-dogmatism rejects dogmatic moral decision processes.
A central example of a dogmatic belief is: “Making a single human happy is more morally valuable than making any number of chickens happy.” The corresponding moral decision process would be, given a choice to spend money on making a human happy or making chickens happy, spending the money on the human no matter what the number of chickens made happy is. Non-dogmatism rejects this decision-making process on the basis that it is dogmatic.
(Caveat: this seems fine for entities that are totally outside one’s moral circle of concern. For instance, I’m intuitively fine with a decision-making process that spends money on making a human happy instead of spending money on making sure that a pile of rocks doesn’t get trampled on, no matter the size of the pile of rocks. So maybe non-dogmatism says that so long as two entities are in your moral circle of concern—so long as you assign nonzero weight to them—there ought to exist numbers, at least in theory, for which either side of a moral trade-off could be better.)
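To make the contrast concrete, here's a minimal sketch in Python. The moral weights are purely illustrative placeholders (not numbers I endorse), and note that non-dogmatism doesn't actually require this multiplicative form; per the footnote, it only requires that some number of chickens could in principle tip the scales:

```python
# Minimal sketch of dogmatic vs. non-dogmatic decision rules.
# HUMAN_WEIGHT and CHICKEN_WEIGHT are purely illustrative placeholders.

HUMAN_WEIGHT = 1.0
CHICKEN_WEIGHT = 0.001  # nonzero, because chickens are inside the moral circle

def dogmatic_choice(n_chickens: int) -> str:
    """Always favors the human: completely insensitive to n_chickens."""
    return "human"

def non_dogmatic_choice(n_chickens: int) -> str:
    """Weighs the numbers, so a large enough n_chickens can tip the scales."""
    if n_chickens * CHICKEN_WEIGHT > HUMAN_WEIGHT:
        return "chickens"
    return "human"

print(non_dogmatic_choice(100))     # -> human
print(non_dogmatic_choice(10_000))  # -> chickens: the numbers entered the picture
```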
And so when I see comments saying things like “I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative”, I’m like… really? There are no empirical facts that could possibly cause the trade-off to go the other way?
Rejecting dogmatic beliefs requires more work. Rather than deciding that one side of a trade-off is better than the other no matter the underlying facts, you actually have to examine the facts and do the math. But, like, the real world is messy and complicated, and sometimes you just have to do the math if you want to figure out the right answer.
[1] Per the Wikipedia article on scope neglect, scope sensitivity would mean actually doing multiplication: making 100 people happy is 100 times better than making 1 person happy. I’m not fully sold on scope sensitivity; I feel much more strongly about non-dogmatism, which means that the numbers have to at least enter the picture, even if not multiplicatively.
I haven’t looked at your math, but I actually agree, in the sense that I also got about 1 in 1 million when doing the estimate again a week before the election!
I think my 1 in 3 million estimate was about right at the time that I made it. The information that we gained between then and 1 week before the election was that the election remained close, and that Pennsylvania remained the top candidate for the tipping point state.
Could you say more about “practically possible”? What steps do you think one could have taken to have reached, say, a 70% credence?
Oh cool, Scott Alexander just said almost exactly what I wanted to say about your #2 in his latest blog post: https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still
I don’t have time to write a detailed response now (might later), but I wanted to flag that I either disagree or “agree denotatively but object connotatively” with most of these. I disagree most strongly with #3: the polls were quite good this year. National and swing-state polling averages were off by only about 1% on Trump’s vote share, or equivalently 2% on the margin of victory (each point of vote share that moves from one candidate to the other shifts the margin by two points). This means that polls provided a really large amount of information.
(I do think that Selzer’s polls in particular are overrated, and I will try to articulate that case more carefully if I get around to a longer response.)
Looks like it’ll be about 250,000 votes.
I just want to register that, because the election continues to look extremely close, I now think the probability that the election is decided by fewer than 100,000 votes is more like 60%.
I wanted to highlight one particular U.S. House race that Matt Yglesias mentions:
Amish Shah (AZ-01): A former state legislator, Amish Shah won a crowded primary in July. He faces Rep. David Schweikert, a Republican who supported Trump’s effort to overturn the 2020 presidential election. Primaries are costly, and in Shah’s pre-primary filing, he reported just $216,508.02 cash on hand compared to $1,548,760.87 for Schweikert.
In addition to running in a swing district, Amish Shah is an advocate for animal rights. See my quick take about him here.
Yeah, it was intended to be a crude order-of-magnitude estimate. See my response to essentially the same objection here.
Thanks for those thoughts! Upvoted and also disagree-voted. Here’s a slightly more thorough sketch of my reasoning in the “How close should we expect 2024 to be” section (which is the one we’re disagreeing on):
I suggest a normal distribution with mean 0 and standard deviation 4-5% as a model of election margins in the tipping-point state. If we take 4% as the standard deviation, then the probability of any given election being within 1% is about 20%, and the probability of at least 3 out of 6 elections being within 1% is about 10%, which is pretty high (in my mind, not nearly low enough to reject the hypothesis that this normal distribution model is basically right). If we take 5% as the standard deviation, then that probability drops from about 10% to about 5.5%.
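For anyone who wants to double-check those numbers, here's the arithmetic as a short Python sketch (my own, using a normal distribution for the tipping-point margin and a binomial distribution for how many of 6 elections land within 1%):

```python
from math import comb, erf, sqrt

def p_within(margin_pct: float, sd_pct: float) -> float:
    """P(|X| < margin) for X ~ Normal(0, sd), via the error function."""
    return erf(margin_pct / (sd_pct * sqrt(2)))

def p_at_least(k: int, n: int, p: float) -> float:
    """P(at least k successes out of n) for a Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for sd in (4.0, 5.0):
    p = p_within(1.0, sd)
    print(f"sd={sd}%: P(within 1%) = {p:.1%}, "
          f"P(>=3 of 6 elections within 1%) = {p_at_least(3, 6, p):.1%}")
# sd=4.0%: P(within 1%) ≈ 19.7%, P(>=3 of 6) ≈ 9.9%
# sd=5.0%: P(within 1%) ≈ 15.9%, P(>=3 of 6) ≈ 5.5%
```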
I think that any argument that actually elections are eerily close needs to do one of the following:
Say that there was something special about 2008 and 2012 that made them fall outside of the reference class of close elections. I.e. there’s some special ingredient that can make elections eerily close and it wasn’t present in 2008-2012.
I’m skeptical of this because it introduces too many epicycles.
Say that actually elections are eerily close (maybe standard deviation 2-3% rather than 4-5%) and 2008-2012 were big, unlikely outliers.
I’m skeptical of this because 2008 would be a quite unlikely outlier (and 2012 would also be reasonably unlikely).
Say that the nature of U.S. politics changed in 2016 and elections are now close, whereas before they weren’t.
I think this is the most plausible of the three. However, note that the close margins in 2000 and 2004 are not evidence in favor of this hypothesis. I’m tempted to reject this hypothesis on the basis of only having two datapoints in its favor.
(Also, just a side note, but the fact that 2000 was 99.99th percentile is definitely just a coincidence. There’s no plausible mechanism pushing it to be that close as opposed to, say, 95th percentile. I actually think the most plausible mechanism is that we’re living in a simulation!)
Yeah I agree; I think my analysis there is very crude. The purpose was to establish an order-of-magnitude estimate based on a really simple model.
I think readers should feel free to ignore that part of the post. As I say in the last paragraph:
So my advice: if you’re deciding whether to donate to efforts to get Harris elected, plug my “1 in 3 million” estimate into your own calculation—the one where you also plug in your beliefs about what’s good for the world—and see where the math takes you.
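For concreteness, here's a minimal sketch of that plug-in step; everything except the "1 in 3 million" figure is a hypothetical placeholder standing in for your own beliefs:

```python
# A minimal sketch of the plug-in calculation. All inputs except
# p_decisive_per_vote are hypothetical placeholders for your own beliefs.

p_decisive_per_vote = 1 / 3_000_000  # the post's "1 in 3 million" estimate
cost_per_marginal_vote = 1_000       # hypothetical: campaign cost ($) per vote
value_of_preferred_outcome = 1e11    # hypothetical: your dollar-equivalent
                                     # value of the better candidate winning

ev_per_dollar = (p_decisive_per_vote * value_of_preferred_outcome
                 / cost_per_marginal_vote)
print(f"Expected value per dollar donated: ${ev_per_dollar:.2f}")
# ≈ $33.33 per dollar under these made-up inputs
```

Different placeholder inputs can change the sign and magnitude a lot, which is the point: the probability estimate is one input to your calculation, not the whole calculation.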
The value of a vote in the 2024 presidential election
The page you linked is about candidates for the Arizona State House. Amish Shah is running for the U.S. House of Representatives. There are still campaign finance limits, though: $3,300 per candidate per election, where the primary and the general election count separately, so an individual can give up to $6,600 in total (see here).
Amish Shah is a Democratic politician who’s running for Congress in Arizona. He appears to be a strong supporter of animal rights (see here).
He just won his primary election, and Cook Political Report rates the seat he’s running for (AZ-01) as a tossup. My subjective probability that he wins the seat is 50% (Edit: now 30%). I want him to win primarily because of his positions on animal rights, and secondarily because I want Democrats to control the House of Representatives.
You can donate to him here.
Eric Neyman’s Quick takes
It looks like Amish Shah will probably (barely) win the primary!
(This comment is mostly cross-posted from Nuño’s blog.)
In “Unflattering aspects of Effective Altruism”, you write:
Third, I feel that EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way. If I want to do something cool and risky on my own, I have to beware of the “unilateralist curse” and “build consensus”. But if Open Philanthropy donates $30M to OpenAI, pulls a not-so-well-understood policy advocacy lever that contributed to the US overshooting inflation in 2021, funds Anthropic while Anthropic’s President and the CEO of Open Philanthropy were married, and romantic relationships are common between Open Philanthropy officers and grantees, that is ¿an exercise in good judgment? ¿a good ex-ante bet? ¿assortative mating? ¿presumably none of my business?
The claim that Open Philanthropy is hypocritical re: the unilateralist’s curse doesn’t quite make sense to me. To explain why, consider the following two scenarios.
Scenario 1: you and 999 other smart, thoughtful people have a button. You know there are 1,000 people with such a button. If anyone presses the button, all mosquitoes will disappear.
Scenario 2: you and you alone have a button. You know that you’re the only person with such a button. If you press the button, all mosquitoes will disappear.
The unilateralist’s curse applies to Scenario 1 but *not* Scenario 2. That’s because, in Scenario 1, your estimate of the counterfactual impact of pressing the button should be your estimate of the expected utility of all mosquitoes disappearing, *conditioned on no one else pressing the button*. In Scenario 2, where no one else has the button, your estimate of the counterfactual impact of pressing the button should be your estimate of the (unconditional) expected utility of all mosquitoes disappearing.
So, at least the way I understand the term, the unilateralist’s curse refers to the fact that taking a unilateral action is worse than it naively appears, *if other people also have the option of taking the unilateral action*.
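Here's a toy simulation of that selection effect (my own sketch, with made-up parameters): each agent gets a noisy estimate of the button's true value and presses if their estimate is positive. The more agents there are, the more the estimate held by whoever presses overstates the truth:

```python
import random

def simulate(n_agents: int, true_value: float = -1.0,
             noise_sd: float = 2.0, trials: int = 100_000) -> float:
    """Average estimate held by the most optimistic agent, among trials
    where at least one agent's noisy estimate is positive (i.e., where
    someone would unilaterally press the button)."""
    pressed_estimates = []
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_agents)]
        top = max(estimates)
        if top > 0:
            pressed_estimates.append(top)
    return sum(pressed_estimates) / len(pressed_estimates)

random.seed(0)
for n in (1, 10, 1000):
    print(f"{n} agents: presser's average estimate = {simulate(n):+.2f} "
          f"(true value = -1.0)")
# With 1 agent (Scenario 2), pressing is rarer and the optimism bias is modest;
# with 1000 agents (Scenario 1), someone almost always presses, and that
# person's estimate sits far above the true value.
```

The gap between the presser's estimate and the true value is the curse, and it only bites when many other people could also have acted.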
This relates to Open Philanthropy because, at the time of buying the OpenAI board seat, Dustin was one of the only billionaires approaching philanthropy with an EA mindset (maybe the only?). So he was sort of the only one with the “button” of having this option, in the sense of having considered the option and having the money to pay for it. So for him it just made sense to evaluate whether or not this action was net positive in expectation.
Now consider the case of an EA who is considering launching an organization with a potentially large negative downside, where the EA doesn’t have some truly special resource or ability. (E.g., AI advocacy with inflammatory tactics—think DxE for AI.) Many people could have started this organization, but no one did. And so, when deciding whether this org would be net positive, you have to condition on this observation.
I just did a BOTEC, and if I’m not mistaken, 0.0000099999999999999999999999999999999999999999988% is incorrect, and instead should be 0.0000099999999999999999999999999999999999999999998%. This is a crux, as it would mean that the SWWM pledge is actually 2x less effective than the GWWC pledge.
I tried to write out the calculations in this comment; in the process of doing so, I discovered that there’s a length limit to EA Forum comments, so unfortunately I’m not able to share my calculations. Maybe you could share yours and we could double-crux?