I’m a theoretical CS grad student at Columbia specializing in mechanism design. I write a blog called Unexpected Values which you can find here: https://ericneyman.wordpress.com/. My academic website can be found here: https://sites.google.com/view/ericneyman/.
Eric Neyman
I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo’s recent post arguing that some of GiveWell’s grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I’ll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.[1]
Let’s say that a moral decision process is dogmatic if it’s completely insensitive to the numbers on either side of the trade-off. Non-dogmatism rejects dogmatic moral decision processes.
A central example of a dogmatic belief is: “Making a single human happy is more morally valuable than making any number of chickens happy.” The corresponding moral decision process would be, given a choice to spend money on making a human happy or making chickens happy, spending the money on the human no matter what the number of chickens made happy is. Non-dogmatism rejects this decision-making process on the basis that it is dogmatic.
(Caveat: this seems fine for entities that are totally outside one’s moral circle of concern. For instance, I’m intuitively fine with a decision-making process that spends money on making a human happy instead of spending money on making sure that a pile of rocks doesn’t get trampled on, no matter the size of the pile of rocks. So maybe non-dogmatism says that so long as two entities are in your moral circle of concern—so long as you assign nonzero weight to them—there ought to exist numbers, at least in theory, for which either side of a moral trade-off could be better.)
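The claim in the caveat — that nonzero weight on both sides implies some number at which the trade-off flips — can be made concrete with a toy calculation. The weights below are made up purely for illustration; non-dogmatism only requires that the chicken-side weight be nonzero, not that it be any particular value.

```python
# Illustrative sketch of non-dogmatism: with any nonzero moral weights,
# there exists a number of chickens at which the chicken side wins.
w_human, w_chicken = 1.0, 0.001   # hypothetical weights, not a real estimate

def better_option(n_chickens: int) -> str:
    """Compare making 1 human happy vs. making n_chickens chickens happy."""
    return "human" if w_human * 1 > w_chicken * n_chickens else "chickens"

print(better_option(100))     # human
print(better_option(10_000))  # chickens
```

A dogmatic process, by contrast, would return "human" for every input, which is exactly what non-dogmatism rules out.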
And so when I see comments saying things like “I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative”, I’m like… really? There are no empirical facts that could possibly cause the trade-off to go the other way?
Rejecting dogmatic beliefs requires more work. Rather than deciding that one side of a trade-off is better than the other no matter the underlying facts, you actually have to examine the facts and do the math. But, like, the real world is messy and complicated, and sometimes you just have to do the math if you want to figure out the right answer.
- ^
Per the Wikipedia article on scope neglect, scope sensitivity would mean actually doing multiplication: making 100 people happy is 100 times better than making 1 person happy. I’m not fully sold on scope sensitivity; I feel much more strongly about non-dogmatism, which means that the numbers have to at least enter the picture, even if not multiplicatively.
I haven’t looked at your math, but I actually agree, in the sense that I also got about 1 in 1 million when doing the estimate again a week before the election!
I think my 1 in 3 million estimate was about right at the time that I made it. The information that we gained between then and 1 week before the election was that the election remained close, and that Pennsylvania remained the top candidate for the tipping point state.
Could you say more about “practically possible”? What steps do you think one could have taken to have reached, say, a 70% credence?
Oh cool, Scott Alexander just said almost exactly what I wanted to say about your #2 in his latest blog post: https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still
I don’t have time to write a detailed response now (might later), but wanted to flag that I either disagree or “agree denotatively but object connotatively” with most of these. I disagree most strongly with #3: the polls were quite good this year. National and swing state polling averages were only wrong by 1% in terms of Trump’s vote share, or in other words 2% in terms of margin of victory. This means that polls provided a really large amount of information.
(I do think that Selzer’s polls in particular are overrated, and I will try to articulate that case more carefully if I get around to a longer response.)
Looks like it’ll be about 250,000 votes.
I just want to register that, because the election continues to look extremely close, I now think the probability that the election is decided by fewer than 100,000 votes is more like 60%.
I wanted to highlight one particular U.S. House race that Matt Yglesias mentions:
Amish Shah (AZ-01): A former state legislator, Amish Shah won a crowded primary in July. He faces Rep. David Schweikert, a Republican who supported Trump’s effort to overturn the 2020 presidential election. Primaries are costly, and in Shah’s pre-primary filing, he reported just $216,508.02 cash on hand compared to $1,548,760.87 for Schweikert.
In addition to running in a swing district, Amish Shah is an advocate for animal rights. See my quick take about him here.
Yeah, it was intended to be a crude order-of-magnitude estimate. See my response to essentially the same objection here.
Thanks for those thoughts! Upvoted and also disagree-voted. Here’s a slightly more thorough sketch of my thought in the “How close should we expect 2024 to be” section (which is the one we’re disagreeing on):
I suggest a normal distribution with mean 0 and standard deviation 4-5% as a model of election margins in the tipping-point state. If we take 4% as the standard deviation, then the probability of any given election being within 1% is 20%, and the probability of at least 3 out of 6 elections being within 1% is about 10%, which is pretty high (in my mind, not nearly low enough to reject the hypothesis that this normal distribution model is basically right). If we take 5% as the standard deviation, then that probability drops from 10% to 5.6%.
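The numbers above can be checked directly: a normal CDF gives the per-election probability of a margin within 1%, and a binomial tail gives the probability of at least 3 of 6 such elections. A minimal sketch using only the standard library:

```python
from math import comb, erf, sqrt

def p_within(margin: float, sd: float) -> float:
    """P(|X| < margin) for X ~ Normal(0, sd): the chance one election's
    tipping-point margin falls within +/- margin."""
    return erf(margin / (sd * sqrt(2)))

def p_at_least(k: int, n: int, p: float) -> float:
    """Binomial tail: P(at least k of n independent elections are close)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for sd in (0.04, 0.05):
    p = p_within(0.01, sd)
    print(f"sd={sd:.0%}: P(within 1%) = {p:.1%}, "
          f"P(>=3 of 6 within 1%) = {p_at_least(3, 6, p):.1%}")
```

With a 4% standard deviation this reproduces roughly 20% per election and roughly 10% for at least 3 of 6; with 5% the latter drops to about 5.5%, matching the figures in the comment.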
I think that any argument that actually elections are eerily close needs to do one of the following:
Say that there was something special about 2008 and 2012 that made them fall outside of the reference class of close elections. That is, there’s some special ingredient that can make elections eerily close, and it wasn’t present in 2008-2012.
I’m skeptical of this because it introduces too many epicycles.
Say that actually elections are eerily close (maybe standard deviation 2-3% rather than 4-5%) and 2008-2012 were big, unlikely outliers.
I’m skeptical of this because 2008 would be a quite unlikely outlier (and 2012 would also be reasonably unlikely).
Say that the nature of U.S. politics changed in 2016 and elections are now close, whereas before they weren’t.
I think this is the most plausible of the three. However, note that the close margins in 2000 and 2004 are not evidence in favor of this hypothesis. I’m tempted to reject this hypothesis on the basis of only having two datapoints in its favor.
(Also, just a side note, but the fact that 2000 was 99.99th percentile is definitely just a coincidence. There’s no plausible mechanism pushing it to be that close as opposed to, say, 95th percentile. I actually think the most plausible mechanism is that we’re living in a simulation!)
Yeah I agree; I think my analysis there is very crude. The purpose was to establish an order-of-magnitude estimate based on a really simple model.
I think readers should feel free to ignore that part of the post. As I say in the last paragraph:
So my advice: if you’re deciding whether to donate to efforts to get Harris elected, plug in my “1 in 3 million” estimate into your own calculation—the one where you also plug in your beliefs about what’s good for the world—and see where the math takes you.
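The calculation the quoted paragraph suggests is a one-line expected-value product. The only number below taken from the post is the 1-in-3-million probability; the value-difference figure is a placeholder that the reader is supposed to supply from their own beliefs:

```python
# Back-of-envelope sketch of the suggested calculation.
p_decisive = 1 / 3_000_000   # from the post: chance a marginal vote swings the election
value_difference = 1e11      # PLACEHOLDER: your dollar-value estimate of one outcome vs. the other

expected_value_per_vote = p_decisive * value_difference
print(f"${expected_value_per_vote:,.0f} per decisive-vote-equivalent")
```

Under the placeholder value difference of $100 billion, this works out to about $33,000 per vote-equivalent; swapping in your own value estimate rescales the answer linearly.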
The value of a vote in the 2024 presidential election
The page you linked is about candidates for the Arizona State House. Amish Shah is running for the U.S. House of Representatives. There are still campaign finance limits, though ($3,300 per election per candidate, where the primary and the general election count separately; see here).
Amish Shah is a Democratic politician who’s running for Congress in Arizona. He appears to be a strong supporter of animal rights (see here).
He just won his primary election, and Cook Political Report rates the seat he’s running for (AZ-01) as a tossup. My subjective probability that he wins the seat is 50% (Edit: now 30%). I want him to win primarily because of his positions on animal rights, and secondarily because I want Democrats to control the House of Representatives.
You can donate to him here.
Eric Neyman’s Quick takes
It looks like Amish Shah will probably (barely) win the primary!
(This comment is mostly cross-posted from Nuño’s blog.)
In “Unflattering aspects of Effective Altruism”, you write:
Third, I feel that EA leadership uses worries about the dangers of maximization to constrain the rank and file in a hypocritical way. If I want to do something cool and risky on my own, I have to beware of the “unilateralist curse” and “build consensus”. But if Open Philanthropy donates $30M to OpenAI, pulls a not-so-well-understood policy advocacy lever that contributed to the US overshooting inflation in 2021, funds Anthropic while Anthropic’s President and the CEO of Open Philanthropy were married, and romantic relationships are common between Open Philanthropy officers and grantees, that is ¿an exercise in good judgment? ¿a good ex-ante bet? ¿assortative mating? ¿presumably none of my business?
I think the claim that Open Philanthropy is hypocritical re: the unilateralist’s curse doesn’t quite make sense to me. To explain why, consider the following two scenarios.
Scenario 1: you and 999 other smart, thoughtful people have a button. You know there are 1,000 people with such a button. If anyone presses their button, all mosquitoes will disappear.
Scenario 2: you and you alone have a button. You know that you’re the only person with such a button. If you press the button, all mosquitoes will disappear.
The unilateralist’s curse applies to Scenario 1 but *not* Scenario 2. That’s because, in Scenario 1, your estimate of the counterfactual impact of pressing the button should be your estimate of the expected utility of all mosquitoes disappearing, *conditioned on no one else pressing the button*. In Scenario 2, where no one else has the button, your estimate of the counterfactual impact of pressing the button should be your estimate of the (unconditional) expected utility of all mosquitoes disappearing.
So, at least the way I understand the term, the unilateralist’s curse refers to the fact that taking a unilateral action is worse than it naively appears, *if other people also have the option of taking the unilateral action*.
This relates to Open Philanthropy because, at the time of buying the OpenAI board seat, Dustin was one of the only billionaires approaching philanthropy with an EA mindset (maybe the only?). So he was sort of the only one with the “button” of having this option, in the sense of having considered the option and having the money to pay for it. So for him it just made sense to evaluate whether or not this action was net positive in expectation.
Now consider the case of an EA who is considering launching an organization with a potentially large negative downside, where the EA doesn’t have some truly special resource or ability. (E.g., AI advocacy with inflammatory tactics—think DxE for AI.) Many people could have started this organization, but no one did. And so, when deciding whether this org would be net positive, you have to condition on this observation.
Dialogue on Donation Splitting
Thanks for asking! The first thing I want to say is that I got lucky in the following respect. The set of possible outcomes isn’t the interior of the ellipse I drew; rather, it is a bunch of points that are drawn at random from a distribution, and when you plot that cloud of points, it looks like an ellipse. The way I got lucky is: one of the draws from this distribution happened to be in the top-right corner. That draw is working at ARC theory, which has just about the most intellectually interesting work in the world (for my interests) and is also just about the most impactful place for me to work (given my skills and my models of what sort of work is impactful). I interned there for 4-5 months and I’ll be starting there full-time soon!
Now for my report card on how well I checked in (in the ways listed in the post):
Writing the above post was useful in an interesting way: I formed some amount of identity around “I care about things besides impact” in a way that somewhat decreased value drift. (I endorse this, I think.) This manifested as me thinking a lot over the last year about whether I’m happy. Sometimes the answer was “not really”! But I noticed this and took steps toward fixing it. In particular, I noticed when I was in Berkeley last summer that I had a need for a social group that doesn’t talk about maximizing impact all the time. This was super relevant to my criteria for choosing a living situation when I came back to Berkeley in October. I ended up choosing a “chill” group house, and I think that was the right choice.
I had the goal of keeping a monthly diary about my values. I updated it four times—in June, July, October, and March—and I think that captured most of the value. (I’m not sure that this was a particularly valuable intervention.)
Regarding the four specific non-EA things I cared about that I listed above:
Family and non-EA friends: I continue to be close with my family and remain similarly close with the non-EA friends I had at the time.
Puzzles and puzzle hunts: I continue caring about this. Empirically I haven’t done many puzzle hunts over the last year, but that was more for a lack of good opportunities. But I recently joined a new puzzle hunt team, so I might have more opportunities ahead!
Spending time in nature: yup, I continue to care about this. I went to Alaska for a few weeks last month and it was great.
Random statistical analyses: honestly, much less? Which I’m a bit sad about.
One interest that I had not listed, because I had mixed feelings about how much I endorsed it, was politics. I indeed care less about politics now (though I still care a decent amount).
I also picked up an interest—I’m part of the Bayesian Choir! I’ve also been playing some small amount of tennis, for the first time since high school.
I didn’t do any of the CFAR techniques, like focusing or internal double crux.
I’d say that this looks pretty good.
I do think that there are a couple of yellow flags, though:
I currently believe that the Berkeley EA community is unhealthy (I’m not sure whether to add the caveat “for me” or whether I think it’s unhealthy, period). The main reason for this, I think, is that there’s a status hierarchy. The way I sometimes put this is: if you asked me which of my friends in college are highest status, I would’ve been like ”...what does that even mean, that question doesn’t make sense”. But unfortunately I think if you asked about people’s status in this community, I’d often have thoughts. I have a theory that this comes out of having a large group of people with really similar values and goals. To elaborate on this: in college, everyone was pursuing their own thing and had their own values, which means that different people had very different standards for what it meant for someone to be cool. (There would have been way more status if, say, everyone were trying to be a member of some society; my impression is that this caused status dynamics in parts of my college that I didn’t interact with.) In the Berkeley EA community, most people have pretty similar goals (such as furthering AI safety or having interesting conversations). If people agree on what’s important then naturally they’ll agree more on who’s good at the important things (who’s good at AI safety research, or who’s good at having interesting conversations—and by the way, there’s way more agreement in the Berkeley EA community about what constitutes an interesting conversation than there is in college).
This theory would predict that political party organizations (the Democratic and Republican parties) have a strong social status hierarchy, since they mostly share the same goals (get the party into a position of power). If I learn that actually these organizations mostly don’t have strong social status hierarchies, I’ll retract my diagnosis.
I weakly think that something about the Berkeley EA community makes it harder for me to have original thoughts. Maybe it’s that there’s so much stuff going on that I don’t spend very much time alone with my thoughts. Or maybe it’s that there’s more of a “party line” about the right takes, in a way that discourages free-thinking. Or maybe it’s that people in this community really like talking about some things but not other things, and this implicitly discourages thinking about the “other things”.
I haven’t figured out how to navigate this. These may be genuine trade-offs—a case where I can’t both work at ARC and be immune from these downsides—or maybe I’ll learn to deal with the downsides over time. I do think that the benefits of my decision to work at ARC are worth the costs for me, though.
Hi Karthik,
Your comment inspired me to write my own quick take, which is here. Quoting the first paragraph as a preview:
I decided to spin off a quick take rather than replying here, because I think it would be interesting to have a discussion about non-dogmatism somewhat separated from this particular context. But I wanted to mention the quick take as a reply to your comment, since it’s relevant.