I suppose an example would be that increasing economic growth in a country doesn’t matter if the country later gets blown up or something.
Like how would I know if the world was more absorber-y or more sensitive to small changes?
I’m not sure; that’s a pretty interesting question.
Here’s a tentative idea: using the evolution of brains, we can conclude that whatever sensitivity the world has to small changes, it can’t show up *too* quickly. You could imagine a totally chaotic world, where the whole state at time t+(1 second) is radically different depending on minute variations in the state at time t. Building models of such a world that were useful on 1 second timescales would be impossible. But brains are devices for modelling the world that are useful on 1 second timescales. Brains evolved; hence they conferred some evolutionary advantage. Hence we don’t live in this totally chaotic world; the world must be less chaotic than that.
It seems like this argument gets less strong the longer your timescales are, as our brains perhaps faced less evolutionary pressure to be good at prediction on timescales of like 1 year, and still less to be good at prediction on timescales of 100 years. But I’m not sure; I’d like to think about this more.
Hey, glad this was helpful! : )
To apply this to conception events—imagine we changed conception events so that girls were much more likely to be conceived than boys (say because in the near-term that had some good effects eg. say women tended to be happier at the time). My intuition here is that there could be long-term effects of indeterminate sign (eg. from increased/decreased population growth) which might dominate the near-term effects. Does that match your intuition?
Yes, that matches my intuition. This action creates a sweeping change in a really complex system; I would be surprised if there were no unexpected effects.
But I don’t see why we should believe all actions are like this. I’m raising the “long-term effects don’t persist” objection, arguing that it seems true of *some* actions.
I’d maybe give a 10% probability to long-termism just being wrong.
What could you observe that would cause you to think that longtermism is wrong? (I ask out of interest; I think it’s a subtle question.)
Florence Nightingale? Martin Luther King Jr.? Leaders of social movements? It seems to me that a lot of “standard examples of good people” are like this; did you have something else in mind?
Sweet links, thanks!
(Focusing on a subtopic of yours, rather than engaging with the entire argument.)
All actions we take have huge effects on the future. One way of seeing this is by considering identity-altering actions. Imagine that I pass my friend on the street and I stop to chat. She and I will now be on a different trajectory than we would have been otherwise. We will interact with different people, at a different time, in a different place, or in a different way than if we hadn’t paused. This will eventually change the circumstances of a conception event such that a different person will now be born because we paused to speak on the street.
I’m not so sure “all actions we take have huge effects on the future.” It seems like a pretty interesting empirical question. I don’t find this analogy supremely convincing; it seems that life contains both “absorbers” and “amplifiers” of randomness, and I’m not sure which are more common.
In your example, I stop to chat with my friend vs. not doing so. But then I just go to my job, where I’m not meeting any new people. Maybe I always just slack off until my 9:30am meeting, so it doesn’t matter whether I arrive at 9am or at 9:10am after stopping to chat. I just read the Internet for ten more minutes. It looks like there’s an “absorber” here.
Re: conception events — I’ve noticed that discussion of this topic tends to use conception as a stock example of an amplifier. (I’m thinking of Tyler Cowen’s Stubborn Attachments.) Notably, it’s an empirical fact that conception works that way (e.g. with many sperm, all with different genomes, competing to fertilize the same egg). If conception did not work that way, would we lower our belief in “all actions we take have huge effects on the future”? What sort of evidence would cause us to lower our belief in that?
Now, when the person who is conceived takes actions, I will be causally responsible for those actions and their effects. I am also causally responsible for all the effects flowing from those effects.
Sure, but what about the counterfactual? How much does it matter to the wider world what this person’s traits are like? You want JFK to be patient and levelheaded, so he can handle the Cuban Missile Crisis. JFK’s traits seem to matter. But most people aren’t JFK.
You might also have “absorbers,” in the form of selection effects, operating even in the JFK case. If we’ve set up a great political system such that the only people who can become President are patient and levelheaded, it matters not at all whether JFK in particular has those traits.
Looking at history with my layman’s eyes, it seems like JFK was groomed to be president by virtue of his birth, so it did actually matter what he was like. At the extreme of this, kings seem pretty high-variance. So affecting the conception of a king matters. But now what we’re doing looks more like ordinary cause prioritization.
I don’t know — sounds like you might have stronger views on this than me! : )
This is gonna vary a lot because there’s not a “typical EA organization” — salary is determined in large part by what the market rate for a position is, so I’d expect e.g. a software engineer at an EA organization to be paid about the same as a software engineer at any organization.
Is there a more specific version of your question to ask? Why do you want to know / what’s the context?
Gotcha. So your main concern is not that EA defecting will make us miss out on good stuff that we could have gotten via the climate change movement deciding to help us on our goals, but rather that it might be bad if EA-type thinking became very popular?
I don’t buy your example on 80k’s advice re: climate change. You want to cooperate in prisoner’s dilemmas if you think that it will cause the agent you are cooperating with to cooperate more with you in the future. So there needs to a) be another coherent agent, which b) notices your actions, c) takes actions in response to yours, and d) might plausibly cooperate with you in the future. In the climate change case, what is the agent you’d be cooperating with here and does it meet these criteria?
Is it the climate change movement? It doesn’t seem to me that “the climate change movement” is enough of a coherent agent to do things like decide “let’s help EA with their goals.”
Or is it individual people who care about climate change? Are they able to help you with your goals? What is it you want from them?
I’m interested in the $10 million per minute number. What is the model? Is that for the whole world?
Quick check: U.S. GNP for one year is on the order of $2 × 10^13 (source: https://www.google.com/search?q=us+gnp), $10 million = $10^7, and there are about 5 × 10^5 minutes in a year. So $10 million per minute comes to roughly $5 × 10^12 per year, a sizable fraction (roughly a quarter) of the entire US economy.
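A back-of-envelope sketch of that check (the US GNP figure of roughly $2.1 × 10^13 per year is my assumption, not from the thread):

```python
# Sanity check of the "$10 million per minute" figure.
# Assumption (not from the thread): US GNP ~ $2.1e13 per year.
rate_per_minute = 10_000_000             # $1e7 per minute
minutes_per_year = 365 * 24 * 60         # 525,600, i.e. ~5e5
annualized = rate_per_minute * minutes_per_year  # ~$5.3e12 per year
us_gnp = 2.1e13
print(f"annualized cost: ${annualized:.2e}")
print(f"fraction of US GNP: {annualized / us_gnp:.2f}")  # ~0.25
```

So at that rate the shutdown would be comparable to switching off about a quarter of the US economy, not all of it, but the same order of magnitude.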
Sweet, better than I could have hoped for!
Any sense of what organizations/people are working on it this year? I wasn’t able to find an email address for Steve Hull so I posted an issue — https://github.com/sdhull/strategic_voting/issues/20 — no response yet.
I’ll also contact Ben.
Thanks. I realized too late that it should have been a Question; was there a way for me to upgrade it myself after posting?
Thanks for the pointer to “independence of irrelevant alternatives.”
I’m curious to know how you think about “some normative weight.” I think of these arguments as being about mathematical systems that do not describe humans, hence no normative weight. Do you think of them as being about mathematical systems that *somewhat* describe humans, hence *some* normative weight?
Link to discussion on Facebook: https://www.facebook.com/groups/eahangout/permalink/2845485492205023/
I think this math is interesting, and I appreciate the good pedagogy here. But I don’t think this type of reasoning is relevant to my effective altruism (defined as “figuring out how to do the most good”). In particular, I disagree that this is an “argument for utilitarianism” in the sense that it has the potential to convince me to donate to cause A instead of donating to cause B.
(I really do mean “me” and “my” in that sentence; other people may find that this argument can indeed convince them of this, and that’s a fact about them I have no quarrel with. I’m posting this because I just want to put a signpost saying “some people in EA believe this,” in case others feel the same way.)
Following Richard Ngo’s post https://forum.effectivealtruism.org/posts/TqCDCkp2ZosCiS3FB/arguments-for-moral-indefinability, I don’t think that human moral preferences can be made free of contradiction. Although I don’t like contradictions and I don’t want to have them, I also don’t like things like the repugnant conclusion, and I’m not sure why the distaste towards contradictions should be the one that always triumphs.
Since VNM-rationality is based on transitive preferences, and I disagree that human preferences can or “should” be transitive, I interpret things like this as without normative weight.
What is meant by “not my problem”? My understanding is that what is meant is “what I care about is no better off if I worry about this thing than if I don’t.” Hence the analogy to salary; if all I care about is $$, then getting paid in Facebook stock means that my utility is the same if I worry about the value of Google stock or if I don’t.
It sounds like you’re saying that, if I’m working at org A but getting paid in impact certificates from org B, the actual value of org A impact certificates is “not my problem” in this sense. Here obviously I care about things other than $$.
This doesn’t seem right at all to me, given the current state of the world. Worrying about whether my org is impactful is my problem in that it might indeed affect things I care about, for example because I might go work somewhere else.
Thinking about this more, I recalled the strength of the assumption that, in this world, everyone agrees to maximize impact certificates *instead of* counterfactual impact. This seems like it just obliterates all of my objections, which are arguments based on counterfactual impact. They become arguments at the wrong level. If the market is not robust, that means more certificates for me, *which is definitionally good*.
So this is an argument that if everyone collectively agrees to change their incentives, we’d get more counterfactual impact in the long run. I think my main objection is not about this as an end state — not that I’m sure I agree with that, I just haven’t thought about it much in isolation — but about the feasibility of taking that kind of collective action, and about issues that may arise if some people do it unilaterally.