I used to work in finance. I am interested in effective altruism. Highest impact for minimum effort is good.
Brian Lui
I think “respectable” is kind of a loaded term that gives longtermism a slightly negative connotation. I feel like a more accurate measure would be how “galaxy brain” the cause area is: how much effort and time you need to explain it to a regular person, or what percentage of normal people would be receptive to a pitch.
The base rate I have in mind is that FTX had access to a gusher of easy money and was run by young, energetic people with minimal oversight and limited use of formalized hiring systems. That produced a situation where top management’s opinion was the critical factor in who got promoted or hired into influential positions. The more that other EA organizations resemble FTX, the more strongly I would hold this view.
A few months ago I would have easily agreed with “the view that EA employers are so fragile as to deny job opportunities based on EA Forum hot takes is hopefully greatly exaggerated and very disturbing if not.”
However, then I read about the hiring practices at FTX, and significantly updated on this. It’s now hard for me to believe that there aren’t at least some EA employers who would deny job opportunities based on EA Forum hot takes!
Proposal: Create A New Longtermism Organization
Thank you, this is a great example of longtermist thinking working out, one that would have been unlikely to happen without it!
What do you think would be a good way to word it?
One of the ideas is that longtermism probably does not increase the EV of decisions made for future people. Another is that we increase the EV of future people as a side effect of doing normal things. The third is that increasing the EV of future people is something we should care about.
If all of these are true, then it should be true that we don’t need longtermism, I think?
Agreed, “probability discounting” is the most accurate term for this. Also, I struck out the part about Cleopatra in the original post, now that I understand the point behind it!
I just found this forum post, which covers the same ballpark of ideas! I mostly agree with it too.
Movements like effective altruism have had a wide range of results in the past. For example, the Fabian Society might be an example of a positive impact. In the same time period, Communism would be another output of such a movement.
I think past performance is generally indicative of future results. Unless you have a good reason to think that ‘this time is different’, and you have a thesis for why the differences will lead to a materially changed outcome, it’s better to use the past as the base case.
Point is, I think people have always tended to be significantly more wrong than right about how to change the world. It’s not too hard to understand how one person’s actions might contribute to an overriding global goal. The problem is in the choice of such an overriding paradigm. The first paradigm was that the world was stagnant/repetitive/decaying and just a prelude to the afterlife. The second paradigm was that the world is progressing and things will only get steadily better via science and reason. Today we largely reject both these paradigms, and instead we have a view of precarity—that an incredibly good future is in sight, but only if we proceed with caution, wisdom, good institutions and luck. And I think the deepest risk is not that we are unable to understand how to make our civilization more cautious and wise, but that this whole paradigm ends up being wrong.
I like this description of your viewpoint a lot! The entire paradigm for “good outcomes” may be wrong. And we are unlikely to be aware of our paradigm due to “fish in water” perspective problems.
Valid—basically I was doing a two-part post. The first part is “longtermism isn’t a necessary condition”, because I thought there would be pushback on that. If we accept this, then we consider the second part, “longtermism may not have a positive effect as assumed”. If I had known the first part was uncontroversial, I would have cut it out.
I think the slavery example is a strong example of longtermism having good outcomes, and longtermist thinking probably increased the urgency of efforts to reduce slavery.
My base rate for “this time it’s different” arguments is low, except for ones that focus on extinction risk: if you mess up and everyone dies, that’s unrecoverable. But for other things I am skeptical.
Thanks for adding this as an additional example—the US Constitution is a very good example of how longtermism can achieve negative results! There’s a growing body of research from political scientists arguing that the Constitution is a major cause of many US governance problems, for example here.
Yes, exactly—it’s grounded in concern about human extinction, not longtermism. The section “We can achieve longtermism without longtermism” in my post talks about the difference.
Thanks ThomasWoodside! I noticed the forum has relatively low throughput so I decided to “learn in public” as it were :)
I understand the Cleopatra paragraph now and I’ve edited my post. I wasn’t able to understand his point before, so I got it wrong. Thanks for explaining it!
Obviously, it wasn’t. But of course it wasn’t! Longtermism didn’t even exist then, so it wasn’t a significant factor in anyone’s decisions. Maybe you are trying to say “people can make long-term changes without being motivated by longtermism.” But that doesn’t say anything about whether longtermism might make them better at creating long-term changes than they otherwise would be.
This is a good point. I wanted to show “longtermism is not necessary for long-term changes”, which I think is pretty likely. The more venturesome idea is “longtermism would not make better long-term changes”, and those examples don’t address that point.
My intuition is that a longtermist mindset likely would not have had a significant positive impact (as in the hypothetical examples I wrote), but it’s pretty hard to “prove” that because we don’t have a counterfactual history. We could go through historical examples of people with long-term views (in journals and diaries?) and see whether they had positive or negative impact. That might be a big project, though.
I generally agree with this and so do many others. For instance see here and here.
These are really good links, thank you!
In terms of whether historical “long-term” interventions have been negative: you’ve asserted it, but you haven’t really shown it. I would be very interested in research on this; I’m not aware of any. If this were true, I do think it would be a knock against longtermism as a theory of action (though not decisive, and not against longtermism as a theory of value). Though perhaps it could still be argued that we live at “the hinge of history”, where longtermism is especially useful.
Same! I agree this is a weakness of my post. Theory of action vs theory of value is a good concept—I don’t have a strong view on longtermism as a theory of value, I mostly care about the theory of action.
I might have entered via a different vector (all online), so I experienced a different introduction to the idea! If my experience is atypical, and most people get the “gentle” introduction you described, that is great news.
Against longtermism
Excellent! This is the core of the point that I wanted to communicate. Thanks for laying it out so clearly.
Great! Yes. The key part I think is this:
Advanced nanotechnology, often self-replicating
Recursive self-improvement (e.g. an intelligence explosion)
Superhuman manipulation skills (e.g. it can convince anyone of anything)
There are exceptions to this, like the example I discuss in Appendix C.
I found that trying to reason about AGI risk scenarios that rely on these is hard, because I keep suspecting they run into physical limitations that deserve more thought before I treat them as plausible enough to substantially affect my thinking. It occurred to me that it would be fruitful to reason about AGI risk with these options off the table, to focus on other reasons one might suspect AGIs would have overwhelming power:
Speed (The system has fast reaction times)
Memory (The system could start with knowledge of all public data at the time of its creation, and any data subsequently acquired would be remembered perfectly)
Superior strategic planning (There are courses of actions that might be too complex for humans to plan in a reasonable amount of time, let alone execute)
My view is that normal people are unreceptive to arguments that focus on the first three (advanced nanotechnology, recursive self-improvement, superhuman manipulation skills). Leave aside whether these are probable or not. Just talking about them is not going to work, because the “ask” is too big. It would be like going to rural Louisiana and lecturing people about intersectionality.
Normal people are receptive to arguments based on the last three (speed, memory, superior strategic planning). Nintil then goes on to make an argument based only on these ideas. This is persuasive, because it’s easy for people to accept all three premises:
Computers are very fast. This accords with people’s experience.
Computers can store a lot of data. People can understand this, too.
Superior strategic planning might be slightly trickier, but it’s still easy to grasp, because people know that computers can beat the strongest humans at chess and Go.
One of the quotes is:
I think the implication here is that if you are working on global poverty or animal welfare, you must not be smart enough or quantitative enough. I’m not deeply involved so I don’t know if this quote is accurate or not.