Past and Future Trajectory Changes
This is not about us. A bunch of retail investors just completely lost their shirts due to something (I don’t know what exactly, but let’s call it “apparent bad behavior”). If possible, we should try to provide some kind of support to them.
MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. I think the majority view among self-described longtermists is that the present is the hinge of history. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk).
I think the term’s agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.
What happens if the money was donated to a charity that is subject to clawbacks, but the charity then spent the money? Do they try to claw it back from the suppliers or employees or whoever? Can it trigger a cascade of bankruptcies?
I think I am missing something here. Does the book purport to mention every collapse? Why does WWOTF need to mention the Bronze Age Collapse?
My understanding is that Dustin has already diversified out of Meta to a large degree (though I have no insider information).
Is the argument here that nobody should criticize effective altruism on websites that are not EA forum, because then outsiders might get a negative impression? And if so, what kind of impression would outsiders get if they knew about this proposed rule?
A small thing, but citing a particular person seems less culty to me than saying “some well-respected figures think X because Y”. Having a community orthodoxy seems like worse optics than valuing the opinions of specific named people.
Basically, you can treat fraction of worlds as equivalent to probability, so there is little apparent need to change anything if the many-worlds interpretation (MWI) turns out to be true.
This is a great comment; you may want to consider making it a top-level post on the forum so more people will see it.
FWIW, when I first saw that I wondered “what’s the difference between the A-aesthetic and the B-aesthetic?” It might be clearer to say “non-aesthetic” or just something like “no frills”.
A lot of people got into EA after reading a book, and a lot of people find new topics to investigate by reading newspaper articles.
The original EA materials (at least the ones I first encountered in 2015 when I was getting into EA) promoted evidence-based charity, that is, making donations to causes with very solid evidence. But the formal definition of EA is equally or more consistent with hits-based charity: making donations with limited or equivocal evidence but large upside, with the expectation that you will eventually hit the jackpot.
I think the failure to separate and explain the difference between these two approaches leads to a lot of understandable confusion and anger.
This does sound like fun.
I appreciated the link to the hardscrapple frontier, which I had not heard of, FWIW.
The content of this comment seems reasonable to me. How is it “LARPing”?
I meant existential risk in the broad sense, not just extinction. The first graph is supposed to represent a specific worldview where the relevant form of existential risk is extinction, and extinction is reasonably likely. In particular I had Eliezer Yudkowsky’s views about AI in mind. (But I decided to draw a graph with the transition around 50% rather than his 99% or so, because I thought it would be clearer.) One could certainly draw many more graphs, or change the descriptions of the existing graphs, without representing everyone’s thoughts on the function mapping percentile performance to total realized value.
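For concreteness, here is a minimal sketch (my own illustration, not from the original graphs) of the kind of curve described: realized value as a function of percentile performance, with a sharp transition centered at the 50th percentile. The logistic shape and steepness parameter are arbitrary assumptions, chosen only to make the step visible.

```python
# Sketch of the described worldview: total realized value as a function of
# percentile performance, with a sharp transition around the 50th percentile.
# The logistic form and steepness are illustrative assumptions, not the
# original author's exact curve.
import numpy as np
import matplotlib.pyplot as plt

percentile = np.linspace(0, 100, 500)
value = 1 / (1 + np.exp(-(percentile - 50) / 2))  # logistic step centered at 50

plt.plot(percentile, value)
plt.xlabel("Percentile performance")
plt.ylabel("Total realized value (normalized)")
plt.title("Extinction-focused worldview: transition near 50%")
plt.show()
```

Moving the transition point toward 99% (per the Yudkowsky-style view mentioned above) would just mean shifting the center of the logistic step accordingly.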
Thanks for explaining how you think about this issue; I will have to consider it more. My first thought is that I’m not utilitarian enough to say that a universe full of happy biological beings is ~0.0% as good as one where they are digital, even conditional on remaining biological being the wrong decision. But maybe I would agree on other possible disjunctive traps.
Thanks for this post. I wonder if it would be good to somehow target subcultures outside of EA with messaging corresponding to their nearest-neighbor EA subculture. To some extent I guess this already happens, but maybe there is an advantage to explicitly thinking about it in these terms.
What category would you put ideas like the unilateralist’s curse or Bostrom’s vulnerable world hypothesis? They seem like philosophical theories to me, but not really moral theories (and I think they attract a disproportionate amount of criticism).
I strongly agree with your comment, but I want to point out in defense of this trend that nuclear weapons policy seems to be unusually insulated from public input and unusually likely to be highly sensitive/not good to discuss in public.