Important to note: I archived the Washington Post homepage here and it showed Robinson’s op-ed, but when I went to https://www.washingtonpost.com itself immediately after, at ~5:38 pm San Francisco time, it was nowhere to be found! (I was not signed in for either case).
trevor1
This entire thing is just another manifestation of academic dysfunction
(philosophy professors using their skills and experience to think up justifications for their pre-existing lifestyle, instead of pursuing the epistemic work that justified the emergence of professors in the first place).
It started with academia’s reaction to Peter Singer’s “Famine, Affluence, and Morality” essay in 1972, and hasn’t changed much since. The status quo had already hardened, and the culture became so territorial that whenever someone has a big idea, everyone with power (people who had already optimized for social status) has an allergic reaction to the idea’s memetic spread rather than engaging with the epistemics behind the idea itself.
The Dark Forest Problem implies that people centralizing power might face strong incentives to hide, act through proxies, and/or disguise their centralized power as decentralized power. The question is to what extent high-power systems are dark forests vs. the usual quid-pro-quo networks and stable factions.
Changing technology and applications of power, starting in the 1960s, implies that factions would not remain stable and that iterated trust would be less reliable, and therefore that a dark forest system was more likely to emerge.
[Linkpost] Vague Verbiage in Forecasting
Yep, that’s the way it goes!
Also, figuring out what’s original and what’s memetically downstream, is an art. Even more so when it comes to dangerous technologies that haven’t been invented yet.
Ah, I didn’t know about the EA handbook and would not have found out if not for this post, thanks! It looks pretty good and along with the CFAR handbook, I wish I had known about it many years ago.
Transformative trustbuilding via advancements in decentralized lie detection
Yeah, a lot of them are not openly advertised for good reasons. One example that’s probably fine to talk about is NunoSempere’s claim that EAforum is shifting towards catering to new or marginal users.
The direct consequence is a reduction in the net quality of content on EAforum, but it also lets the forum steer people towards events as they get more interested in various EA topics, where they can talk more freely without worrying about saying controversial things, or get involved directly with people working in those areas via face-to-face interaction. And it doesn’t stop EAforum from remaining a great bulletin board for orgs to publish papers and updates and get feedback.
But at first glance, catering to marginal users looks like classic user retention. That’s not what’s happening; this is not a normal forum, and that’s the wrong way to think about it.
My thinking about EAforum over the years has typically been “Jesus, why on earth would they deliberately set things up like that?”, and then, maybe a couple months later, maybe a couple years later, I notice a possible explanation, and I’m like “oooooooooooohhhhhh, actually, that might make a lot of sense, I wish I had noticed that immediately”.
Large multi-human systems tend to be pretty complicated and counterintuitive, and they become way, way more so when most of the people involved are extremely thoughtful. Plus, the system changes in complicated and unprecedented ways as the world changes around it, or as someone here or there discovers a game-changing detail about the world, meaning that EAforum is entering uncharted territory and tearing down Schelling fences rather frequently.
Sinocism is like Zvi’s blog, except for China Watchers instead of AI safety. It leans a little towards open source, but it’s free and the guy knows the space (though doesn’t know everything).
I would like to add that certain types of people might be predisposed towards power seeking (and succeeding at power seeking), rather than just being corrupted by power, status, money, or fame.
Social Dark Matter offers some interesting takes on this; it’s more nuanced than it appears. For example, neurotic people might be more reputation-obsessed but also potentially more likely than the median human to internalize moral values (or, in the case of EA, commit to internalizing moral values in a lasting way). This is purely speculative food for thought to illustrate the complexity of the situation (empirically researching the psychology of different kinds of powerful people is difficult, because nonresponse bias stacks samples disproportionately towards people who aren’t as powerful as they look).
Oh, sorry, by profiteers I was referring to people like forum lurkers and hostile open source researchers, not you at all.
My thinking was that this plan works fine with or without funding so long as someone (e.g. you) coordinates it, but it can’t be open-source on EAforum or Lesswrong because the bad guys (not journalists, the other bad guys) would get too much information out of it.
My current thinking about this is that EAforum and Lesswrong have confused, mentally ill, or profiteering people trying to do open source research and find ways to maximize damage to EA.
As a result, aggregating criticism in an open and decentralized way will boost the adversary’s epistemics in parallel, and is thus better done in a closed, in-person-networked, and centralized way (I made the same mistake a couple years ago).
Raemon, a moderator on Lesswrong, recommends Scott Alexander’s Superintelligence FAQ.
(4 min read) An intuitive explanation of the AI influence situation
I’m not a scholar, but is it alright if I ask what the best source is for explaining wild animal welfare to laypeople? I’m looking for something similar to the Superintelligence FAQ, but selected based on success at explaining wild animal welfare instead of AGI. I know a couple scholars but haven’t introduced them to the topic yet and want to make sure I do it right. It’s plausibly a valuable thing to standardize, too.
The only sources I’m aware of are the home page of wildanimalsuffering.org, the 80,000 Hours page on the topic, and Dylan Matthews’s Vox article, and I have no idea which one has the highest success rate of explaining the concept in a way laypeople are able to take seriously. For example, the 80,000 Hours page debunks the naturalistic fallacy quickly and efficiently, which indicates that the authors were serious about writing it well, but otherwise it’s kinda sparse (maybe the authors put a lot of effort into making it short so it’s easier to read and recommend?) and it even tries to redirect people to farmed animal welfare instead.
If cryopreservation becomes mainstream, then that’s literally it. Nobody dies, and all of humanity logrolls itself into raising the next generations to be friendly and create aligned AGI.
Even the total sociopaths participate to some degree (e.g. verbally supporting it, and often avoiding obstructing it if they are very powerful). If they don’t have preserved loved ones to protect, they still need a friendly long-term future for themselves to be unfrozen into. They’ll spend many more years alive in the future than in the present anyway, because unfreezing a person is orders of magnitude harder than reversing aging or generating a new body for an unfrozen person.
Many other people have probably thought of this already. What am I missing?
Oops! I’m off my groove today, sorry. I’m going to go read up on some of the conflict theory vs. mistake theory literature on my backlog in order to figure out what went wrong and how to prevent it (e.g. how human variation and inferential distance cause very strange mistakes due to miscommunication).
Strong downvoted. This isn’t a laughing matter.
I understand what it’s like to think of a really funny joke and not want to waste it. But this isn’t an appropriate environment to substitute charisma for substance.
If EA grows by, say, 30% per year, then at any given time there’s going to be a large number of people on the forum who will see this, think it’s normal, and upvote it (reinforcing that behavior). Even if professional norms hold strong, it will still make the onboarding process that much harder and more confusing for the new people, as they are misled into making serious social-status-damaging faux pas, and that reputation might follow them around in the community for years regardless of how talented or valuable they become.
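To put a rough number on that (my own back-of-the-envelope arithmetic, not a figure from anywhere; the 30% growth rate is the hypothetical from the comment above):

```python
# If a community grows 30% per year, what fraction of members at any
# given moment joined within the past year? Joiners over the year are
# 0.3 * N_start, and membership at year's end is 1.3 * N_start, so
# newcomers make up 0.3 / 1.3 of the forum at that point.
growth_rate = 0.30
newcomer_fraction = growth_rate / (1 + growth_rate)
print(f"{newcomer_fraction:.0%}")  # prints "23%"
```

So under that assumption, nearly a quarter of the forum at any time has under a year of exposure to its norms, which is why a few high-visibility bad examples can compound.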
That’s interesting, it still doesn’t show anywhere on my end. I took this screenshot around 7:14 pm, maybe it’s a screen size or aspect ratio thing.