Hmm… often I think it is nice to have a standard term for a phenomenon so that people don’t have to figure out how to express a certain concept each time and then hope that everyone else can follow. Language also has the advantage that insofar as we convince people to adopt our language, we draw them into our worldview.
This should really be a Wiki page instead since these lists (I even made one myself in the past) always become outdated.
This is a really challenging situation—I could honestly see myself leaning either way on this kind of scenario. I used to lean a lot more towards saying whatever I thought was true and ignoring the consequences, but lately I’ve been thinking that it’s important to pick your battles.
I think the key sentence is this one—“On many subjects EAs rightfully attempt to adopt a nuanced opinion, carefully and neutrally comparing the pros and cons, and only in the conclusion adopting a tentative, highly hedged, extremely provisional stance. Alas, this is not such a subject.”
What seems more important to me is not necessarily these kinds of edge cases, but that we talk openly about the threat potentially posed. Replacing the talk with a discussion about cancel culture instead seems like it could have been a brilliant Jiu Jitsu move. I’m actually much more worried about what’s been going on with ACE than anything else.
Thanks, that was useful. I didn’t realise that his argument involved 1+2 and not just 1 by itself. That said, if the hinge of history was some point in the past, then that doesn’t affect our decisions as we can’t invest in the past. And perhaps it’s a less extraordinary coincidence that the forward-looking hinge of history (where we restrict the time period from now until the end of humanity) could be now, especially if in the average case we don’t expect history to go on much longer.
I’ve never found Will’s objections to the hinge of history argument persuasive. Convincing me that there was a greater potential impact in past times than I thought, i.e. that it would have been very influential to prevent the rise of Christianity, shouldn’t make me disbelieve the arguments that AI or bio risks are likely to lead to catastrophe in the next few decades if we don’t do anything about it. But maybe I just need to reread the argument again.
DNA engineering has some positive points, but imagine the power that significant control over its citizens’ personalities would give a government. That shouldn’t be underestimated.
The real hinge here is how much we should expect the future to be a continuation of the past and how much we should update based on our best predictions. Given what we know about existential risk and the likelihood that AI will dramatically change our economy, I don’t think this idea makes sense in the current context.
I agree that such a system would be terrifying. But I worry that its absence would be even more terrifying. Limited surveillance systems work decently for gun control, but when we get to the stage where someone can kill tens of thousands or even millions instead of a hundred I suspect it’ll break down.
Thanks for posting such a detailed answer!
It’s great to hear that you are setting this up. However, the current post seems light on details. Why are these areas of particular interest? What kind of commitment are you hoping for from participants?
I think people are quite reasonably deciding that this post isn’t worth taking the time to engage with. I’ll just make three points even though I could make more:
“A good rule of thumb might be that when InfoWars takes your side, you probably ought to do some self-reflection on whether the path your community is on is the path to a better world.”—Reversed Stupidity is Not Intelligence
“In response, the Slate Star Codex community basically proceeded to harass and threaten to dox both the editor and journalist writing the article. Multiple individuals threatened to release their addresses, or explicitly threatened them with violence.”—The author is completely ignoring the fact that Scott Alexander specifically told people to be nice, asked them not to take it out on the journalists, and didn’t name the journalist. This seems to suggest that the author isn’t even trying to be fair.
“I have nothing to say to you — other people have demonstrated this point more clearly elsewhere”—I’m not going to claim that such differences exist, but if the author isn’t open to dialog on one claim, it’s reasonable to infer that they mightn’t be open to dialog on other claims even if they are completely unrelated.
Quite simply, this is a low-quality post, and “I’m going to write a low-quality post on topic X and you have to engage with me because topic X is important regardless of the quality” just gives a free pass to low-quality content. But doesn’t it spur discussion? I’ve actually found that low-quality posts most often don’t even provide the claimed benefit: they don’t change people’s minds and tend to lead to low-quality discussion.
It’s worth remembering, though, that people who paid for the book are much more likely to have read it.
What do you think about the fact that many in the field are pretty open that they are pursuing enquiry on how to achieve an ideology and not neutral enquiry (using lines like all fields are ideological whether they know it or not)?
“It’s a bit concerning that the community level of knowledge of the bodies of work that deal with these issues is just average”—I do think there are valuable lessons to be drawn from the literature. Unfortunately, a) lots of the work is low quality or under-evidenced, and b) discussion of these issues often ends up being highly divisive whilst not changing many people’s minds.
“If the self-oriented reasons for action leave it largely underdetermined how personal flourishing would look like”—If we accept pleasure and pain, then we can evaluate other actions by how likely they are to lead to pleasure/pain in the long term, so I don’t see how actions are underdetermined.
I’m surprised that you put moral realism on the same tier as self-oriented reasons for action. It would seem much more astounding to claim that pain and pleasure are neither good nor bad *for me* than to claim that there’s no objective stance by which others should consider my pain good or bad. The Pascal’s wager argument is also much stronger.
I think percentages are misleading. In terms of influencing demographic X, what matters isn’t so much how many people of demographic X there are in these organisations, but how well-respected they are.