What would you say is the core message of the Sequences? That naturalism is true? That Bayesianism is great? That humans are naturally very irrational and have to put in effort if they want to be rational?
I’ve read the Sequences almost twice. The first time was fun, because Yudkowsky was optimistic back then; the second time, I was constantly aware that Yudkowsky now believes, along the lines of his ‘Death with dignity’ post, that our doom is virtually certain and that he has no idea how to even begin formulating a solution. If Yudkowsky, who wrote the Sequences on his own, who founded the modern rationalist movement on his own, who founded MIRI and the AGI alignment movement on his own, has no idea where to even begin looking for a solution, what hope do I have? I probably couldn’t do anything comparable to those things on my own even if I tried my hardest for 30 years. I could thoroughly study everything Yudkowsky and MIRI have studied, which would be a lot, and after all that effort I would be in the same situation Yudkowsky is in right now: no idea where to even begin looking for a solution, knowing only which approaches don’t work. The only reason to do it would be to gain a fraction of a dignity point, to use Yudkowsky’s way of thinking.
To be clear, I don’t have a fixed model in my head about AI risk. I think I can sort of understand what Yudkowsky’s model is, and I can understand why he is afraid, but I don’t know if he’s right, because I can also sort of understand the models of those who are more optimistic. I’m pretty agnostic on this subject and wouldn’t be particularly surprised by any specific outcome.