That would be a valid reply if I had said it’s all about priors. All I said was that I think priors make up a significant implicit source of the disagreement – as suggested by the fact that some people think a 5% risk of doom seems “high,” while my reaction is “you wouldn’t be saying that if you had anything close to my priors.”
Or maybe what I mean is stronger than “priors.” “Differences in underlying worldviews” seems like the better description. Specifically, the worldview I identify more with, which I think many EAs don’t share, is something like “the Yudkowskian worldview where the world is insane, most institutions are incompetent, Inadequate Equilibria is a big deal, etc.” And that probably affects things like whether we anchor well below or well above 50% on the probability that the culmination of accelerating technological progress goes well.
In general I’m skeptical of explanations of disagreement that reduce things to differing priors. It’s just not physically or predictively correct, and it feels nice because now you no longer have an epistemological duty to go and see why relevant people have differing opinions.
That’s misdescribing the scope of my point and drawing inappropriate inferences. The last time I made an object-level argument about AI misalignment risk was just 3h before your comment. (Not sure it’s particularly intelligible, but the point is, I’m trying! :) )
So, evidently, I agree that a lot of the discussion should be held at a deeper level than the one of priors/general worldviews.
Quintin has lots of information and I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.
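As a toy illustration of that convergence point (entirely hypothetical numbers, not a model of anyone’s actual beliefs): two Bayesian agents who start from very different Beta priors but update on the same evidence end up with similar posteriors once the shared evidence is substantial.

```python
# Toy illustration (hypothetical numbers): two Bayesian agents with very
# different Beta priors over the same yes/no question observe the same
# evidence. With enough shared data, their posteriors end up close together.

def posterior_mean(alpha, beta, successes, trials):
    """Beta-Bernoulli conjugate update: posterior mean of the success rate."""
    return (alpha + successes) / (alpha + beta + trials)

pessimist_prior = (8, 2)  # prior mean 0.8
optimist_prior = (1, 9)   # prior mean 0.1

# Shared evidence: 100 observations, 30 of them "successes".
successes, trials = 30, 100

print(posterior_mean(*pessimist_prior, successes, trials))  # ~0.35
print(posterior_mean(*optimist_prior, successes, trials))   # ~0.28
```

Here the prior means of 0.8 and 0.1 move to roughly 0.35 and 0.28 after 100 shared observations, and the gap keeps shrinking with more data – which is why, on this picture, a persistent disagreement suggests that more than priors is at work.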
I’m a fan of Shard theory and some of the considerations behind it have already updated me towards a lower chance of doom than I had before starting to incorporate it more into my thinking. (Which I’m still in the process of doing.)
I watched most of a YouTube video on this topic to see what it’s about.
I think I agree that “coordination problems are the biggest issue that’s facing us” is an underrated perspective. I see it as a reason for less optimism about the future.
The term “crisis” (in “metacrisis”) makes it sound like it’s something new and acute, but it seems that we’ve had coordination problems for all of history. Though maybe their effects are getting worse because of accelerating technological progress?
In any case, in the video I watched, Schmachtenberger mentioned the saying, “If you understand a problem, you’re halfway there toward solving it.” (Not sure that was the exact wording, but something like that.) Unfortunately, I don’t think the saying holds here. I feel quite pessimistic about changing the dynamics that make Earth so unlike Yudkowsky’s “dath ilan.” Maybe I stopped the Schmachtenberger video before he got to the solution proposals (though I feel like if he had great solution proposals, he should lead with those). In my view, the catch-22 is that you need well-functioning (and sane and compassionate) groups/companies/institutions/government branches to “reform” anything, which is challenging when your problem is precisely that groups/companies/institutions/government branches don’t work well (or aren’t sane or compassionate).
I didn’t watch the entire video by Schmachtenberger, but I got the sense that he thinks something like, “If we can change societal incentives, we can address the metacrisis.” Unfortunately, I think this is extremely hard – it’s swimming upstream, and even if we were able to change some societal incentives, they’d at best go from “vastly suboptimal” to “still pretty suboptimal.” (I think it would require god-like technology to create anything close to optimal societal incentives.)
Of course, that doesn’t mean making things better is not worth trying. If I had longer AI timelines, I would probably think of this as the top priority. (Accordingly, I think it’s weird that this isn’t on the radar of more EAs, since many EAs have longer timelines than me?)
My approach mostly takes for granted that large parts of the world are broken, so I recommend working with the groups/companies/institutions/government branches that still function, expanding existing pockets of sanity, and creating new ones.
Of course, if someone had an idea for changing the way people consume news, or for building a better version of social media (one that creates more of a shared reality and shared priorities about what matters in the world, and improves public discourse), I’d say “this is very much worth trying!” But it seems challenging to compete for attention against clickbait and outrage amplification machinery.
EA already has the cause area “improving institutional decision-making.” I think things like approval voting are cool, and I like forecasting just like many EAs do, but I’d probably place more of a focus on “expanding pockets of sanity” or “building new pockets of sanity from scratch.” “Improving” suggests gradual change. My cognitive style might be biased towards black-and-white thinking, but to me it really feels like most institutions/groups/companies/government branches fall into one of two types: “dysfunctional” and “please give us more of that.” It’s pointless to try to improve the ones with dysfunctional leadership or culture (instead, those have to be reformed, or you have to work without them). Focus on what works and create more of it.