People shared so many bad experiences with debate…
I had a great time debating (British Parliamentary style) in Russia a few years ago. I clearly remember some moments that helped me become better at thinking/speaking and world modeling:
- The initial feedback I got during the practice session was basically: don’t be the guy from the terrible video you shared :-). Make it easy for the judge to understand your arguments: improve the structure and speak more slowly. Focus on one core argument per speech: don’t squeeze in multiple half-baked ideas; deliver one and prove it fully.
- At my first tournament for newbies, an experienced debater gave a lecture on playing something-something resolutions and concluded by strongly recommending we read up on game theory (IIRC, *The Strategy of Conflict* and *Governing the Commons*).
- My second tournament was in Jedi format: I, an inexperienced Padawan, played alongside a skilled Jedi. I was matched with my partner because we both liked LessWrong. I think we even managed to use “beliefs should pay rent” as part of an argument in a debate on the tyranny of the majority, and it’s plausible we referred to Moloch at least once.
- Later on, improvement came from managing inferential distances during speeches and from grounding arguments in reality: being specific about harms and benefits, and delivering appropriate examples to support intermediate claims.
I think the experience was worth it. It helped me think in more depth, and about many more issues, than I would have otherwise (kind of like forecasting does now). I quit because (a) tournaments are time-consuming and (b) I got bored of playing social issues & identity politics.
While competitive debating is not about collaborative truth-seeking, in my experience debaters are high cognitive decouplers. Arguing with them (outside of the game) felt good, and we were able to touch on topics far outside the default Overton window (like taking the perspective of ISIS).
The culture was healthy because most people were just passionate about debating/grokking complex issues (like investor-state dispute settlements), and their incentives were not screwed up, since the only upside to winning debate tournaments in Russia is internet points.
Update: I feel that one of your main concerns is Goodharting. The BP system, as we played it, basically encouraged maximizing the expected utility of the impacts of the arguments you brought to the table, i.e. harm/benefit to an individual × scale × probability of occurring × how well you proved it (which can be seen as the probability that your reasoning is correct). It’s a bit harder to fit the importance of framing the issue, or of principled arguments, into my simplification. But the first can be seen as prioritizing based on relative tractability (e.g. in almost any debate, arguing that “we will save money by not implementing the policy” is a bad move, because there are multiple other ways to save money, while the benefits of the policy might be unique). The second is about the importance of the metagame, incentive structures, commitments, and so on.
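To make that heuristic concrete, here is a minimal sketch with made-up numbers; the factor names and values are my assumptions for illustration, not an actual BP judging rubric (judges weigh arguments holistically, not by explicit calculation):

```python
# Sketch of the expected-impact heuristic described above.
# All names and numbers are hypothetical.

def expected_impact(harm_per_person: float,
                    scale: float,
                    p_occurs: float,
                    p_reasoning_correct: float) -> float:
    """Harm/benefit to an individual × scale × probability of
    occurring × how well the argument was proved."""
    return harm_per_person * scale * p_occurs * p_reasoning_correct

# Two hypothetical arguments for the same motion:
broad_but_shaky = expected_impact(
    harm_per_person=1.0, scale=1_000_000,
    p_occurs=0.5, p_reasoning_correct=0.1)   # 50,000

narrow_but_solid = expected_impact(
    harm_per_person=10.0, scale=10_000,
    p_occurs=0.9, p_reasoning_correct=0.9)   # 81,000

print(broad_but_shaky, narrow_but_solid)
```

Note how `p_reasoning_correct` multiplies everything else: that’s the same point as the early feedback above, that one fully proved argument often beats several half-baked ones.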
I think your comment (and particularly the first point) has much more to do with the difficulty of defining causality than with x-risks.
It seems natural to talk about a force causing a mass to accelerate: when I push a sofa, I cause it to start moving. But Newtonian mechanics can’t capture causality, basically because the equality sign in $\vec{F} = m\vec{a}$ lacks direction. Similarly, it’s hard to capture causality in probability spaces.
Following Pearl, I have come to think that causality arises from the manipulator/manipulated distinction.
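As a toy illustration of Pearl’s distinction between observing and intervening, here is a minimal sketch; the structural causal model (rain/sprinkler/wet grass) and all probabilities are my own assumptions:

```python
import random

# Toy structural causal model: rain -> wet <- sprinkler.
# do(wet=True) severs the arrows into `wet`, so intervening on the
# grass tells us nothing about rain, while observing wet grass does.

def sample(do_wet=None):
    rain = random.random() < 0.3
    sprinkler = random.random() < 0.2
    wet = (rain or sprinkler) if do_wet is None else do_wet
    return rain, wet

random.seed(0)
N = 100_000

# Observation: P(rain | wet) — inference runs "backwards" along the arrow.
obs = [sample() for _ in range(N)]
p_rain_given_wet = sum(r for r, w in obs if w) / sum(w for _, w in obs)

# Intervention: P(rain | do(wet)) — just the base rate of rain.
intv = [sample(do_wet=True) for _ in range(N)]
p_rain_given_do_wet = sum(r for r, _ in intv) / N

print(f"P(rain | wet)     ≈ {p_rain_given_wet:.2f}")     # ≈ 0.68
print(f"P(rain | do(wet)) ≈ {p_rain_given_do_wet:.2f}")  # ≈ 0.30
```

The asymmetry between those two conditional probabilities is exactly what the symmetric equality sign in $\vec{F} = m\vec{a}$ can’t express.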
So I think it’s fair to speak about factors only in relation to some framing:
- If you are focusing on bio policy, you are likely to take great-power conflict as an external factor.
- Similarly, if you are focusing on preventing nuclear war between India and Pakistan, you are likely to take bioterrorism as an external factor.
Usually, there are multiple external factors in your x-risk modeling. The most salient and undesirable ones are important enough to care about (and to give a name).
Calling bio-risks an x-factor makes sense formally, but not pragmatically: bio-risks are already very salient (in our community) on their own, because they are a canonical x-risk. So for me, part of the difference is that I started to care about x-risks first, and came to care about x-risk factors because of their relationship to x-risks.