I think the double crux game can be good for dispute resolution. But I think generating disagreement, even in a sandbox environment, can be counterproductive. It’s similar to how a public debate on its face seems like it can resolve a dispute, but if one side isn’t willing to debate entirely in good faith, they can ruin the debate to the point it shouldn’t have happened in the first place. Even if a disagreement isn’t socially bad, in the sense of persisting as a conflict after a failed double crux game, it could still limit effective altruists to black-and-white thinking after the fact. That lends itself to an absence of the creative problem-solving EA needs.
Perhaps even more than collaborative truth-seeking, the EA community needs individual EAs to learn to think for themselves more, to generate possible solutions to problems the community’s core can’t solve on its own. There are a lot of EAs with spare time on their hands that could be put to better use, but nothing to put it towards. I think starting independent projects can be a valuable use of that time. Here are some of these questions reframed to prompt effective altruists to generate creative solutions.
Imagine you’ve been given discretion over 10% of the Open Philanthropy Project’s annual grantmaking budget. How would you distribute it?
How would you solve what you see as the biggest cultural problem in EA?
Under what conditions do you think the EA movement would be justified in deliberately deceiving or misleading the public?
How should EA address our outreach blindspots?
At what rate should EA be growing? How should that be managed?
These questions are reframed to be more challenging, but that’s my goal. I think many individual EAs should be challenged to generate less confused models on these topics, and deliberation like double crux should start from there, between models. Especially if participants start from a place of ignorance about current thinking on these issues in EA[1], I don’t think either side of a double crux game will generate an excellent but controversial hypothesis worth challenging in the span of only a couple of minutes.
The examples in the questions provided are open questions in EA that EA organizations don’t themselves have good answers to, and I’m sure they’d appreciate additional thinking and support building off their ideas. These aren’t binary questions with only one of two possible solutions. I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should. There is no problem with the double crux game itself, but maybe EAs should learn it without using EA examples.
[1] This sounds callous, but I think it’s a common coordination problem we need to fix. It isn’t anyone’s fault, as it’s actually quite easy to miss important theoretical developments that make the rounds among EA orgs but aren’t broadcast to the broader movement.
I like these modified questions.

The reason why the original formulations are what they are is to get out of the trap of everyone agreeing that “good things are good”, and to draw out specific disagreements.
The intention is that each of these has some sort of crisp “yes or no” or “we should or shouldn’t prioritize X”. But also the crisp “yes or no” is rooted in a detailed, and potentially original, model.
I strongly agree that more EAs doing independent thinking is really important, and I’m very interested in interventions that push in that direction. In my capacity as a CFAR instructor and curriculum developer, figuring out ways to do this is close to my main goal.
I think many individual EAs should be challenged to generate less confused models on these topics, and deliberation like double crux should start from there, between models.
Strongly agree.
I don’t think either side of a double crux game will generate an excellent but controversial hypothesis worth challenging in the span of only a couple of minutes.
I think this misses the point a little. People at EAG have some implicit model that they’re operating from, even if it isn’t well-considered. The point of the exercise in this context is not to get all the way to the correct belief, but rather to engage with what they think and what would cause them to change their mind.
This Double Crux exercise is part of the de-confusion and model-building process.
I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should.
I mostly teach Double Crux and related techniques at CFAR workshops (the mainline and specialty/alumni workshops). I’ve taught it at EAG 4 times (twice in 2017), and I can only observe a few participants in a session. So my n is small, and I’m very unsure.
But it seems to me that using EA examples mostly has the effect of fleshing out understanding of other EAs’ views, more than flattening and simplifying. People are sometimes surprised by what their partner’s cruxes are, at least (which suggests places where a straw model is getting updated).
But, participants could also be coming away with too much of an either-or perspective on these questions.
Yeah, reading your comments has assuaged my concerns: based on your observations, the sign of the consequences of double-cruxing on EA example questions seems unclear rather than clearly negative, and likely slightly positive. In general it seems like a neat exercise that is interesting, but just doesn’t provide enough time to leave EAs with an impression of these issues much stronger than the one they came in with. I am still thinking of making a Google Form with my version of the questions and then posing them to EAs, to see what kind of responses are generated as an (uncontrolled) experiment. I’ll let you know if I do so.