My impression is that EAGx Prague 22 managed to balance 1:1s with other content simply by not offering SwapCard 1:1 slots for part of the time, having plenty of spaces for small-group conversations, and suggesting to attendees that they aim for something like a balanced diet. (Turning off SwapCard slots does not prevent people from scheduling 1:1s, it just adds a little friction; empirically, that seems enough to prevent the mode where people fill all of their time with 1:1s.)
As far as I understand, this will most likely not happen, because of the weight given to (and Goodharting on) metrics like people reporting that 1:1s are the most valuable use of their time, metrics tracking “connections formed”, and the weird psychological effect of 1:1 fests. (People feel stimulated, connected, energized, etc.; part of the effect is superficial.) Also, the counterfactual value lost from the lack of conversational energy at scales of ~3 to 12 people is not visible and likely not tracked in feedback. (I think this has predictable effects on which types of collaborations start and which do not, and the effect is on the margin bad.) The whole thing is downstream of problems like Don’t Over-Optimize Things / We can do better than argmax.
Btw, I think you are too apologetic / self-deprecating (“inexperienced event organisers complaining about features of the conference”). I have decent experience running events, and everything you wrote is spot on.
Thanks Jan, I appreciate this comment. I’m on the EAG team, but responding with my personal thoughts.
While it’s true that we weight 1:1s heavily in assessing EAG, I don’t think we’re doing ‘argmax prioritisation’: we still run talks, workshops, and meetups, and ~1/4 of our team time goes to this. My read of your argument is that we’re scoring things wrong and should give more consideration to the impact of group conversations. You’re right that we don’t currently explicitly track the impact of group conversations, which could mean we’re missing significant value.
I do plan to think more about how we can make these group conversations happen and measure their success. I haven’t yet heard a suggestion (in this thread or elsewhere) that I believe would sufficiently move the needle, but maybe this is because we’re over-optimising for better feedback survey scores in the short term (e.g., we’ll upset some attendees if we turn off specific 1:1 slots).