Thanks for the kind words!
To address the nit: Before changing it to “impossible-to-optimize variables,” I had “things where it is impossible to please everyone.” I think that claim is straightforwardly true, and maybe I should have left it there, but it doesn’t seem to communicate everything I was going for. It’s not just that attendees come in with mutually exclusive preferences; from the organizers’ perspective, it is practically impossible to even chase optimality. We don’t have control over everything in presenters’ talks, and we don’t have intimate knowledge of every attendee’s preferences, so complaints are, IMHO, inevitable (and that’s what I wanted to communicate to future organizers).
That said, I think we could have done somewhat better with our content list, mostly by getting feedback from applicants earlier so we could try to match cause-area supply and demand. For content depth, we aimed for some spread, but with the majority of talks clustered on the medium-to-high side of EA familiarity: if a “1” meant “accessible to anyone, even if they’ve never heard of EA” and a “10” meant “only useful to a handful of professional EA domain experts,” then we aimed for a distribution centered around 7. We only included talks at the low end if we considered them uniquely useful, like a “How to avoid burnout” talk that, while geared towards EAs, did not require lots of EA context.
Given that we selected for attendees with demonstrated EA activity, I think this heuristic was pretty solid. Nothing in the feedback data would lead me to change it for the next go-around, or to advise other organizers to use a different protocol (unless, of course, they were aiming for a different sort of audience). But I’m happy for anyone to offer suggestions for improvement!