Thanks for the kind words!
To address the nit: before changing it to "impossible-to-optimize variables," I had "things where it is impossible to please everyone." I think that claim is straightforwardly true, and maybe I should have left it there, but it doesn't seem to communicate everything I was going for. It's not just that attendees come in with mutually exclusive preferences; from the organizers' perspective, it is practically impossible to chase optimality. We don't have control over everything in presenters' talks, and we don't have intimate knowledge of every attendee's preferences, so complaints are, IMHO, inevitable (and that's what I wanted to communicate to future organizers).
That said, I think we could have done somewhat better with our content list, mostly by getting feedback from applicants earlier so we could try to match cause-area supply and demand. For content depth, we aimed for some spread, but with the majority of talks clustered on the medium-to-high side of EA familiarity (i.e., if a "1" was "accessible to anyone, even if they've never heard of EA" and "10" was "only useful to a handful of professional EA domain experts," then we aimed for a distribution centered around 7). We only included talks at the low end if we considered them uniquely useful, like a "How to avoid burnout" talk that, while geared towards EAs, did not require lots of EA context.
I think, given that we selected for attendees with demonstrated EA activity, this heuristic was pretty solid. Nothing in the feedback data would lead me to change it for the next go-around, or to advise other organizers to use a different protocol (unless, of course, they were aiming for a different sort of audience). But I'm happy for anyone to offer suggestions for improvement!