Thanks for the kind words!
To address the nit: Before changing it to "impossible-to-optimize variables," I had "things where it is impossible to please everyone." I think that claim is straightforwardly true, and maybe I should have left it there, but it doesn't seem to communicate everything I was going for. It's not just that attendees come in with mutually exclusive preferences, but that from the organizers' perspective it is practically impossible to chase optimality. We don't have control over everything in presenters' talks, and don't have intimate knowledge of every attendee's preferences, so complaints are, IMHO, inevitable (and that's what I wanted to communicate to future organizers).
That said, I think we could have done somewhat better with our content list, mostly by getting feedback from applicants earlier so we could try to match cause-area supply and demand. For content depth, we aimed for some spread, but with the majority of talks clustered on the medium-to-high side of EA familiarity (i.e. if a "1" was "accessible to anyone, even if they've never heard of EA" and "10" was "only useful to a handful of professional EA domain experts," then we aimed for a distribution centered around 7). We only included talks at the low end if we considered them uniquely useful, like a "How to avoid burnout" talk that, while geared towards EAs, did not require much EA context.
I think, given that we selected for attendees with demonstrated EA activity, this heuristic was pretty solid. Nothing in the feedback data would make me change it for the next go-around or advise other organizers to use a different protocol (unless, of course, they were aiming for a different sort of audience). But I'm happy for anyone to offer suggestions for improvement!