Question number 2: Did you have any processes in place to keep the epistemic standards/content quality high? If so, what were those? Or were these concerns implemented in the selection process itself (if there was any)?
I’ve heard from at least one AIS expert (researcher & field-builder) who was concerned about a lot of new low-quality AIS research emerging in the field, creating more “noise”.
Even if that is not a concern, there is a risk of distracting people from pursuing high-impact research paths if persuasive lower-quality ones are presented (and there are added social pressures/dynamics at play).*
From your results, it seems like this was not the case, so I’m very curious to hear the secret recipe so I can potentially adapt a similar structure :) In theory, I’m all for the unconference format (and other democratic initiatives), but I can’t help worrying about the risks of giving up quality control.
*- I know it can be hard to assess promising vs dead-end directions in AIS.
Epistemic standards: We encouraged people to
notice confusion, ask dumb questions, and encourage others when they ask questions (to reduce deference of the type “This doesn’t make sense to me, but the other person seems so confident, so it must be true”)
regularly think about solving the whole problem (to keep the big picture in mind and stay on track)
reflect on their experience. We had a reflection time each morning with a sheet to fill out which had prompts like “What are key confusions I would like to resolve?” and “What is my priority for today?”
Content Quality: It depends on what you mean by content quality, but I think having a high bar for session content can actually be a bit detrimental. For example, one of the sessions I learned the most from wasn’t “high quality”: the person wasn’t an expert and also hadn’t prepared a polished presentation. But they knew more than I did, and because the group was small, I could just ask them lots of questions and learn a lot that way.
We also encouraged people to leave sessions they didn’t find the best use of their time, to ensure that people only listened to content they found valuable. We got feedback that people found that norm really helpful.
Those seem like great practices, and I’m happy people actually applied them! (E.g., I’ve seen people not leave sessions despite encouragement when there were no role models to follow, because leaving goes so much against social norms. This was the case with high school and university students; maybe more senior people are better at this :) )