I’m pretty curious about any tools/processes that could be built to help such events run better. To give a short but not exhaustive list:
Do people write short statements of their views on topics beforehand? Are these useful?
Is there any attempt to specifically channel people who disagree together and get them to talk? Does this work?
Do people in general change their minds? (The recent Xrisk Prediction Tournament was kind of a downer in that many superforecasters and experts seemingly don’t.)
Would it be worth having a live voting session that people could submit to throughout the event to anonymously gauge sentiment?
Has there ever been on-site mediation to try and resolve longstanding differences? I bet some of these people don’t trust one another, and that hampers information sharing.
I sort of guess the main value-add is having a load of decision-makers in one space for an extended period of time to develop trust. But then it seems surprising to me that we can’t do better than that. It would be really interesting to understand the problems you folks have, because I guess those apply to political and non-profit decision-makers too.
Finally, a confusion:
Invitation to this event is not proof of trustworthiness.
and
We value collaboration and learning among people, taking place in an environment of well-calibrated trust.
These two statements read as contradictory to me. These people aren’t necessarily trustworthy, but you value well-calibrated trust? Perhaps I’m meant to understand that you lot know how much to trust each other, but that we shouldn’t necessarily do so? I don’t really understand what you mean here.
[Brief comment, sorry!]
Thanks for those thoughts—we’re planning to do some of those (e.g. have people write memos on important topics before the event), and I think we’ve considered doing all of those things. (Not sure if we made the right decision on how to handle each of these, and not explaining our stance on all of them because of time.)
Re trust: Sorry, that second sentence is rather confusing. What I mean is that: we’re not guaranteeing that everyone attending the event is 100% trustworthy. And I hope that the event will allow attendees to understand each other’s motivations/strengths/weaknesses/etc in more depth, so that attendees can get a better understanding of when/how to trust each other and collaborate. I think that non-attendees won’t get these benefits, and shouldn’t make big updates from the fact that someone is invited/not. I hope that’s a bit clearer.
I do very much agree with Nathan’s sentiment here.
I appreciate that the original post announcing this forum aims to manage expectations and temper potential concerns that this group will produce a ‘grand strategy’ for EA or similarly agreed solutions to all the big problems. However, there is also an acknowledgment that the event is aiming to help plan the next two years and set the trajectory going forward.
These are important topics and issues (as reflected by the significant senior time involved in the event), and pretty much all of them require a lot of individual and group reasoning under uncertainty. As such I do think there is a very beneficial role for robust methods to help facilitate discussion and decision-making.
I don’t know what things you may already be planning to implement, so I’m mostly just putting a flag down to say: if you haven’t already, it’d be worth investing in such methods. So that I’m not entirely ‘talk and no suggestion’, some very basic things to introduce (if not already planned), at low cost and effort, could be:
A clear framework for all attendees on how uncertainty and predictions should be communicated during the event, to ensure consistency and transparency of reasoning between attendees and to help reduce misinterpretation errors, which are always a risk in such forums/events.
An external third party (external to the attendees, but could still be EA) to provide a mediation and challenge function (similar to what Nathan suggested).
Collection of prior positions on key topics, with confidence percentages provided before the event, plus updating rounds during and at the end of the event, with short notes on what contributed to any change. This would both show the impact of the event and help identify where, and with whom, the more intractable differences lie, which can help focus later action/discussion.
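That last suggestion needs very little tooling. As a minimal sketch of the idea (all claims, attendee names, and numbers below are invented for illustration; this is not a proposal for any specific platform):

```python
# Hypothetical sketch: collect each attendee's probability on a few key
# claims before and after the event, then surface (a) how much confidences
# shifted on average and (b) where the largest remaining disagreements
# lie, and between whom. All data here is made up.

from itertools import combinations

# {claim: {attendee: (p_before, p_after)}}
positions = {
    "Claim X is true": {
        "A": (0.80, 0.75),
        "B": (0.30, 0.45),
        "C": (0.90, 0.88),
    },
    "Claim Y is true": {
        "A": (0.40, 0.42),
        "B": (0.70, 0.41),
        "C": (0.20, 0.25),
    },
}

def mean_update(positions):
    """Mean absolute confidence shift per claim: a rough measure of
    how much the event moved people."""
    return {
        claim: sum(abs(after - before) for before, after in ps.values()) / len(ps)
        for claim, ps in positions.items()
    }

def residual_disagreements(positions):
    """For each claim, the pair of attendees with the largest post-event
    gap: flags where the intractable differences are and with whom."""
    out = {}
    for claim, ps in positions.items():
        pair = max(
            combinations(ps, 2),
            key=lambda pr: abs(ps[pr[0]][1] - ps[pr[1]][1]),
        )
        gap = abs(ps[pair[0]][1] - ps[pair[1]][1])
        out[claim] = (pair, round(gap, 2))
    return out
```

Even a spreadsheet version of this would do; the point is just that pre/post numbers plus a short "why I updated" note make both the event's impact and the stubborn disagreements legible afterwards.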
I vastly prefer a brief comment to none, thanks for your time.