It points out an interesting thing: without a mechanism to bring the best information forward, it’s hard to get good information. A main reason science has better results than most other attempts at gaining knowledge is that it has mechanisms for running experiments and having the results validated by peers.
Of course, these mechanisms can be flawed, leading to the current reproducibility crisis (research funded by companies trying to protect their products, incentives to produce remarkable results in order to get published, increasing specialization that makes your knowledge less accessible to other fields).
It also reminds me of research that tried to find out why our election system puts forward so many incompetent leaders who fail to resolve our biggest crises. One of the reasons was that the “selection mechanism” favored those who propagated false information.
I don’t really know how we can implement such a mechanism in EA, though.
Individuals can work on creating rationality policies for themselves and create a culture where such things are normal, encouraged, expected, etc. An IMO particularly important type is a debate policy. Debate policies basically mean that if someone knows you’re wrong and wants to help, there’s a way for them to correct your error. Without debate policies, there’s often no way for that to happen. I just posted an article related to my debate policy:
https://forum.effectivealtruism.org/posts/o8ps3uC68AFkGivTJ/my-experience-with-my-debate-policy
I’ve also written a lot more about this kind of mechanism previously and there are some links to more of that at the end of the new article.
Besides individual action, an organization could also have a debate policy or other rationality policies. E.g. if someone wants to debate something and meets the right conditions, then the organization assigns someone to do the debate. There would need to be some kind of escalation mechanism in case the assigned debater loses, breaks the rules, etc.
One simplified way to look at a group having a rationality policy is a single leader with a policy plus proxies who help with some of the work. A CEO, founder or whatever of some organization can personally want to be rational and have policies, but delegate some of the implementation like actually doing debates, and have escalation mechanisms for when she or he needs to personally get involved.
This is interesting, and I can follow your reasoning behind it. I’m also sure this helps you.
However, I would have liked a few more things:
While your debate policy works for you, it sets a very high bar. Debating only with people who have written 20 articles… well, this greatly restricts the pool of people to talk to, so I don’t see this being applied on the EA Forum.
It could be interesting, for instance, to propose a debate policy that could work for the EA Forum.
Your post is interesting, but I feel it could be improved by having examples (same for your debate policy, I think). There are one or two at the beginning, but only at the beginning. I like seeing examples to anchor in my mind what these rules would look like in a real conversation.
Maybe put your debate policy more prominently in this post; otherwise, readers will have trouble grasping what the concept means.
Thanks for the post!
I replied to the other partial-copy of this message at https://forum.effectivealtruism.org/posts/o8ps3uC68AFkGivTJ/my-experience-with-my-debate-policy?commentId=LkcMac7rX2hboSnaZ#LxQdSK7nnyMezGbCm