How to Launch an Experiment in the Effective Altruism Community to Confirm that General Collective Intelligence can Significantly Increase the Effectiveness of Altruism

I’m trying to figure out a way to survey the EA community as part of an experiment validating the capacity for General Collective Intelligence to significantly increase the effectiveness of altruism. General Collective Intelligence (GCI) is a system that combines a group into a single collective intelligence with potentially far greater problem-solving ability than any individual in the group, and therefore far greater ability to solve the general problem of achieving more effective altruism. It was defined using the same model of human cognition that was recently used to define a model for Artificial General Intelligence (AGI): https://www.youtube.com/watch?v=PdPt_NPJ7Rc

The theory behind this model suggests that, when considering the functions of any system, including groups of people, there are problems that involve Nth-order interactions between the functions of that system, where N is too high for individual human cognition to conceive. Such problems cannot reliably be defined or solved. An argument has been made that the Sustainable Development Goals (SDGs) are Nth-order problems that cannot reliably be resolved by current individual or group decision-making systems. Therefore, though one might be working very hard to do good, one is likely attempting to solve the wrong problem with the wrong solution. A GCI, however, is predicted to increase collective problem-solving ability so that these problems CAN reliably be resolved.

One way is that GCI removes the barriers to scaling cooperation, so that wherever the value of cooperation is positive, that value can reliably be scaled to the point that the cooperation is self-sustaining. Applied to project financing, for example, conceptual case studies show the potential to make social, economic, environmental, or other collective impact sustainably self-funding at the scale required to make global transformation reliably achievable. This self-funding property of GCI has been used to define a GCI-based SDGs program that, in its first three phases, is intended to drive $15 billion towards achieving the SDGs. Over its ten phases, the program is designed to reliably drive enough funding to close the $23 trillion gap between the funding the UN says is required to achieve the SDGs globally and the funding that is available to do so.

The challenge is that launching a project to build a GCI, so that GCI can be validated as having the potential to significantly increase a group’s general problem-solving ability, is itself a higher-order problem that might not be reliably solvable by current decision-making processes, which are not reliably smart enough to converge on the best solution to a problem this complex. In other words, neither individuals nor the policies of organizations are reliably smart enough to converge on the understanding that the biggest problem is that they are not smart enough even to understand the specific problem. To launch a project to implement a GCI, you have to already have a GCI, so that the required groups (researchers, donors, and other participants) are reliably smart enough to see the need for it.

This is because, if no organization can reliably be expected to see the need for such a concept, then to reliably launch a project to implement a GCI one must either be an individual with the resources and attributes required to do so alone, or one must define a process reliably capable of finding such a person. For example, one could define a process to reliably find someone with the capacity to understand this complex concept, who also has all the attributes required to be incentivized by this project, and who has the profound gift of oration required to single-handedly move a large enough crowd emotionally to drive a sufficiently large movement to follow this vision. Or one must find someone with the extraordinary genius to acquire expertise across enough academic disciplines, and to specify how the concept applies to each of those disciplines in enough detail, to engage a group of researchers large enough to implement the GCI.

To get over this impasse, the experiment begins with the small subset of GCI functionality that an individual can implement alone, and uses that functionality to get a group to validate the capacity of GCI to improve decision-making in one very specific area. That validation is then used to build a larger subset of GCI functionality, which increases the capacity to raise funding to incentivize more participants to invest more time in building more complex functionality, validating GCI’s capacity to improve decision-making in a broader area. That result in turn enables a larger group to execute the next phase of research with a bigger budget, building a still larger set of functionality to validate GCI in a still broader area, in the same iterative cycle by which nature evolves complex organisms.
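The iterative bootstrap cycle above can be sketched as a simple simulation. This is only an illustration of the structure of the argument (each successful validation broadens the scope and scales the participants and budget for the next phase); the `Phase` class, the growth factor, and all numbers are hypothetical placeholders, not figures from the actual program.

```python
# A minimal sketch of the iterative bootstrap cycle: each phase validates
# GCI in some scope, and a successful validation unlocks more participants
# and funding for a broader next phase. All parameters are hypothetical.

from dataclasses import dataclass

@dataclass
class Phase:
    scope: str          # decision-making area being validated
    participants: int   # people incentivized to build functionality
    budget: float       # funding available for this phase (USD)

def next_phase(current: Phase, scope: str, growth: float = 3.0) -> Phase:
    """Model one iteration: a successful validation broadens the scope and
    scales participants and budget by a hypothetical growth factor."""
    return Phase(
        scope=scope,
        participants=int(current.participants * growth),
        budget=current.budget * growth,
    )

# Start with what one individual can implement alone.
phase = Phase(scope="one narrow decision", participants=1, budget=10_000.0)

# Each validated phase funds a broader one (scopes are illustrative).
for scope in ["a project domain", "an organization", "a community"]:
    phase = next_phase(phase, scope)
    print(f"{phase.scope}: {phase.participants} participants, "
          f"${phase.budget:,.0f}")
```

The point of the sketch is only that the growth is compounding: no single phase needs to reach the final scale, it only needs to reliably enable the next, slightly larger phase.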

The EA community is an ideal test bed for this experiment because it already contains a pre-selected group of people who not only believe in altruism, but who are actively trying to live those beliefs. My question is: how does one survey the EA community directly? Does this forum provide tools for conducting surveys? The most intelligent approach to any problem-solving effort might be to remain open to testing any potential solution and then simply doing what works, as opposed to adhering to beliefs or cognitive biases that filter out possibilities beyond our current understanding, or that keep us doing what we think should work even though results prove otherwise. Accordingly, this experiment must be decentralized and connect with members of the EA community directly if it is to reliably avoid being filtered out by the policies of any organization, or by the beliefs or cognitive biases of any individual leader.
