It’s so cool seeing articles that align with what I’ve been wanting to see for years! Holden also recently wrote about governance but with a more external-to-EA lens (for example, to responsibly govern AI companies) rather than using better governance to solve issues in EA causes and within EA itself.
This work is very aligned with what Roote is working on (one of the organizations I help run) and our work on meta existential risk (although we don’t explicitly name collective intelligence as a potential part of the solution to this problem). We think that improving governance and collective intelligence is very important. We’re fans of the civic tech efforts in Taiwan, and we are building societal-level information and coordination software with our Civic Abundance project (we are looking for funding and trying to hire a project lead!). Among the things we’ve looked at is integrating with Manifold to experiment with futarchy (but in the context of an informational dashboard, not tied to the actual governance process).
We’re also very aligned with Web3 for social good (like Greenpilled and Gitcoin). I personally believe that EA needs a better mechanism to fund public goods within its own community, given that EAIF seems to commonly ask public goods projects to become revenue-positive, which is usually not possible for public goods without turning them into private goods. Very unfortunate.
I was not aware that there was any research on the specific implications of collective intelligence on existential risk. I would have been excited to read a quick summary of the main points/findings or some hyperlinked articles.
Thanks for writing this!!