The above makes me think that you should therefore be even more skeptical of OAA’s chances of success than you are about Gaia’s chances.
I am, but OAA also seems less specific, and it’s harder to evaluate its feasibility compared to something more concrete (like this proposal).
In fact, we think that if there are sufficiently many AI agents and decision intelligence systems that are model-based, i.e., use some kind of executable state-space (“world”) models to run simulations, hypothesise counterfactually about different courses of action and external conditions (sometimes in collaboration with other agents, i.e., planning together), and deploy regularisation techniques (from Monte Carlo aggregation of simulation results to the amortized adversarial methods suggested by Bengio on slide 47 here) to permit compositional reasoning about risk and uncertainty that scales beyond the boundary of a single agent, then the benefits of collaborative inference of the most accurate and well-regularised models will be so huge that something like Gaia Network will emerge pretty much “by default”, because many scientists and industry players will work in parallel to build versions and local patches of it.
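To make the “model-based agents + Monte Carlo aggregation” piece of this claim more concrete, here is a minimal, hypothetical sketch of one agent scoring counterfactual actions by simulating an executable world model many times and aggregating the results. All names (`world_model`, `rollout`, `evaluate_action`) and the toy dynamics are assumptions for illustration only; this does not cover the amortized adversarial regularisation or the multi-agent/collaborative-inference parts.

```python
import random
from statistics import mean, stdev

# Hypothetical toy world model: given a state and an action, return a sampled
# next state and a reward. In a Gaia-style setup this would be a shared,
# learned state-space model, not a hard-coded function.
def world_model(state, action):
    noise = random.gauss(0, 0.1)
    next_state = state + action + noise
    reward = -abs(next_state)  # prefer states near zero
    return next_state, reward

def rollout(state, policy, horizon=10):
    """Simulate one counterfactual trajectory and return its total reward."""
    total = 0.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = world_model(state, action)
        total += reward
    return total

def evaluate_action(state, first_action, policy, n_samples=200):
    """Monte Carlo aggregation: score a candidate first action by averaging
    many simulated futures, keeping the spread as a crude risk estimate."""
    returns = []
    for _ in range(n_samples):
        next_state, reward = world_model(state, first_action)
        returns.append(reward + rollout(next_state, policy))
    return mean(returns), stdev(returns)

# Example: compare two candidate first actions under a simple baseline policy.
baseline_policy = lambda s: -0.5 * s
for a in (-1.0, 1.0):
    mu, sigma = evaluate_action(state=2.0, first_action=a, policy=baseline_policy)
    print(f"action {a:+.1f}: expected return {mu:.2f} ± {sigma:.2f}")
```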
My problem with this is that it sounds good, but the argument relies on many hidden premises that make me inherently skeptical of any strong claims like “(…) the benefits of collaborative inference of the most accurate and well-regularised models will be so huge that something like Gaia Network will emerge pretty much ‘by default’”.
I think this could be addressed by a convincing MVP, and I think that you’re working on that, so I won’t push further on this point.
It’s fine with me and most other people except for e/accs, for now, but what about the time when the cost of training powerful/dangerous models drops so much that anyone can buy a chip to train the next rogue AI for $1,000? How does compute governance look in that world?
The current best proposals for compute governance rely on very specific types of math. I don’t think throwing blockchain or DAOs at the problem makes a lot of sense, unless you find an instance of the very specific set of problems they’re good at solving.
My priors against the crypto world come mostly from noticing a lot of people throwing tools at problems without a clear story of how those tools actually solve the problem. This has happened so many times that I have come to generally distrust crypto/blockchain proposals unless they give me a clear explanation of why using these technologies makes sense.
But I think the point I made here was kinda weak anyway (it was, at best, discrediting by association), so I don’t think it makes sense to litigate this particular point.
Compare with the Collective Intelligence Project. It started with the mission to “fix governance” (and pretty much to “help counteract Moloch” in the domain of political economy, too, they all but used this concept, or maybe they even did, I don’t want to check it now), and now they have “pivoted” to AI safety and achieved great legibility on this path: e.g., they apparently partner with OpenAI on more than one project now. Does this mean that CIP is a “solution looking for a problem”? No, it’s just the kind of project that naturally lends itself to helping both with Moloch and with AI safety. I’d say the same could be said of Gaia Network (if it is realised in some form), and this lies pretty much in plain sight.
I find this decently convincing, actually. Like, maybe I’m pattern-matching too much on other projects which have in the past done something similar (just lightly rebranding themselves while tackling a completely different problem).
Overall, I still don’t feel very good about the overall feasibility of this project, but I think you were right to push back on some of my counterarguments here.
Thanks @Agustín Covarrubias. Glad to hear that you feel this is concrete enough to be critiqued and cross-validated; that was exactly our goal in writing and posting this. From your latest responses, it seems like the main reason why you “still don’t feel very good about the overall feasibility of this project” is the lack of a “convincing MVP”, is that right? We are indeed working on this along a few different lines, so I would be curious to understand what kind of evidence from an MVP it would take to convince you or shift your opinion about feasibility.