Announcing The Midas Project — and our first campaign (which you can help with!)
Summary
The Midas Project is a new AI safety organization. We use public advocacy to incentivize stronger self-governance from the companies developing and deploying high-risk AI products.
This week, we’re launching our first major campaign, targeting the AI company Cognition. Cognition is a rapidly growing startup [1] developing autonomous coding agents. Unfortunately, they’ve told the public virtually nothing about how, or even if, they will conduct risk evaluations to prevent misuse and other unintended outcomes. In fact, they’ve said virtually nothing about safety at all.
We’re calling on Cognition to release an industry-standard evaluation-based safety policy. We need your help to make this campaign a success. Here are five ways you can help, sorted by level of effort:
Stay in the loop about our campaigns by following us on Twitter and joining our mailing list.
Offer feedback and suggestions by commenting on this post or by reaching out at info@themidasproject.com.
Share our Cognition campaign on social media, sign the petition, or engage with our campaigns directly on our action hub.
Donate to support our future campaigns (tax-exempt status pending).
Sign up to volunteer, or express interest in joining our team full-time.
Background
The risks posed by AI are, at least partially, the result of a market failure.
Tech companies are locked in an arms race that is forcing everyone (even the most safety-concerned) to move fast and cut corners. Meanwhile, consumers broadly agree that AI risks are serious and that the industry should move slower. However, this belief is disconnected from their everyday experience with AI products, and there isn’t a clear Schelling point allowing consumers to express their preference via the market.
Usually, the answer to a market failure like this is regulation. When it comes to AI safety, this is certainly the solution I find most promising. But such regulation isn’t happening quickly enough. And even if governments were moving more quickly, AI safety as a field is pre-paradigmatic. Nobody knows exactly what guardrails will be most useful, and new innovations are needed.
So companies are largely being left to voluntarily implement safety measures. In an ideal world, AI companies would be in a race to the top, competing against each other to earn the trust of the public through comprehensive voluntary safety measures while minimally stifling innovation and the benefits of near-term applications. But the incentives aren’t clearly pointing in that direction—at least not yet.
However: EA-supported organizations have been successful at shifting corporate incentives in the past. Take the case of cage-free campaigns.
By engaging in advocacy that threatens to expose specific food companies for falling short of customers’ basic expectations regarding animal welfare, groups like The Humane League and Mercy For Animals have been able to create a race to the top for chicken welfare, leading virtually all US food companies to commit to going cage-free. [2] Creating this change was as simple as making the connection in the consumer’s mind between their pre-existing disapproval of inhumane battery cages and the eggs being served at their local fast food chain.
I believe this sort of public advocacy can be extremely effective. In fact, in the case of previous emerging technologies, I would go so far as to say it’s been too effective. Public advocacy played a major role in preventing the widespread adoption of GM crops and nuclear power in the twentieth century, despite huge financial incentives to develop these technologies. [3]
We haven’t seen this sort of activism leveraged to demand meaningful self-regulation from AI companies yet. So far, activism has tended to either (1) write off the value of self-governance (e.g., disparaging evaluation-based scaling policies as mere industry safety washing) or (2) take an all-or-nothing, outside-game approach (e.g., demanding a global moratorium on AI development). I do think much of this work is valuable. [4] But I also think there is an opportunity to use activism to create targeted incremental change by shining a spotlight on the least responsible AI companies in today’s market and calling on them to catch up to the industry standard.
About The Midas Project
The Midas Project is a nonprofit organization that leverages corporate outreach and public education to encourage the responsible development and deployment of advanced artificial intelligence technology.
We’re planning a number of public awareness campaigns to call out companies falling behind on AI safety, and to encourage industry best practices including risk evaluation and pre-deployment safety reviews.
Learn more about us on our website, or follow us on Twitter.
Our Campaign Against Cognition
Cognition is an AI startup building and deploying Devin, an autonomous coding agent that can browse the web, write code, execute said code, and generally take action independently. For more details on what we know about Devin, Zvi Mowshowitz provides a good summary. [5]
It should go without saying that deploying autonomous coding agents carries unique risks. Some of these risks are still speculative; others are already possible today. As such, conducting pre-deployment risk evaluation seems like the bare minimum for a company creating such coding agents.
Is Cognition doing risk assessment? We don’t know. Unlike nearly all leading AI companies, they haven’t released (or even announced plans to release) a policy on risk evaluation and model scaling. In fact, they haven’t released any safety policies whatsoever, as far as we can tell. We even reached out to ask them about this, but they wouldn’t return any of our emails.
So, we’re calling on the public to ask Cognition directly: How will you ensure your product is safe?
How you can get involved
To support our campaign against Cognition, consider sharing the page on Twitter, signing our petition, or taking action on our action hub.
Anti-lab advocacy can be controversial and risky, making it difficult for some philanthropic institutions to fund due to reputational and strategic concerns. Therefore, grassroots individual support is particularly valuable for us. We accept donations via our website and greatly appreciate any support you can offer. Please note that we do not yet have tax-exempt status, though we expect that to change soon.
And while money is important, these campaigns are fundamentally people-powered. One of the most impactful ways you can help is to take some time to promote, share, or participate in our campaign. There’s no minimum bar in terms of experience or time commitment. Even a few minutes a week can be impactful. If you’re interested in taking action on our campaigns, join our action hub. If you’re interested in volunteering on a more regular, committed basis, fill out our volunteer form!
We’re also hoping to grow the team in the near future. If this is the kind of project you’d be interested in working on full-time, please get in touch.
[1] Cognition was founded in late 2023 and has since raised hundreds of millions of dollars at a $2 billion valuation.
[2] To what degree these commitments will be fulfilled remains an open question, but this still appears to be one of the most cost-effective interventions in the animal rights movement’s history.
[3] As mentioned, I think protests against nuclear energy and GMOs probably went much further than what was socially optimal. Is the AI safety community making that same mistake with AI? Well, I don’t think so, mostly because I think this technology just is unprecedented and uniquely dangerous. We’ll see if history vindicates this.
[4] I’m a proud volunteer and board member for PauseAI, for example.
[5] And an update following news that their marketing was misleading, which had previously informed parts of Zvi’s initial post. Despite this blunder from Cognition, the smart consensus still seems to be that their product is very capable and high-risk.