Is work at Credo AI targeted at trying to reduce existential risk from advanced AI (whether from misalignment, accident, misuse, or structural risks)?
Credo AI is not specifically targeted at reducing existential risk from AI. We are working with companies and policymakers who are converging on a set of responsible AI principles that still need to be clearly articulated and implemented.
-
Speaking for myself now: I became interested in AI safety and governance because of the existential risk angle. As we have talked to companies and policymakers, it has become clear that most groups do not think about AI safety in that way. They are concerned with ethical issues like fairness, either for moral reasons or, more likely, for financial ones (no one wants an article written about their unfair AI system!).
So what to do? I believe that supporting companies in incorporating “ethical” principles like fairness into their development process is a first step toward incorporating other, more ambiguous values into their AI systems. In essence, fairness is the first non-performance ethical value that most governments and companies are realizing they want their AI systems to adhere to. It isn’t generic “value alignment”, but it is a big step beyond just minimizing a traditional loss function.
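To make that distinction concrete, here is a minimal sketch (not Credo AI’s actual tooling; the metric names, toy data, and helper functions are all illustrative) of what it means to evaluate a non-performance value like fairness alongside a traditional performance metric:

```python
# Illustrative sketch only: contrast a performance metric (accuracy) with a
# simple fairness metric (demographic parity gap) computed on the same predictions.
import numpy as np

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels (a traditional performance value).
    return np.mean(y_true == y_pred)

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups (0 and 1).
    rate_a = np.mean(y_pred[group == 0])
    rate_b = np.mean(y_pred[group == 1])
    return abs(rate_a - rate_b)

# Toy labels, predictions, and group membership, purely for illustration.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("accuracy:", accuracy(y_true, y_pred))                 # performance value
print("parity gap:", demographic_parity_gap(y_pred, group))  # ethical, non-performance value
```

A system can score well on the first number and poorly on the second, which is why fairness has to be checked and governed explicitly rather than assumed to fall out of loss minimization.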
Moving beyond fairness, there are many components of the AI development process, infrastructure, and government understanding that need to change. Building a tool that can be incorporated into the heart of the development process gives us an avenue to support companies along a host of responsible-AI dimensions, some of which our customers will ask for (supporting fair AI systems) and some of which they won’t (reducing the existential risk of their systems). All of this will matter for existential risk, particularly in a slow-takeoff scenario.
All that said, if the existential risk of AI systems is your specific focus (and you don’t believe in a slow-takeoff scenario where the interventions Credo AI will support could be helpful), then Credo AI may not be the right place for you.