Credo AI is hiring!

Credo AI is hiring technical folks versed in responsible AI; if you are interested, please apply! If you aren’t a data scientist or other technical professional, but are inspired by our mission, please reach out. We are always looking for talented, passionate folks.

What is Credo AI?

Credo AI is a venture-backed Responsible AI (RAI) company focused on the assessment and governance of AI systems. Our goal is to move RAI development from an “ethical” choice to an obvious one. We aim to do this both by making it easier for organizations to integrate RAI practices into their AI development and by collaborating with policymakers to set up appropriate ecosystem incentives. Ultimately, we want to reduce the risk of deploying AI systems, capturing AI’s benefits while mitigating its costs.

We make RAI easier with our governance, risk & compliance (GRC) product and an open-source AI assessment framework called Lens. Our data science team focuses on the latter, with the goal of creating the most approachable tool for comprehensive RAI assessment of any AI system. We take a “what you can’t observe, you can’t control” approach to this space, and believe that assessment lays the foundation for all other aspects of an RAI ecosystem (e.g., auditing, mitigation, regulation). Here’s a notebook showing some of Lens’s capabilities in code.

A particular focus of our governance product is involving diverse stakeholders in the governance of AI systems. Technical teams obviously have important perspectives, but so do compliance, governance, and product teams, as well as social scientists. We aim to provide the forum for their effective collaboration in our GRC software, and to provide technical outputs via Lens that are useful for everyone.

Our collaboration with policy organizations is just beginning, but we are already contributing our perspective to the broader policy conversation. For instance, see our comments to NIST on Artificial Intelligence Risks. Our CEO and technical policy advisors have been part of the World Economic Forum, The Center for AI & Digital Policy, the Mozilla Foundation, and the Biden Administration.

Who is Credo AI?

We are a small, ambitious team committed to RAI. We are a global, remote company with expertise in building amazing products, technical policy, social science, and, of course, AI. We are a humble group, and are focused on learning from the policy community, academia, and, most critically, our customers. Find a bit more about us and our founder here.

The data science team is currently 2 people (Ian Eisenberg and Amin Rasekh). The needs in this space are immense, so early hires will have the opportunity (and indeed the responsibility!) to own significant components of our assessment framework.

Relationship to Effective Altruism

The EA community has long argued that AI governance is an important cause area. A great starter can be found here, and many other posts here. Most of this work is being pursued by particular governments, academia, or a few non-profits.

However, making principles of AI governance a reality requires a broader ecosystem approach: governments enacting regulations, customers and businesses demanding AI accountability from AI service providers, academic institutions exploring evidence-based governance approaches, independent auditors focused on evaluating AI systems, and more. There are many interacting parts that must come together to change the development of the AI systems affecting our lives, most of which are developed in the corporate sector.

Credo AI specifically engages with the corporate sector, playing a role that is sometimes described as Model Ops. We are the bridge between theory, policy, and implementation that can connect with corporate decision making. We think of ourselves as creating a “choice architecture” that promotes responsible practices. For better or worse, the bar for RAI development is very low right now, which means there is a ton we can do to improve the status quo, whether that’s by making relatively well-researched approaches to “fair AI” easy to incorporate into model development, making existing regulations more understandable, or being the first to practically operationalize bleeding-edge RAI approaches.

There is plenty of low-hanging fruit for us at these early stages, but our ambitions are great. In the medium term, we would like to build the most comprehensive assessment framework for AI systems and help all AI-focused companies improve their RAI processes. On a longer time scale, we would love to inform an empirical theory of AI policy. Others have pointed out the difficulty AI policies will have in keeping up with the speed of technical innovation. Building a better science of effective AI governance requires knowledge of the policies corporations are employing and their relative effectiveness. We are far (far!) away from having this kind of detail, but it’s the kind of long-term ambition we have.

Who should apply?

If you believe you have the skills and passion for contributing to the nascent world of AI governance, we want to hear from you!

To help you figure out if that’s you, I’ll describe some of the near-term challenges we are facing:

  • How can general principles of Responsible AI be operationalized?

  • How can we programmatically assess AI systems for principles like fairness, transparency, etc.? (A minimal illustration follows this list.)

  • How can we make those assessments understandable and actionable for a broad range of stakeholders?
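
To give a flavor of what programmatic assessment can look like, here is a minimal sketch of a group-fairness check written in plain scikit-learn. To be clear, this is illustrative only: it is not Lens’s actual API, and the model, data, and group attribute are all hypothetical.

```python
# Illustrative sketch only -- not Lens's actual API. A minimal group-fairness
# check: compare a classifier's behavior across groups defined by a
# (hypothetical) sensitive attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical data: features, binary labels, and a binary group indicator.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Per-group metrics: selection rate (share predicted positive) and accuracy.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: selection rate = {pred[mask].mean():.3f}, "
          f"accuracy = {accuracy_score(y[mask], pred[mask]):.3f}")

# Demographic parity difference: the gap in selection rates between groups.
# Values near 0 indicate similar treatment on this one (narrow) metric.
dpd = abs(pred[group == 0].mean() - pred[group == 1].mean())
print(f"demographic parity difference = {dpd:.3f}")
```

A framework like Lens aims to go further than a one-off script like this: standardizing many such checks behind one interface and turning the raw numbers into reports that non-technical stakeholders can act on.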

The data science team’s broader goal is to build an assessment framework that connects AI teams with RAI tools developed in academia, the open-source world, and at Credo AI. We want this framework to make employing best practices easy, so that “responsible AI development” becomes an obvious choice for any developer. Creating this assessment framework lays the groundwork for Credo AI’s broader mission: ensuring that AI is developed responsibly.

To be a bit more concrete, we are looking for people who:

  • Have an existing passion for, and knowledge of, this space. You don’t have to have worked in “AI safety” or “responsible AI” before, but this post shouldn’t be the first time you are thinking about these issues!

  • Know how to program in Python (for the data science team); familiarity with the process of AI development is a definite plus.

  • If you aren’t interested in the data science team, but believe you can contribute, please reach out anyway!

  • Have an “owner” mindset. This term gets tossed around a lot, but at a startup our size it truly is a requirement. The ground is fertile, and we need people who have the vision and follow-through to develop wonderful things.

Hiring process and details

Our hiring process starts with you reaching out. We are looking for anyone who reads the above section and thinks “that’s me!” If that’s you, send a message to me at ian@credo.ai. Please include “Effective Altruism Forum” in the subject line so I know where you heard about us.

Specific jobs and requirements are posted here.

Q&A

We welcome any questions about what working at Credo AI is like, more details about our product, the hiring process, what we’re looking for, or whether you should apply. You can reach out to jobs@credo.ai, or contact me directly at ian@credo.ai.

Who am I?

My name is Ian Eisenberg. I’m a cognitive neuroscientist who moved into machine learning after finishing my PhD. While working in ML, I quickly realized that I was more interested in the socio-technical challenges of responsible AI development than in AI capabilities; my interest was first sparked by the challenges of building aligned AI systems. I am an organizer of Effective Altruism San Francisco, and I spend some of my volunteer time with the pro bono data science organization DataKind.