Credo AI is hiring for AI Governance Researcher & more!

Credo AI is hiring folks across the organization; if you are interested, please apply! I lead the AI governance research team and will describe the role I am hiring for in detail, but please apply through our portal if you fit other roles. I’ve highlighted five that I believe may be of most interest to EA folks, but we also have roles in sales, customer success, and demand-gen.

  1. AI Governance Researcher
  2. Policy Manager
  3. Solutions Engineer
  4. Product Designer
  5. Technical Program Manager (TPM)

About Credo AI

Credo AI is a company on a mission to empower organizations to deliver Responsible AI (RAI) at scale. Credo AI brings context-driven governance and risk assessment to ensure compliant, fair, and auditable development and use of AI. Our goal is to move RAI development from an “ethical” choice to an obvious one. We aim to do this both by making it easier for organizations to integrate RAI practices into their AI development and by collaborating with policymakers to set up appropriate ecosystem incentives. Ultimately, we want to reduce the risk of deploying AI systems, allowing companies to capture AI’s benefits while mitigating the unintended consequences.

We make AI governance easier with our AI governance platform; contribute to the broader ecosystem through blogs, papers, and policy advocacy (both direct engagement with policymakers and submitted commentary); and advance new methods and artifacts for AI transparency.

AI Governance Research Role

My team has two fundamental goals at Credo AI, both related to advancing best practices for AI governance: (1) we research and develop new technological approaches that can influence our product, and (2) we crystallize these insights into novel research, published as papers or blog posts, and into perspectives that inform our policy advocacy.

This is the role I’m hiring for. While all members of the team contribute to both goals, this role is more focused on (2) above. Most of the information is in the job description, so please give that a read.

  • This role is not entry level. I won’t set a specific years-of-experience expectation, but you should already have experience in roles and fields relevant to AI.

  • This role will focus on practically connecting enterprise AI governance practices to technical AI systems.

  • Candidates should ideally have expertise in AI governance, enterprise risk management, and technical AI systems, and should bring a research background with demonstrated credibility.

  • Coding is not necessary, but strong technical domain knowledge is!

Hiring process and details

Our hiring process starts with you applying through the job portal.

Q&A

We welcome any questions about what working at Credo AI is like, more details about our product, the hiring process, what we’re looking for, or whether you should apply. You can email jobs@credo.ai, or reach out to me directly at ian@credo.ai.

Relationship to Effective Altruism

In the past I’ve advertised roles here and felt the need to argue why AI governance is an important component of our general “AI-risk-mitigation” strategy. Now I’ll just point you to this 80k post.

The one addition I’ll make is how Credo AI specifically fits into this landscape. A critical aspect of AI governance is proving that governance is possible and commensurate with innovation. You can see this in Senator Schumer’s recent remarks on AI focusing on “safe innovation”. This means it’s important that we create tooling that makes AI governance practical, which both speeds up its adoption and influence on the normal AI lifecycle and reassures policymakers that they can demand more.

Another important aspect is recognizing and representing the complicated value chain that requires oversight. While EA focuses primarily on catastrophic risk (and thus often on leverage over frontier model developers), the value chain is actually vast, composed primarily of downstream AI users and application developers. These are Credo AI’s bread and butter, and (1) helping them deploy AI more responsibly is a net good and a contributor to the overall culture of RAI, and (2) supporting their procurement needs puts a different kind of pressure on upstream foundation model developers to build more transparent and safe AI systems.

I won’t belabor the point, but it’s been a wonderful space for me to have an impact!

Who am I?

My name is Ian Eisenberg. I’m a cognitive neuroscientist who moved into machine learning after finishing my PhD. While working in ML, I quickly realized I was more interested in the socio-technical challenges of responsible AI development than in AI capabilities, having first been inspired by the challenges of building aligned AI systems.

I am a co-founder of the AI Salon in SF. I’m also currently facilitating the AI Governance Course run by BlueDot Impact (a great way to get into the field!). Previously, I was an organizer of Effective Altruism San Francisco and spent some of my volunteer time with the pro-bono data science organization DataKind.
