I wish I could! Unfortunately, despite several conversations and email exchanges with various AI safety donors, I’m still confused about why they are declining to fund CAIP. The message I’ve been getting is that other funding opportunities seem more valuable to them, but I don’t know exactly what criteria or measurement system they’re using.
At least one major donor said that they were trying to measure counterfactual impact—something like: estimate how much good the laws you’re championing would accomplish if they passed, and then ask how close they came to passing. However, I don’t understand why this analysis disfavors CAIP. Compared to most other organizations in the space, the laws we’re working on are less likely to pass, but would do much more good if they did pass.
Another possible factor is that my co-founder, Thomas Larsen, left CAIP in the spring of 2024, less than a year after starting the organization. As I understand it, Thomas left because he learned that political change is harder than he had initially thought, because he felt frustrated that CAIP was not powerful enough to accomplish its mission within the short time he expects we have left before superintelligence is deployed, and because he did not see a good fit between the skills he wanted to use (research, long-form writing, forecasting) and CAIP’s day-to-day needs.
Thomas’s early departure is obviously an important piece of information that weighs against donating to CAIP, but given the context, I don’t think it’s reasonable for institutional donors to treat it as decisive. I actually agree with Thomas’s point that CAIP’s mission is very ambitious relative to our resources and that we most likely will not succeed. However, I think it’s worth trying anyway, because the stakes are so high that even a small chance of success is very valuable.
If I were to bet on what’s happened here, I’d bet it’s something to do with Thomas leaving. Looking at his LinkedIn and his Forum history, he seems very well connected in the field of AI safety. I suspect it was easy to get funding because people knew and trusted him.
If you’re right, I think that would point to x-risk funders trusting individuals far too much and institutions far too little. Thomas is a great guy, but one person losing belief in his work (which happens all the time, mostly for private reasons and mostly independent of the work’s actual value) should never be a reason to defund an otherwise functioning org doing seemingly crucial work.
If the alternative theory is correct and the hit pieces are to blame, that still seems like an incorrect decision. When you’re lobbying for something important, you can expect some pushback; that shouldn’t be a reason to pull out immediately.
I agree!
Very well said! I think your first paragraph sums up the most important parts of the story of why CAIP was defunded—Thomas lost interest, mostly for private reasons, and the x-risk funders relied far too heavily on this data point. In part this is because the x-risk funders appear to lack any kind of formal grantmaking criteria, as I write about in post 7 of this sequence.