I think this makes sense, but it seems kind of disconnected from the presentation, which seemed to indicate CAIP proposes reasonable policy and has a strong team. Perhaps Jason can clarify why he thinks major donors have passed on this opportunity.
I wish I could! Unfortunately, despite having several conversations and emails with the various AI safety donors, I'm still confused about why they are declining to fund CAIP. The message I've been getting is that other funding opportunities seem more valuable to them, but I don't know exactly what criteria or measurement system they're using.
At least one major donor said that they were trying to measure counterfactual impact: something like, try to figure out how much good the laws you're championing would accomplish if they passed, and then ask how close they got to passing. However, I don't understand why this analysis disfavors CAIP. Compared to most other organizations in the space, the laws we're working on are less likely to pass, but would do much more good if they did pass.
Another possible factor is that my co-founder, Thomas Larsen, left CAIP in the spring of 2024, less than a year after starting the organization. As I understand it, Thomas left because he learned that political change is harder than he had initially thought, because he felt frustrated that CAIP was not powerful enough to accomplish its mission within the short time that he expects we have left before superintelligence is deployed, and because he did not see a good fit between the skills he wanted to use (research, longform writing, forecasting) and CAIP's day-to-day needs.
Thomas's early departure is obviously an important piece of information that weighs against donating to CAIP, but given the context, I don't think it's reasonable for institutional donors to treat it as decisive. I actually agree with Thomas's point that CAIP's mission is very ambitious relative to our resources and that we most likely will not succeed. However, I think it's worth trying anyway, because the stakes are so high that even a small chance of success is very valuable.
If I were to bet on what's happened here, I'd bet it's something to do with Thomas leaving.
Looking at his LinkedIn and his Forum history, he seems very well connected in the field of AI safety.
I suspect it was easy to get funding because people knew and trusted him.
If you're right, I think that would point to x-risk funders trusting individuals way too much and institutions way too little. Thomas is a great guy, but one person losing belief in his work (which happens all the time, mostly for private reasons and mostly independent of the actual value of the work) should never be a reason to defund an otherwise functioning org doing seemingly crucial work.
If the alternative theory is correct and the hit pieces are to blame, that still seems like an incorrect decision. When you're lobbying for something important, you can expect some pushback; that shouldn't be a reason to pull out immediately.
I agree!
Very well said! I think your first paragraph sums up the most important parts of the story of why CAIP was defunded: Thomas lost interest, mostly for private reasons, and the x-risk funders relied far too heavily on this data point. In part this is because the x-risk funders appear to lack any kind of formal grantmaking criteria, as I write about in post 7 of this sequence.
I do not feel qualified to judge the effectiveness of an advocacy org from the outside: there's a lot of critical information, like whether they're offending people, whether they're having an impact, whether they're sucking up oxygen from other orgs in the space, whether their policy proposals are realistic, and whether they're making good strategic decisions, that I don't really have the information to evaluate. So it's hard to engage deeply with an org's case for itself, and I default to this kind of high-level prior. The funders can also see this strong case and still aren't funding it, so I think my argument stands.
I think these are great criteria, Neel. If one or more of the funders had come to me and said, "Hey, here are some people who you've offended, or here are some people who say you're sucking up their oxygen, or here's why your policy proposals are unrealistic," then I probably would have just accepted their judgment and trusted that the money is better spent elsewhere. Part of why I'm on the forum discussing these issues is that so far, nobody has offered me any details like that; essentially all I have is their bottom-line assessment that CAIP is less valuable than other funding opportunities.
I think we agree. Thinking out loud: perhaps the community should consider a more transparent way of making these decisions. If we collectively decide to follow large funders but are unable to understand their motives, genuine funding diversification is impossible.