Some minor clarifications for the Long Term Future Fund section (Larks did reach out to us and ask for feedback, though it looks like not all of our corrections made it into the final version). [Correction: We hadn’t actually sent the relevant corrections back yet, so this is not Larks’ fault. Sorry for that!]
Notably this means the funds only paid out $150,000 to CFAR (10%), as the balance was made up by a private donor after CEA did not approve the second grant.
This is correct as written, but I want to clarify that CEA did not reject the grant; it was still in the process of deciding whether to approve it when a private donor stepped in. I expect that CEA would have eventually approved this grant, though there is definitely still some uncertainty about that.
I was not impressed that one grant that saw harsh and accurate criticism on the forum after the first round was re-submitted for the second round. Ex post this didn’t matter as CEA rejected it on substantive grounds the second time
This is correct as written, but I think I want to clarify what is meant by resubmission here:
We’ve had a few grants that have run into logistical difficulties (like there not being a clear way to make a grant compatible with CEA’s charitable objectives), and in cases like this we’ve worked with the potential grantee to resolve those issues. I think the support we provide should be independent of the evaluations of grants that we make, and I don’t think we should reject grants because of logistical issues like that, if they are easily fixable.
The Lauren Lee grant ran into some issues in this space that took a while to resolve, so CEA ended up only properly evaluating the grant in the round following the one in which we recommended it, and then subsequently rejected it. The “resubmission” in that sense shouldn’t be seen as an additional strong endorsement of the grant; it is just something that could happen to any grant, and I think it doesn’t say much about how good we thought the grant was after we had made the decision to recommend it.
Geography chart
We were about to send Larks an updated version of the geography data before this post went up. Here is a graph with my best guesses (this includes all recommendations we made, even for grants that didn’t end up going through):
And here is one excluding the three grants that ended up being covered by private donors:
[Edit Note: I briefly had a version of the comment up that showed the geographic distribution by count instead of by grant amount. This is now fixed.]
Also, obviously: thank you for writing this review. As someone with a strong interest in having good public discourse around AI Alignment, I am deeply grateful for all of your work on this, and deeply appreciate the care and effort that goes into these reviews and the effect they have on people trying to successfully navigate the growing AI Alignment landscape.