Thanks for the update, I appreciate the transparency on the project’s shortcomings.
“Upon my initial review, it had a mixed track record. Some grants seemed quite exciting, some seemed promising, others lacked the information I needed to make an impact judgment, and others raised some concerns.”
I’d be interested in what (kind of) grants you think seem great and not so great.
It’s hard to go into much detail without investing a lot of time. At a general level, I think some grants led people to start projects with good, impactful output in areas EA cares about (including “meta” work). This describes only a subset of the grants, but I think that’s appropriate given the hits-based approach of this style of grantmaking. There were also some grants that I think created or deepened risks without much offsetting benefit. This isn’t specific to the particular grants made, but the general types of risk I would investigate in a more thorough review include: impacts on the EA ecosystem and its incentives (e.g., how funding or not funding a particular project incentivizes others), impacts on nascent fields (e.g., AI safety), and infohazards.