I’d like to point out, for the benefit of other forum readers, that EAs hold different views on the average expected value (EV) of projects, the variance in projects’ EV, and the prevalence and severity of negative-EV projects. Based on the applications the EA Angel Group has received, as well as lengthy lists of projects that have existed or currently exist in the EA community, I do not currently think many obviously negative-EV projects exist (it is possible that people come up with negative-EV projects but receive feedback on potential harms before the project’s existence becomes widely known). I have seen many projects that could have near-zero EV, either by failing to achieve their intended objectives or by underperforming a top-rated EA charity, but opinions on any given project’s EV often vary widely.
Jan focuses on x-risk and the long-term future. No project seeking to directly affect x-risks, for example by doing AI safety research, has yet applied to the EA Angel Group, and I have rarely, if ever, seen such projects in EA project lists. It is possible that the people behind such projects are already aware of the risks of sharing information, or do not see the need to publicize their project’s existence or to apply for early-stage funding from funders that do not focus exclusively on x-risk. It is also possible that complex projects doing direct work to affect the long-term future have greater potential to cause harm and should be reviewed more rigorously.
If most EA projects are EV-positive and in need of funding, then this article’s suggestion is likely net positive. Moreover, essentially all individual funders I’ve spoken with already consult other funders and experts, if they see the need, before making funding decisions. If this is the norm, which I believe it is, the unilateralist’s curse is much less likely to occur in practice with regard to EA project funding.
Most importantly, this article’s proposal will probably only have a marginal impact on project and funder discoverability. Historically, many resources have existed online to enable EAs and funders to discover projects, like the .impact Hackpad (which shut down when Hackpad was acquired), various lists of projects that have popped up on the EA forum and elsewhere on the internet, and the EA Work Club. Announcing that a project exists or is seeking funding is simply sharing information, and there doesn’t appear to be any easy way to prevent people from sharing information if they want to.
Therefore, I do not think it is fair to label this proposal a “bad idea.” Implementing it would make it only marginally easier for funders to learn about projects than existing methods, such as posting a project idea directly on the EA Forum and soliciting funding there, as has been done many times in the past. Someone sufficiently motivated to seek funding can simply speak with many EAs they encounter and ask them for funding, sidestepping this article’s list of funders.
Nevertheless, there may still be a risk of a funder making a mistake and funding something without seeking additional evaluations from others. That is why I created the EA Angel Group, which begins with a staff review of projects, after which funders share their evaluations of projects with one another, eliminating the possibility of one funder funding something while unaware of other funders’ opinions. A setup like the EA Angel Group is safer than publicly posting everyone’s contact information online and appears to achieve the same overall objectives as this article’s proposal.