Midtermist12
While I understand the intent behind publicly praising well-known contributors, I think we should consider the potential downsides. Heaping more praise on individuals who are already widely celebrated could be net negative, especially when there are many others who contribute valuable work but go largely ignored on the forum. This risks reinforcing a narrow focus on a few voices at the expense of elevating diverse perspectives and recognizing unsung contributors. Perhaps it would be more productive to highlight those who often don’t receive recognition but still make significant contributions.
Also, I believe you meant “compliments,” not “complements.”
A common perception in EA is that Open Philanthropy and other elite EA organizations focus on doing the most good by their own lights, which can come across as detached from broader community engagement. However, I believe there is a strong case, even from an impartial welfarist perspective, that empowering the broader EA community to explore and test ideas could be extremely high-EV. The EA community is vast, and there is a wealth of ideas beyond what the elite circle generates. Yet the “do-ocracy” model, where people are encouraged to pursue their own projects, often disempowers those who don’t have the time or resources to do so.
Additionally, the dismissal of “EA should” statements, where suggestions are ignored because the originator isn’t positioned to implement them, further limits the potential for innovation. While tools like the EA Funds exist, they focus narrowly on pre-determined areas, and rejections are often made without feedback, leaving many high-EV ideas unexplored and unsupported.
Given that much of EA’s potential for innovation lies within the broader community, what steps can Open Phil take to better engage with and support exploratory, high-EV ideas from the wider EA base? How can Open Phil foster an environment where more ideas from the community can be tested, rather than maintaining a top-down approach that may be missing valuable opportunities?
Thanks for the comment, Ben! You’re right that a perfectly applied scout mindset involves critically analyzing information and updating based on evidence, rather than deferring. In theory, someone applying the scout mindset would update the correct amount based on the fact that they have an interest in a certain outcome, without automatically yielding to critiques. However, in practice, I think there’s a tendency within EA to celebrate the relinquishing of positions, almost as a marker of intellectual humility or objectivity.
This can create a culture where people may feel pressure to seem “scouty” by yielding more often than is optimal, even when the epistemic ecosystem might actually need them to advocate for the value of their intervention or program. In such cases, the desire to appear unbiased or intellectually humble could lead people to abandon or underplay their projects prematurely, which could be a loss for the broader system.
It’s a subtle difference, but I think it’s worth considering how the scout mindset is applied in practice, especially when there’s a risk of overcorrecting in the direction of giving up rather than pushing for the potential value of one’s work.
Thanks for your thoughtful response, Joey. I had originally approached this issue more from the perspective of founders and leaders being less “soldiery” for their projects, but I see that the funder’s viewpoint, especially regarding the counterfactual uses of money, is quite different and valid.
One key difference is that the counterfactual reallocation of talent from failed AIM charities may not be as impactful as the reallocation of funds by AIM-adjacent funders. As you mentioned, many people who worked at a failed AIM charity are likely to join another CE charity or work for an EA meta org, but these roles are in high demand and often attract top-tier talent regardless. It’s not clear that the movement of talent between these organizations would have as large an impact as reallocating funds to more successful initiatives.
This is where the dynamic between founders and funders diverges. From the leader’s perspective, it might make more sense to continue to pivot, seek out other funding sources, and keep the project alive, particularly if they still believe in the long-term potential. On the other hand, from the funder’s perspective, cutting their losses and focusing on capitalizing on wins may provide a much clearer path to maximizing impact. It seems that the optimal decisions for founders and funders could diverge, depending on their roles in the ecosystem.
I appreciate your insight into how marginal bets play into these decisions and how AIM’s cohort-based structure could actually benefit from higher shutdown rates. It seems like there’s a balance between empowering founders to pursue potential breakthroughs while ensuring funders can make optimal reallocation decisions for broader impact.
I agree that funders can sometimes quit too early on promising projects. Founders and those directly involved often play a crucial role in pushing through early setbacks. However, I worry that in the EA community, there’s an overemphasis on the “scout” mindset—being skeptical of one’s own work and too quick to defer to critiques from others.
I should acknowledge that I may be strawmanning the scout mindset a bit here. In its ideal form, the scout mindset encourages intellectual humility and a willingness to see things as they are, which is clearly valuable. The practical application, however, often leads people to focus on being extra vigilant about potential biases in projects they’re closely involved with. While this caution is important, I think there’s a risk that it prevents people from taking the necessary “inside view” to successfully lead new or risky projects.
Many successful endeavors require a degree of tenacity and advocacy from the inside—the willingness to believe in a project’s potential and push forward despite early doubts. In systems like the legal world or even competitive markets, having “soldiers” who advocate strongly for their side often leads to better overall outcomes. Within EA, founders and project leaders can play this same balancing role by fighting for their initiatives when external doubts arise. Without this advocacy, projects with high long-term potential might be abandoned too soon.
The soldier mindset may be particularly important for those leading high-risk, high-reward projects. Individuals like Elon Musk at SpaceX or Jeff Bezos at Amazon had to persist through early failures because they believed in their projects when others didn’t. Their success depended on taking the inside view and pushing forward despite setbacks. If founders in EA are too quick to adopt the scout mindset, we might lose out on similarly promising projects.
In short, while the scout mindset has its place, I think we need a balance. Founders and those deeply involved in a project should serve as a necessary counterweight to broader skepticism. A healthy epistemic system requires both scouts and soldiers, and right now, I think EA might benefit from more of the latter in some contexts. I’d also be interested in case studies of people who have quit projects to better understand these dynamics. Your research idea sounds like a great way to explore whether we’re underestimating the value of persistence in high-risk, high-reward initiatives.
Posting from an alt account...
Very disappointed that the EA Hub is not at least retaining the information for future use, or otherwise making any effort to pass it along. Collectively, EAs, including myself, have spent a lot of time submitting this information, trusting that it would be stewarded and used. Now it seems no effort is being made to preserve even the possibility of its future use. I don’t know what retaining the information would cost, but I would be very surprised if that cost exceeded either the value of its possible future use or the harm from the loss of trust caused by discarding the products of community members’ time so cavalierly.
The lack of care for the information gathered will likely cause people to update slightly against spending their time on collaborative networking projects.
Posting from an alt account...
I definitely feel that hubris and elitism damage the exploration of new ideas within the EA space. Speaking from my experience promoting a new idea, for which I founded a non-profit, I have found the EA community generally unhelpful, with a few notable exceptions.
I was encouraged when I heard 80k and other sources discuss the value of exploration alongside exploitation. On that view, if there isn’t yet evidence to support a new idea or intervention but there is a plausible mechanism of impact, search costs are usually warranted. However, when I discussed my idea, the typical response was an exaltation of “red-teaming” with very little discussion of developing the idea or validating it empirically. I know that rigorously evaluating the possible limitations and downsides of new ideas is indispensable, but the degree to which this is valorized over idea development is absurd where new ideas are concerned.
My brief interactions with people who hold some power and influence in EA were rather disappointing as well. I always had the impression that organizations believed the existing thought leaders already knew essentially all of the areas in which fruitful interventions might be found or which ideas had merit. When it comes to EA using its resources to aid projects, the central consideration has seemed to be the connections one has made. One defense of this allocation is that people with good ideas will eventually become known within EA, but that process is slow and selects for those with networking skills and patience.
When I applied for a grant from EAIF, the declination came with no explanation; the stated reason was that they did not have time. The notion that EAIF lacks the resources to hire enough staff to explain the deficiencies in a grant proposal is absurd. It seems to me that either they don’t hold people trying to contribute to EA in new ways in high regard, or that an explanation would somehow render them accountable.
My local EA group is nice, but the focus seems to be on valorizing EA’s heroes rather than supporting members’ ideas. They are often unwilling to dedicate any thought or time to new members’ ideas. I would think such groups could be working together to explore and develop potentially high-EV ways to better the world. Instead, my experience has been that of a fan club for thought leaders.
I cannot help but feel concerned that there are many people with excellent, potentially high-EV ideas who lack my stubbornness and/or confidence. I am a very strong believer in the core of EA: using reason to ascertain how to do the greatest good and doggedly pursuing it. I would think it would be immensely high-EV to cultivate new ideas, evaluate the means and costs of empirical testing, and help EAs actually implement those tests.
EA purports to value new ideas, but in action it is often unhelpful, and even smothering, with regard to their development.
There wouldn’t be an association with planned disloyalty nor an implication for military strategy because the person would keep his/her intention secret.