Exploring New Projects: Applied AI, Collective Knowledge, and Community-Driven Initiatives

Hi everyone,

Random ideas pop into my mind quite often. I’ve decided to write some of them down here, in a shorter format with just the main ideas. This list is not focused on any specific organization that could implement them, or on any particular cause area; instead it includes a mix of community-centered ideas, cause-related concepts, and more. Some of them might already exist in some form.

1.0 EA Applied AI Research and Entrepreneurship Center

Given the rapid advancements in AI and its numerous potential applications, it might be beneficial to establish an organization focused on identifying, researching, and funding applied AI projects that align with EA principles. This center would act as a specialized counterpart to existing organizations like CE or MissionMotor but with a distinct focus on AI-driven initiatives.

The primary objectives of this center would be:

  • Research and Identify Use Cases: Systematically explore potential ethical AI use cases.

  • Prototype and Develop Solutions: Engage with experts to prototype promising ideas, testing their feasibility and potential impact.

  • Fund and Incubate Projects: Establish funding channels and offer incubation support for high-impact AI projects.

This initiative could attract interest from the EA community, especially those engaged with AI ethics, technology policy, and entrepreneurship. It is also likely that this project would gain increasing relevance as AI technology continues to advance. Both of the following ideas could be researched further in such an org, and funded if they seem worthwhile.

1.1 The Effective Altruism Collective Mind

The EA community generates a wealth of ideas, insights, and resources. However, these contributions are often scattered across forums, personal notes, and discussions, leading to lost opportunities for collaboration and innovation. With advancements in AI, particularly in fine-tuning LLMs and Retrieval-Augmented Generation (RAG) systems, there is an opportunity to create a centralized knowledge hub specifically tailored to the needs and context of the EA community.

1.1.1 Fine-tuning

Two examples of this kind of LLM customization already exist. The first is “custom GPTs”, where models can be configured to operate within specialized contexts. The second is the option within ChatGPT’s settings to set a general context: this context acts as a baseline prompt that is implicitly applied to every conversation.

1.1.2 Integration with a Retrieval-Augmented Generation (RAG) System

To enhance the effectiveness of this model, it could be built with a Retrieval-Augmented Generation system. In this setup, the model could draw from a vast database of community contributions, stored as structured or semi-structured documents. When responding to queries, the model would first retrieve relevant documents or information from this database, ensuring that its responses are both contextually accurate and up-to-date with the latest contributions. This combination of generative and retrieval-based approaches would significantly improve the quality, precision, and reliability of the information provided by the LLM.
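A toy sketch of the retrieve-then-generate flow described above, using simple word-overlap scoring as a stand-in for a real vector store (all names and sample documents are illustrative; a production system would use embeddings for retrieval and pass the augmented prompt to an actual LLM):

```python
# Toy RAG flow: retrieve the most relevant community contributions,
# then hand them to the LLM as context for answering the query.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the query with retrieved context before the generation step."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

contributions = [
    "Project idea: a centralized EA knowledge hub using RAG.",
    "Research finding: cage-free campaigns remain cost-effective.",
    "Forum update: new grant round opens next month.",
]
prompt = build_prompt("Has a knowledge hub project idea been proposed?", contributions)
```

Because new contributions only need to be added to the document store, not baked into model weights, the knowledge base can stay current without retraining.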

Proposal: Develop a collective, community-driven LLM specifically fine-tuned for EA contexts (this could use an existing LLM’s API; the goal here is not to create a whole new LLM). This model would be a dynamic repository where EAs (including orgs that publish articles) can input their contributions, leading to a growing, context-rich knowledge base.

Key Features:

  • User Contributions: Community members (or orgs) could input their thoughts, resources, or updates into the model, which would then directly update the context for the LLM. For example, if a member shares a new project idea or research finding, the LLM’s knowledge would immediately adjust and incorporate this new information.

  • Contextual Knowledge: The fine-tuned LLM could be asked questions such as whether a project has already been proposed, what the current research trends in a particular area are, what resources might be important, or insights into EA-specific issues.

  • Monthly Analysis: A summary of the evolving knowledge and trends within the LLM’s growing context could be compiled and shared as a newsletter or forum post, keeping the community informed of new developments and insights.

Information Security: To mitigate the risk of bad-faith actors influencing the model’s content, a few approaches could be implemented:

  • A vetted, advanced version with restricted access.

  • An open-access version for broader contributions, with oversight from project managers to maintain content integrity.

1.2 Rapid Exploration and Application Post-LLM Releases

Given the first-mover advantage in leveraging new AI capabilities, it could be crucial to organize a rapid-response gathering of experts within the EA community immediately following the release of major new large language models (e.g., Claude 4, GPT-5). This gathering could take place online or in a coworking space. Its aim would be to quickly explore applications, brainstorm innovative ideas, and identify ethical, high-impact opportunities, whether for generating quick funding (e.g., in the spirit of “founding to give”) or for implementing practical solutions that require swift action. Additionally, forming a brainstorming group now, in anticipation of these releases, could help set the groundwork and align on strategies so that the community is prepared to act as soon as these models become available.

2.0 Project Exploration Center

CE is highly rigorous, which can be intimidating or lead people to feel that their ideas aren’t good enough or not worth the effort of sending over or fully formulating. For those who have project ideas but aren’t yet ready for that level of scrutiny, it might be valuable to create a community-driven team focused on exploring and researching ideas at a more accessible level. This center would serve as a lower-tier project exploration hub, offering a space for early-stage ideas to be refined and developed. It could also help cultivate new talent in project exploration and innovation. Additionally, CE could refer ideas that are promising but fall just short of their criteria for full investigation. Everyone in the community could submit ideas as they arise, making this a collaborative space for exploration and growth.

*As a side note, I have yet to find a database of projects that have been tried or explored. In case one exists, I’d love to get a link to it :)

3.0 Influencers for Impact

The idea is to connect with a diverse range of influencers, including musicians, actors, YouTubers, Instagram personalities, TikTok creators, and streamers. These influencers have significant reach, with some having fanbases that trust and follow their advice closely. By introducing the concepts of Effective Altruism (EA) or charitable giving to these influencers, they could, in turn, pass these ideas on to their audiences. While many of their followers might be too young to donate themselves, they could still introduce these concepts to their parents, potentially influencing family decisions around giving. The influencers themselves, many of whom are quite wealthy, might also be inspired to start donating or adopting EA principles. Some connections have already been made with a few influencers who could help expand the network and introduce the project to others. Many influencers either genuinely want to make a positive impact, or maintain a socially responsible image for optics, both of which can be leveraged for this initiative.

4.0 Help for young but wealthy people within established effective giving orgs

EA has sometimes been criticized as being primarily made up of “young, wealthy, white people.” While this critique is an overgeneralization that I do not agree with, there is a kernel of truth to it (see this post for reference). Young individuals with significant wealth, often from family inheritance, might struggle with the idea of donating “money their parents earned” or may be confused by the donation process and simply need some guidance. Personally, while we’re not really rich, I’ve considered donating a portion of my future inheritance but feel that I would benefit from having a mentor to guide me through the process. Even though I use EA principles both in my daily life and for career planning, I still feel uncertain, on a visceral level, about how to approach this and whether I should proceed at all. This highlights the need for targeted support within EA-aligned giving organizations to assist young people who are in similar situations and might need mentorship or reassurance when it comes to effective giving.

5.0 EA investigative journalism

Investigative journalism is often shaped by the goals and agendas of the organizations that fund it, leading to motivated reasoning that may align with specific interests. An independent organization dedicated to investigative journalism, guided by EA principles and focused on EA-related causes, could be a valuable initiative to explore. Such an organization could ensure that investigations remain unbiased and are driven by impact rather than external agendas. I do realize that the org Asterisk exists. However, if I understand correctly, they work somewhat differently: people can send them articles, rather than the org having full-time employees conducting investigations.

On a personal note, I just like investigative journalism and think this concept sounds cool, so I might be a bit biased in suggesting it.

Thanks for reading, any feedback is appreciated :)