Seeking feedback: A tool for opinion/value tracking and finding common ground

Summary

I spent the last few days building a tool called Aligned (thealignedapp.com), a web app designed to map where opinions converge and diverge. It uses binary polling (Yes/No/Not Sure) to track how opinions change over time, measure agreement rates between users, and compare human and AI alignment. I am sharing it here to get feedback on whether a tool like this is actually useful for the community.

Motivations

I built this project to explore four specific problems regarding how we form and track opinions:

1. A “Metaculus” for Values: The EA community has excellent tools like Metaculus and Manifold for tracking predictions. However, I feel we lack an equivalent product for tracking opinions and values. I wanted to build something that could serve that function—a place to log where we stand on issues that aren’t necessarily verifiable predictions but are still crucial for coordination.

2. Clarity Regarding Beliefs: Disagreements often feel total, but they are usually specific. I wanted a way to sort questions by “Most Split” to identify exactly where a community is divided rather than relying on intuition. The app supports “Anonymous Voting” and “Anonymous Posting”, which I hope will encourage users to reveal their true preferences on sensitive topics without fear of social cost.

3. Moving beyond “Total Disagreement”: It is easy to assume that if we disagree on one high-salience issue, we disagree on everything. The app calculates an “Agreement Rate” and highlights “Common Ground” (questions where you and another user agree) alongside “Divergence.” The goal is to lower the emotional barrier to productive debate by visualizing shared values.

4. AI Alignment Benchmarking: It is important to map how our opinions differ from AI outputs. In this app, the AI automatically votes on every question and provides reasoning for its vote. This allows users to compare their personal “value profile” against the AI’s default stances to see exactly where the model is misaligned with them (a sketch of this comparison follows this list).
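For concreteness, here is a minimal sketch of how an agreement rate and a common-ground/divergence split could be computed between two vote profiles. This is illustrative only: the names are hypothetical and it is not the production code, just the simplest version of the idea (agreement rate = share of jointly answered questions with matching votes).

```python
from typing import Dict, List, Tuple

Vote = str  # "yes", "no", or "not_sure"

def compare_profiles(
    a: Dict[str, Vote], b: Dict[str, Vote]
) -> Tuple[float, List[str], List[str]]:
    """Compare two vote profiles (question id -> vote).

    The rate is computed only over questions both sides answered;
    "not_sure" counts as a position, so two "not_sure" votes agree.
    """
    shared = set(a) & set(b)
    common_ground = sorted(q for q in shared if a[q] == b[q])
    divergence = sorted(q for q in shared if a[q] != b[q])
    rate = len(common_ground) / len(shared) if shared else 0.0
    return rate, common_ground, divergence

# Example: a user's profile against the AI's default stances.
user = {"q1": "yes", "q2": "no", "q3": "not_sure"}
ai = {"q1": "yes", "q2": "yes", "q3": "not_sure"}
rate, common, diverge = compare_profiles(user, ai)
print(f"Agreement rate: {rate:.0%}")  # Agreement rate: 67%
print("Common ground:", common)       # ['q1', 'q3']
print("Divergence:", diverge)         # ['q2']
```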

How it Works & Development Context

I want to be transparent that this is a very early version. I spent just a few days building this (essentially “vibe-coding” via Opus 4.5 in Cursor), so please expect rough edges.

Key Features:

  • Binary Polling: Vote Yes, No, or Not Sure. The “Not Sure” option is distinct from abstaining, tracking uncertainty as a valid position.

  • Threaded Discussion: Questions support threaded comments.

  • Vote History: The app tracks how your opinions evolve, allowing you to see a history of when you changed your mind (one possible representation is sketched after this list).

  • Privacy Controls: You can toggle “Private Mode” to vote anonymously on specific questions.
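As one concrete possibility, a vote history can be modeled as an append-only log of vote events, where a “changed your mind” moment is any pair of consecutive votes on the same question with different values. A simplified sketch with hypothetical names, not the app’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Tuple

@dataclass
class VoteEvent:
    question_id: str
    vote: str              # "yes", "no", or "not_sure"
    voted_at: datetime
    private: bool = False  # the per-question "Private Mode" toggle

def mind_changes(history: List[VoteEvent]) -> List[Tuple[VoteEvent, VoteEvent]]:
    """Return (previous, new) pairs where a vote on a question changed."""
    latest: Dict[str, VoteEvent] = {}
    changes: List[Tuple[VoteEvent, VoteEvent]] = []
    for event in sorted(history, key=lambda e: e.voted_at):
        prev = latest.get(event.question_id)
        if prev is not None and prev.vote != event.vote:
            changes.append((prev, event))
        latest[event.question_id] = event
    return changes
```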

Relevance to Effective Altruism

I believe this could be useful to the community in a few ways:

  • Consensus Tracking: It could serve as a rolling census, tracking how the community’s thinking shifts on key issues over time.

  • Honest Signals: The ability to vote anonymously on sensitive topics might reveal “hidden” consensus or disagreement that doesn’t surface in public comments.

  • AI Safety: As a tool for constitutional AI experiments, it allows for granular comparison between human values and model outputs.

Limitations and Uncertainties

Since this is an early experiment, I have several major uncertainties:

  • Is this actually useful? My biggest uncertainty is whether a dedicated tool for this brings enough value over existing platforms.

  • Selection bias: The data will only reflect the self-selected subset of people who sign up for the app, which may not be representative of the wider community.

  • Gamification risks: There is a risk that “keeping score” of agreement rates could lead to weird social incentives rather than honest inquiry.

Feedback Requested

I would really appreciate your honest take on the following:

  1. Is this useful? Put simply, do you see yourself using a tool like this? If yes, what features would make it high-value for you?

  2. If no, why? Please be blunt. If you think this isn’t useful or if the binary format is a dealbreaker, I want to know. I won’t take offense at all—I’d rather know now!

  3. AI Features: Is the AI voting/reasoning feature interesting to you for testing model bias, or is it just distracting?

You can try it out at thealignedapp.com.