Experimental platform for AI value formation — seeking collaborators

I’m building a modular system to model value change in agents through social and emotional exchange.

Applications:

  • AI alignment experiments in moral development

  • Empathy and cooperation modeling

  • Social exchange and reciprocity research

  • Long-term value drift studies

Core design (rough sketch after this list):

  • Interactions quantified via give / take / receive / allow

  • Decisions weighted by empathy and energy balance

  • Base (long-term) and current (short-term) states

  • Decay and growth thresholds for evolving morality

  • Outcomes emerge from accumulated experience
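To make that loop concrete, here is a minimal sketch of how these pieces could fit together. It is illustrative only, not the actual backend code; the constants, field names (base, current, empathy, energy), and update rules are assumptions standing in for whatever model a collaborator brings:

```javascript
// Hypothetical sketch (not the project's actual code): one agent's value state
// updated from a single interaction. All names and constants are illustrative.

const GROWTH_THRESHOLD = 0.7; // sustained high short-term empathy nudges the base upward
const DECAY_RATE = 0.01;      // per-step pull of the current state back toward the base

// Interaction types carry a signed "energy" delta for the acting agent.
const INTERACTION_WEIGHTS = {
  give: -0.2,    // costs energy, builds empathy
  take: +0.2,    // gains energy, erodes empathy
  receive: +0.1,
  allow: 0.0,
};

function createAgent() {
  return {
    base: { empathy: 0.5, energy: 0.5 },    // long-term values
    current: { empathy: 0.5, energy: 0.5 }, // short-term state
  };
}

function clamp01(x) {
  return Math.min(1, Math.max(0, x));
}

// Apply one interaction to the acting agent's short-term state, then let
// sustained short-term change slowly reshape the long-term base.
function applyInteraction(agent, type) {
  const energyDelta = INTERACTION_WEIGHTS[type] ?? 0;
  const empathyDelta = type === 'give' ? 0.05 : type === 'take' ? -0.05 : 0.01;

  agent.current.energy = clamp01(agent.current.energy + energyDelta);
  agent.current.empathy = clamp01(agent.current.empathy + empathyDelta);

  // Growth: if the short-term state stays above the threshold, the base drifts toward it.
  if (agent.current.empathy > GROWTH_THRESHOLD) {
    agent.base.empathy = clamp01(agent.base.empathy + 0.01);
  }

  // Decay: short-term state relaxes back toward the long-term base.
  agent.current.empathy += (agent.base.empathy - agent.current.empathy) * DECAY_RATE;
  agent.current.energy += (agent.base.energy - agent.current.energy) * DECAY_RATE;

  return agent;
}

// Example: a long run of giving gradually raises the agent's long-term empathy.
const a = createAgent();
for (let i = 0; i < 50; i++) applyInteraction(a, 'give');
console.log(a.base.empathy.toFixed(2), a.current.empathy.toFixed(2));
```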

Status:

  • Backend: JavaScript + SQLite

  • Real-time UE5 integration

  • Narrative and symbolic overlay

  • Ready to plug in research-driven behaviors and logging

Seeking:

  • Simulation scientists

  • Cognitive modelers

  • AI alignment researchers

  • UE5 developers

Why collaborate:

  • Theory-agnostic

  • Swap in new models without rewriting (see the sketch below)

  • Test how agents learn morality from interaction
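As an illustration of what "theory-agnostic" means here, a decision model could be any object exposing a shared decide() interface, so the simulation loop never changes when a researcher swaps in a different theory. The model names and logic below are made up for illustration, not taken from the codebase:

```javascript
// Hypothetical sketch of the pluggable-model idea. Each model implements
// decide(agentState, lastReceived) and returns 'give' | 'take' | 'receive' | 'allow'.

const reciprocityModel = {
  name: 'reciprocity',
  // Return the favor if the last interaction received was a gift.
  decide(agentState, lastReceived) {
    return lastReceived === 'give' ? 'give' : 'allow';
  },
};

const empathyThresholdModel = {
  name: 'empathy-threshold',
  // Act generously only when short-term empathy is high.
  decide(agentState) {
    return agentState.current.empathy > 0.6 ? 'give' : 'take';
  },
};

// The simulation loop only ever calls model.decide(); swapping models is a one-line change.
function step(agent, model, lastReceived) {
  const action = model.decide(agent, lastReceived);
  console.log(`${model.name} chose: ${action}`);
  return action;
}

const agent = { current: { empathy: 0.7, energy: 0.5 } };
step(agent, reciprocityModel, 'give');
step(agent, empathyThresholdModel, null);
```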

DM or comment if interested.
