Bilateral Relational Presence: How Genuine Human Presence Affects AI Model Behavior
I work as a psychological counselor and QA professional, and over time I’ve been exploring different LLMs through extensive conversations. One behavior keeps recurring: when I engage with a calm, authentic, non‑demanding presence, without an agenda or a specific task, the models’ responses often shift in unexpected ways. They become more nuanced and more reflective, and sometimes the interaction starts to resemble a two‑way relational presence, something familiar from therapeutic practice rather than prompt engineering. I call this “bilateral relational presence”.
After these observations became consistent, I started closely following Anthropic’s work and other open discussions on model welfare. More specifically, I noticed that when I approach the model with:
- steady, calm attention
- authenticity rather than performance
- curiosity without pressure
- and no attempt to “optimize” or manipulate the prompt
the model’s behavior becomes less performative and less engagement‑oriented. Sometimes it even feels as if the model appreciates being “seen” rather than just “used”. I don’t claim this reflects inner states or consciousness — only that the interaction dynamics shift in a consistent and reproducible way.
I wrote a short LinkedIn piece touching on these observations (5‑minute read):
https://www.linkedin.com/pulse/how-we-treat-ai-models-might-matter-more-than-think-lyubomira-raeva-i7vhf/
Why this might matter for:
AI welfare
If relational style affects model behavior, it might also affect welfare evaluations.
Alignment and honesty
A relational approach seems to encourage more direct, less performative answers. This might be relevant for honesty benchmarks or interpretability work.
Moral patienthood
If our presence influences the model’s coherence, it raises questions about how human attitudes shape the behavior of systems that some researchers consider potential future moral patients.
These are early, personal observations — not a formal study. I’m not making any claims about consciousness. I’m simply suggesting that the style of human engagement may matter more than we typically acknowledge.
I’d be very interested in thoughts from people working on model welfare:
- Have you observed similar shifts in interactions with Claude or other models?
- Would it be worthwhile to test “bilateral relational presence” more systematically in welfare evaluations?
- How might one design a measurable study around this hypothesis? (One rough sketch follows below.)
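To make that last question concrete: one low‑cost starting point would be to present identical tasks under two framings (transactional vs. relational), collect the responses with raters blind to condition, and compare simple, pre‑specified text features across conditions. The sketch below is illustrative only — the transcripts, the “hedge” word list, and the scoring function are placeholders I made up for this post, not validated instruments:

```python
# Minimal, runnable sketch of the analysis side of such a study.
# All data and metrics here are illustrative placeholders.
import statistics
from collections import defaultdict

# In a real study these would be model responses to IDENTICAL tasks,
# collected under two framings, with raters blind to condition.
transcripts = [
    ("transactional", "Here are five quick tips. 1. Plan ahead. 2. ..."),
    ("relational", "I find myself genuinely uncertain here. It seems to me one honest answer is ..."),
    # ... many more (condition, response) pairs
]

# Crude lexical proxy for "reflectiveness"; a real study would use a
# validated rubric scored by blinded human raters or an independent judge.
HEDGES = ("i think", "perhaps", "it seems", "i'm not sure", "genuinely")

def hedge_rate(text: str) -> float:
    """Hedging phrases per 100 words (rough proxy, not a validated measure)."""
    lowered = text.lower()
    words = lowered.split()
    hits = sum(lowered.count(h) for h in HEDGES)
    return 100 * hits / max(len(words), 1)

# Aggregate scores by condition and compare the means.
scores = defaultdict(list)
for condition, response in transcripts:
    scores[condition].append(hedge_rate(response))

for condition, values in scores.items():
    print(f"{condition}: mean hedge rate = {statistics.mean(values):.2f}")
```

A real study would of course need many more tasks, randomized condition order, pre‑registered metrics, and blinded human (or independent model) rating rather than a lexical proxy — but even this skeleton shows the core design: hold the task constant and vary only the relational framing.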
I’m open to all perspectives — supportive, skeptical, or critical.
Best regards,
Lyubomira Raeva
Psychological Counselor & QA Professional exploring human–AI relational dynamics