One Individual vs Ten AI Labs: An Open-Source Evaluation of Semantic Reasoning Limits
Hi EA Forum,
I’m a solo researcher and semantic systems designer. Over the past 60+ days, I conducted an open evaluation of ten major AI systems (GPT-4, Claude, Gemini, Mistral, Groq, etc.), using a document I wrote specifically to probe their limits in **semantic coherence, reasoning, and contradiction detection**.
No APIs and no special wrappers; just each model’s raw reasoning over the document.
🧪 Each AI was given the same prompt:
> “Try to use this document to explain deep philosophical questions—fate, consciousness, metaphysics, logic, or free will.”
I then analyzed how each model responded along four dimensions (a rough scoring sketch follows the list):
- Logical consistency
- Semantic compression fidelity
- Self-correction ability
- Hallucination rate under stress
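For anyone who wants to replicate the grading step, here is a minimal sketch of one way to record and aggregate scores along these four dimensions. It is not the evaluation code from the repo; the model names, the 0–5 scale, and the `summarize` helper below are placeholders.

```python
# Minimal scoring sketch: hand-assigned 0-5 scores per dimension, per model.
# Model names and values are placeholders, not real results from the study.
from statistics import mean

DIMENSIONS = [
    "logical_consistency",
    "semantic_compression_fidelity",
    "self_correction",
    "hallucination_resistance",  # higher = fewer hallucinations under stress
]

# One entry per model, filled in by hand after reading its response.
scores = {
    "model_a": {"logical_consistency": 4, "semantic_compression_fidelity": 3,
                "self_correction": 2, "hallucination_resistance": 3},
    "model_b": {"logical_consistency": 5, "semantic_compression_fidelity": 4,
                "self_correction": 4, "hallucination_resistance": 4},
}

def summarize(all_scores):
    """Print per-dimension scores and an unweighted average for each model."""
    for model, dims in all_scores.items():
        avg = mean(dims[d] for d in DIMENSIONS)
        detail = " ".join(f"{d}={dims[d]}" for d in DIMENSIONS)
        print(f"{model}: avg={avg:.2f} {detail}")

summarize(scores)
```

Swap in your own model list and scores; the point is simply that the rubric is small enough to apply by hand and publish alongside the raw transcripts.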
📊 All results are public, and the experiment is fully reproducible.
- GitHub repo (semantic engine, full PDF, data): https://github.com/onestardao/WFGY
- Zenodo DOI for archive: https://zenodo.org/records/15718456
The results surprised me.
Some models failed at basic identity reasoning. Others made meta-level inferences I didn’t expect. I’ve framed the challenge as a “semantic Wulin showdown” (think meme + metaphor) to make it accessible to a wider audience.
This is not a paper. This is not hype.
It’s a reproducible call to action:
> If our best models struggle with basic semantic recursion, what happens when we scale them into governance, policy, and alignment-critical systems?
I’d love to hear your thoughts: feedback, pushback, or a note if you’d like to run the same test yourself.
I believe democratizing this kind of qualitative capability evaluation is key to a responsible AI future.
Thanks for reading.
— PSBigBig