Hello! I am the sole founder of VANTA Research—a bootstrapped AI research project focused on building safe and resilient AI models optimized for human-AI collaboration. All of our published work has been open source so far, and in roughly 2 months it has garnered 60k+ downloads on Hugging Face and Ollama across our original model families.
I have several goals with VANTA Research, but one of the first major milestones is building a large (400B+) open-source foundation model from scratch. I love learning, asking hard questions, and a good mystery.
DMs are always open.
It’s cool to see a role like this open up. I’m curious to see how SLT plays out in practice, especially at scale. I’ve seen some pretty dramatic shifts in generalization between different versions of the same language model, even just from one quantization to another. Definitely feels like important territory to explore.