Hello! I’m looking for AI Safety proposals/project ideas to deploy within CivitAI, the biggest open-source AI image model hosting platform.
I own AI Hub, an AI voice model hosting platform with ~1 million users. We are discussing integration/merging with CivitAI.com, an open-source AI model hosting platform and the 7th most visited AI website, with 27 million monthly users. Most of their hosting is Stable Diffusion image models, but I’m helping them introduce AI voice cloning model hosting. I’d get a lot of autonomy over distribution for voice and possibly text models.
The founders mentioned they’re also exploring content moderation solutions, and when I asked, they said they were open to working with AI Safety companies. CivitAI is very influential in the open-source community, so I think shaping policy here could significantly influence AI Safety with respect to open-source and local models. Some directions I’m exploring:
Content and open-source model moderation frameworks/tools (see the sketch after this list for one concrete shape this could take)
Collecting data and running tests that aid alignment research
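To make the first direction a bit more concrete, here’s a minimal sketch of what an upload-time moderation hook for a model hosting platform might look like. Everything in it is a hypothetical assumption of mine (the ModelUpload fields, the flagged-term list, the consent heuristic), not CivitAI’s actual API or policy:

```python
# Hypothetical sketch of an upload-time moderation hook for a model
# hosting platform. All names and policies are illustrative assumptions,
# not CivitAI's actual API or rules.
from dataclasses import dataclass, field

@dataclass
class ModelUpload:
    name: str
    model_type: str                      # e.g. "image", "voice", "text"
    description: str
    tags: list[str] = field(default_factory=list)

# Illustrative policy: flag uploads whose metadata suggests impersonation
# of real people, a common concern for voice-cloning models.
FLAGGED_TERMS = {"impersonation", "celebrity voice", "real person"}

def moderation_check(upload: ModelUpload) -> tuple[bool, list[str]]:
    """Return (approved, reasons) based on simple metadata rules."""
    reasons = []
    text = f"{upload.description} {' '.join(upload.tags)}".lower()
    for term in FLAGGED_TERMS:
        if term in text:
            reasons.append(f"metadata contains flagged term: {term!r}")
    # Assumed policy: voice models must declare speaker consent somewhere
    # in their metadata before they can be distributed.
    if upload.model_type == "voice" and "consent" not in text:
        reasons.append("voice model lacks a consent statement")
    return (len(reasons) == 0, reasons)

if __name__ == "__main__":
    upload = ModelUpload(
        name="demo-voice",
        model_type="voice",
        description="A celebrity voice clone.",
        tags=["tts"],
    )
    print(moderation_check(upload))  # (False, [two reasons])
```

A real deployment would layer classifier scores and a human review queue on top of metadata rules like these; the design point is just that the check runs at upload time, before a model is distributed.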
If anyone has ideas, or knows anyone who might be interested, do LMK! This is exploratory, but I aim to formulate an idea and deploy it to users within ~3 months, with 80% confidence. I am open to trying any suggestion: for-profit, nonprofit, research, product, governance, alignment research, etc.
Even if you have an idea that’s not AI image-related, I have a decent (40%) chance of proposing it to GitHub, Hugging Face, etc. Plus, my task is to diversify outside of image models anyway, so LLM-related proposals could still be relevant.
(This is still in the exploratory/idea phase, and I wasn’t sure whether this should be a full post.)