I like the idea but am a little wary about the security part (e.g. trusting remote LM providers). If I have a local server running a VLM, is there a way to set the app to only communicate with that server?
(Not knocking the idea / execution! I think this will work for a lot of people, e.g. those who are down for OA browser, etc.)
Was planning to have this eventually, but so far nobody asked. Happy to figure this out together. I'll DM you.
It would be amazing if one could use it with a local LM in LMStudio
(This is possible now)
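For anyone wondering what the local setup roughly looks like: LM Studio exposes an OpenAI-compatible server on localhost, so the connection boils down to pointing an OpenAI-style client at that endpoint. A minimal sketch below, assuming the default port 1234 and the standard openai Python client; the app's actual setting may differ, and the model name is a placeholder:

```python
# Minimal sketch: talk to LM Studio's local OpenAI-compatible server.
# Assumes LM Studio's local server is running on the default port 1234 and
# a vision-capable model is loaded; adjust base_url / model to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local endpoint, nothing leaves the machine
    api_key="lm-studio",                  # LM Studio ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is currently loaded
    messages=[{"role": "user", "content": "Describe what is on the screen."}],
)
print(response.choices[0].message.content)
```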