If the separation is going to continue, I’d prefer it be entrusted to (elected? appointed but independent-of-CEA?) stewards. My concern is that community tagging might end up being voting by a different name (users will be less likely to tag things they like).
If you want independent criteria-based judgements, it might realistically be a good option to have the judgements made by an LLM—with the benefit of having the classification instantly (as a bonus you could publish the prompt used, so the judgements would be easier for people to audit).
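To make the auditability point concrete, here's a minimal sketch of what that could look like, using the OpenAI chat completions API; the model name, tag criteria, and function names are illustrative assumptions, not a description of anything the Forum team actually runs.

```python
# Minimal sketch: classify a post against a published prompt via the
# OpenAI chat completions API. Model, criteria, and names are assumptions.
from openai import OpenAI

# Publishing this prompt verbatim is what makes the judgement auditable:
# anyone can re-run the same prompt over the same post text.
PUBLISHED_PROMPT = (
    "You are tagging Forum posts. Answer YES or NO: does this post primarily "
    "discuss the community itself (meta, governance) rather than "
    "object-level topics?"
)

client = OpenAI()

def classify_post(post_text: str) -> bool:
    """Return True if the model judges the post to be a 'Community' post."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable model would do
        temperature=0,   # keep the output as reproducible as possible
        messages=[
            {"role": "system", "content": PUBLISHED_PROMPT},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```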
Fyi, the Forum team has experimented with LLMs for tagging posts (and for automating some other tasks, like reviewing new users), but so far none have been accurate enough to rely on. Nonetheless, I appreciate your comment, since we weren’t really tracking the transparency/auditing upside of using LLMs.
That makes sense!
(I’m curious how much you’ve invested in giving them detailed prompts about what information to assess in applying particular tags, or even more structured workflows, vs just taking smart models and seeing if they can one-shot it; but I don’t really need to know any of this.)
As a general principle I think the forum team should reject ~ all requests for additional bureaucracy.
I don’t think I agree with this general principle. I think there are few serious requests for extra bureaucracy here on the forum, and they can probably be assessed one by one on merit?
If the requests were overwhelming then maybe I’d agree
But I’m not a libertarian ;)