It's not clearly bad. Its badness depends on what the training is like, and on your views about a complicated background set of topics involving gender and feminism, none of which have clear and obvious answers. It is clearly woke in a descriptive, non-pejorative sense, but that's not the same thing as clearly bad.
EDIT: For example, here is one very obvious way of justifying some sort of "get girls into science" spending that is totally compatible with centre-right meritocratic classical liberalism and isn't in any obvious sense discriminatory against boys. Suppose girls who are in fact capable of growing up to do science and engineering systematically underestimate their capacity to do those things. Then "propaganda" aimed at increasing the confidence of those girls specifically is a totally sane and reasonable response. It might not in fact be the correct response: maybe there is no way to change things, maybe the money is better spent elsewhere, etc. But it's not mad, and it's not discriminatory in any obvious sense, unless anything targeted only at a particular demographic subgroup is automatically discriminatory, which is at best a defensible position, not an obvious one. I don't know whether smart girls are in fact underconfident in this way, but it wouldn't particularly surprise me.
The topic here is whether the administration is good at using AI to identify things it dislikes. Whether or not you personally approve of using scientific grants to fund ideological propaganda is, as the OP notes, beside the point. Their use of AI thus far is, according to Scott's data, a success by their lights, and I don't see much evidence to support huw's claim that they are being "unthoughtful" or overconfident. They may disagree with huw on goals, but given those goals, they seem to be doing a reasonable job of promoting them.
I agree with the very narrow point that flagging grants that mention some minor woke spending while mostly being about something else is not a sign of the AI generating false positives when asked to search for wokeness. Indeed, I already said in my first comment that the flagged material was indeed "woke" in some sense.