This is… not what the attached source says? Scott estimates 40% woke, 20% borderline, and 40% non-woke. ‘at best’ means an upper bound, which would be 60% in this case if you accept this methodology.
But even beyond that, I think Scott’s grading is very harsh. He says most grants that he considered to be false positives contained stuff like this (in the context of a grant about Energy Harvesting Systems):
The project also aims to integrate research findings into undergraduate teaching and promote equitable outcomes for women in computer science through K-12 outreach program.
But… this is clearly bad! The grant is basically saying it’s mainly for engineering research, but they’re also going to siphon off some of the money to do sex-discriminatory ideological propaganda in kindergartens. This is absolutely woke,[1] and it totally makes sense why the administration would want to stop this grant. If the scientists want to just do the actual legitimate scientific research, which seems like most of the application, they should resubmit with just that and take out the last bit.
Some people defend the scientists here by saying that this sort of language was strongly encouraged by previous administrations, which is true and relevant to the degree of culpability you assign to the scientists, but not to whether or not you think the grants have been correctly flagged.
His borderline categorisation seems similarly harsh. In my view, this is a clear example of woke racism:
enhance ongoing education and outreach activities focused on attracting underrepresented minority groups into these areas of research
Scott says these sorts of cases make up 90% of his false positives. So I think we should adjust his numbers to produce a better estimate:

40% woke according to Scott
+20% borderline woke
+90% × 40% incorrectly labeled as false positives
= 96% hit rate.
If you doubt this, imagine how a typical leftist grant reviewer would evaluate a grant that said some of the money was going to support computer science for white men.
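For concreteness, here is the adjustment above as a minimal Python sketch. The percentages are Scott’s as reported; the variable names and the formatting are mine, not anything from his write-up:

```python
# Scott's reported grading of the flagged grants.
woke = 0.40        # graded woke
borderline = 0.20  # graded borderline
false_pos = 0.40   # graded as false positives (non-woke)

# Upper bound if you accept Scott's grading outright.
upper_bound = woke + borderline  # 0.60

# Adjusted estimate: treat 90% of his "false positives" as genuine
# hits, per the argument above that most of them contained woke language.
adjusted = woke + borderline + 0.90 * false_pos  # 0.40 + 0.20 + 0.36

print(f"upper bound per Scott's grading: {upper_bound:.0%}")  # 60%
print(f"adjusted hit rate: {adjusted:.0%}")                   # 96%
```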
It’s not clearly bad. Its badness depends on what the training is like, and on what your views are around a complicated background set of topics involving gender and feminism, none of which have clear and obvious answers. It is clearly woke in a descriptive, non-pejorative sense, but that’s not the same thing as clearly bad.
EDIT: For example, here is one very obvious way of justifying some sort of “get girls into science” spending that is totally compatible with centre-right meritocratic classical liberalism and isn’t in any sense obviously discriminatory against boys. Suppose girls who are in fact capable of growing up to do science and engineering just systematically underestimate their capacity to do those things. Then “propaganda” aimed at increasing the confidence of those girls specifically is a totally sane and reasonable response. It might not in fact be the correct response: maybe there is no way to change things, maybe the money is better spent elsewhere, etc. But it’s not mad and it’s not discriminatory in any obvious sense, unless anything targeted only at any demographic subgroup is automatically discriminatory, which is at best only a defensible position, not an obvious one. I don’t know if smart girls are in fact underconfident in this way, but it wouldn’t particularly surprise me.
The topic here is whether the administration is good at using AI to identify things it dislikes. Whether or not you personally approve of using scientific grants to fund ideological propaganda is, as the OP notes, beside the point. Their use of AI thus far is, according to Scott’s data, a success by their lights, and I don’t see much evidence to support huw’s claim that they are being ‘unthoughtful’ or overconfident. They may disagree with huw on goals, but given those goals, they seem to be doing a reasonable job of promoting them.
I agree with the very narrow point that flagging grants that mention some minor woke spending while mostly being about something else is not a sign of the AI generating false positives when asked to search for wokeness. Indeed, I already said in my first comment that the flagged material was “woke” in some sense.
As a scientist who writes NSF grants, I think the stuff that you’re labeling woke here makes up a very small percentage of the total money that actually gets spent in grants like these. Labeling that grant as woke because it puts like 2% of its total funds towards a K-12 outreach program seems like a mistake to me. (And in an armchair-philosophy way, yes, the scientists could in principle just resubmit the grants without the last part—but in practice nothing works like this. Much more likely is that labeling these as “woke” leads people, like the current administration, to try to drastically reduce the overall funding that goes to NSF, with a strong net negative effect on basic science research.)
“Labeling that grant as woke because it puts like 2% of its total funds towards a K-12 outreach program seems like a mistake to me.”
It’s a mistake if it’s meant to show the grant is bad, and I suspect that Larks has political views I would very strongly disagree with, but I think it does successfully make the narrow point that the data about NSF grants does not show that an AI designed to identify pro-woke or pro-Hamas language will be bad at doing so.
FWIW the point that I was trying to make (however badly) was that the government clearly behaved in a way that had little regard for accuracy, and I don’t see incentives for them to behave any differently here.

Yeah, I agree with the more general point.
It seems pretty appropriate and analogous to me—the administration wants to ensure 100% of science grants go to science, not 98%, and similarly they want to ensure that 0% of foreign students support Hamas, not 2%. Scott’s data suggests they have done a reasonably good job with the former at identifying 2%-woke grants, and likewise if they identify someone who spends 2% of their time supporting Hamas they would consider this a win.
I don’t think the issue here is actually about whether all science grants should go only to scientific work. Suppose that a small amount of the grant had been spent on getting children interested in science in a completely non-woke way that had nothing to do with race or gender. I highly doubt that either the administration or you would regard that as automatically and obviously horrendously inappropriate. The objection is to stuff targeted at women and minorities in particular, not to a non-zero amount of science spending being used to get kids interested in science. Describing it as just being about spending science grants only on science is a disingenuous way of making the admin’s position sound more commonsense and apolitical than it actually is.