I am saying something like:
If actual risk is reduced by a quantity A, but the perceived reduction is actually B, then it’s worth spending time telling people the difference in cases where B >> A; otherwise the effects of the project have been negative (like the effects of green-washing or, more relevantly, safety-washing). This is not about AI safety in general but about a particular intervention, judged on a case-by-case basis.
Ah I see, sorry. Agreed.