Radiology Automation Does Not Generalize to Other Jobs
The NYT article “Your A.I. Radiologist Will Not Be With You Soon” reports, “Leaders at OpenAI, Anthropic and other companies in Silicon Valley now predict that A.I. will eclipse humans in most cognitive tasks within a few years… The predicted extinction of radiologists provides a telling case study. So far, A.I. is proving to be a powerful medical tool to increase efficiency and magnify human abilities, rather than take anyone’s job.”[1]
I disagree that this is a “telling case study.”[2] Radiology has several attributes which make it hard to generalize to other jobs:
Patients are legally prohibited from using AI to replace human radiologists.[3]
Medical providers are legally prohibited from billing for AI radiologists.[4]
Malpractice insurance does not cover AI radiology.[5]
Moreover, the article frames Geoff Hinton as having confidently predicted that AI would replace radiologists, and that prediction as having been proven wrong; but his statement felt to me more like an offhand remark or hope than a confident prediction.
Takeaways from this incident I endorse:[6]
Offhand remarks from ML researchers aren’t reliable economic forecasts
People trying to predict the effects of automation/AI capabilities should consider that employees often perform valuable services which aren’t easily captured in evals, such as “bedside manner” and “regulatory capture”
If you have a job where (a) your customers are legally prohibited from hiring someone other than you, (b) even an enterprising competitor who decides to run the legal risk of replacing you still has to pay you, and (c) anyone who replaces you is likely to be sued, then you probably have reasonable job security
Takeaways I don’t endorse:
AI’s impact on radiology being smaller than Hinton predicted means that we should disbelieve:
Claims that AI has already driven decreased wages, e.g. Azar et al. 2025 or Brynjolfsson et al. 2025
Claims that future AI could drive wages even lower, e.g. Barnett 2025
Or really any claim which is supported by something more than an offhand remark
Many people work in jobs similar to radiology where, e.g., it is illegal to replace them with AI, and therefore we can easily extrapolate from the limited wage impacts in radiology to limited job losses in other sectors of the economy
Appendix: Data and Methodology for the sample of AI Radiology products
Data
The following products were included in my random sample:

| Product | Legally usable by patients? | Notes |
| --- | --- | --- |
| Viz.AI Contact | No | |
| Aidoc | No | |
| HeartFlow FFRct | No | |
| Arterys Cardio DL | No | |
| QuantX | No | |
| ProFound AI for Digital Breast Tomosynthesis | No | |
| OsteoDetect | No | |
| Lunit INSIGHT CXR Triage | No | |
| Caption Guidance | No | Not intended to assist radiologists; intended to assist ultrasound techs. |
| SubtlePET | No | |
Methodology
I asked GPT 5.1 to randomly sample products and record whether they were legally usable by patients. Transcript here. I then manually verified each product.
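For concreteness, below is a minimal Python sketch of the tally behind the appendix table. The product names and the “legally usable by patients?” flags are copied from the table above; the structure and variable names are purely illustrative, not part of my actual workflow.

```python
# Hypothetical illustration of the tally; data copied from the appendix table.
SAMPLE = {
    "Viz.AI Contact": False,
    "Aidoc": False,
    "HeartFlow FFRct": False,
    "Arterys Cardio DL": False,
    "QuantX": False,
    "ProFound AI for Digital Breast Tomosynthesis": False,
    "OsteoDetect": False,
    "Lunit INSIGHT CXR Triage": False,
    "Caption Guidance": False,  # aimed at ultrasound techs, not radiologists
    "SubtlePET": False,
}

# True counts as 1, so summing the flags counts patient-usable products.
usable = sum(SAMPLE.values())
print(f"{usable} of {len(SAMPLE)} sampled products are legally usable by patients")
# Output: 0 of 10 sampled products are legally usable by patients
```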
Footnotes

[1] Note that, because the supply of radiologists is artificially limited, a drop in demand needn’t actually cause a change in the number of radiologists employed. It would, however, be expected to decrease their wages. In the rest of this post, I respond to a steelman of the NYT piece concerned with a decrease in radiologists’ wages, not a decrease in the number employed.
[2] I get vague vibes from the NYT article along the lines of “predictions of job loss from AI automation aren’t trustworthy,” but it doesn’t make a very clear argument, so it’s possible that I am misunderstanding its point; my apologies if so. Thanks to Yarrow for this point.
[3] I randomly sampled 10 AI radiology products and found that patients are legally allowed to purchase 0 of them. See appendix.
[4] Medical billing is complex, but, roughly, providers are reimbursed for the labor they put into seeing the patient, not for the patient’s improved outcomes. In my sample of 10 AI products, only one of the ten had a CPT code (so for the other nine, providers can’t bill even $0.01 more for using them than for using a non-AI tool), and the one that did could only be billed in combination with human labor.
[5] Possibly juries will someday acknowledge the supremacy of AI systems, but I doubt a present-day jury would be very sympathetic to a hospital that replaced human radiologists with an AI that then made a mistake. Some insurers have a blanket exclusion for AI-caused malpractice, and radiology has one of the highest rates of malpractice lawsuits. Thanks to Jason for this point.
[6] Works in Progress has an article that goes into more detail on the current state of radiology automation and is helpful for understanding it, though I think it undersells the regulatory barriers.