Today, hundreds of startup companies around the world are trying to apply deep learning to radiology. Yet the number of radiologists who have been replaced by AI is essentially zero. (In fact, there is a worldwide shortage of them.)
At least in the short term, that figure is likely to remain unchanged. Radiology has proven harder to automate than Hinton (and many others) believed. The same is true of medicine in general. There are many proofs of concept, such as automated detection of pneumonia from chest X-rays, but surprisingly few instances in which deep learning (a machine learning technique that is currently the dominant approach to AI) has delivered the transformations and improvements so often promised.
To begin with, the laboratory evidence for the effectiveness of deep learning is not as sound as it might appear. Positive results, in which AI-based systems outperform their human counterparts, tend to get considerable media attention, while negative results, in which the systems do not do as well as people, are rarely reported in academic publications and get even less media coverage.
Meanwhile, a growing body of literature shows that deep learning is fundamentally vulnerable to “adversarial attacks,” and is often easily fooled by spurious associations. An overturned school bus, for example, might be mistaken for a snowplow if it happens to be surrounded by snow. With a few pieces of tape, a stop sign can be altered so that a deep learning system mistakes it for a speed limit sign. While these sorts of difficulties have become well known in the machine learning community, their implications are less well understood within medicine.
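To make the mechanism concrete, the sketch below is a minimal, illustrative implementation of one standard attack (the fast gradient sign method), written against a hypothetical PyTorch image classifier; it is not drawn from any of the systems or studies described here. It shows how a tiny, carefully chosen perturbation, the digital analogue of the tape on the stop sign, can be computed to push a model toward a wrong answer.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Fast Gradient Sign Method: nudge every pixel slightly in the
    # direction that most increases the classifier's loss.
    # `model`, `image`, and `label` are placeholders for any PyTorch
    # classifier, a batched input tensor, and its true class index.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # epsilon controls how visible the change is; small values are
    # typically imperceptible to people yet can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```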