Whether we realize it or not, we all interact with artificial intelligence (AI) multiple times a day for tasks like web searches, voice recognition, online shopping and social media. Tech companies like Amazon, Google, Facebook and Apple commonly employ advanced AI algorithms — but AI has also moved beyond Silicon Valley into other industries like finance, transportation and manufacturing. Healthcare has been somewhat slower to adopt AI, but the technology is starting to make significant contributions across many clinical and administrative areas.
AI can potentially provide solutions that help address some of the simmering problems in the U.S. healthcare system, including inequitable access to care, high costs, workforce shortages, capacity constraints, workflow inefficiencies and medical errors. For instance, some AI technologies can increase access to preventive care and screening tests, minimize treatment delays, reduce diagnostic errors and improve adherence to guideline-directed therapies.
AI particularly excels in one area where clinicians often struggle: collecting and making timely sense of the enormous amount of patient data that now bombards clinicians from multiple sources. Large amounts of digitized patient health data — like that in the electronic health record (EHR) — are a driving factor in the need for AI, and AI is very good at recognizing complex and often subtle patterns within this big data.
Further, AI is available 24/7 and isn’t subject to factors like fatigue and distraction. Computers are also orders of magnitude faster than humans at analyzing data. Finally, for many types of low-complexity, repetitive tasks, AI can free up provider time to concentrate on higher-value functions — potentially improving healthcare job satisfaction and reducing burnout.
AI applications in healthcare
AI has hundreds of potential applications across the entire spectrum of healthcare, ranging from uses in the earliest stages of clinical research and preventive care to multiple points within an acute care encounter and following through to the post-acute care setting and even into the patient’s home. Some functions of AI in the care continuum include its use to communicate with patients, perform repetitive tasks, interpret imaging and other diagnostic data, mine the EHR for insights and assist clinicians with decision-making.
Many of the AI technologies in healthcare are currently in the developmental or early adopter phase. However, a few of the early AI success stories (including its use for stroke, arrhythmia and eye diseases) are now in use at thousands of hospitals and have supported the diagnosis or treatment of hundreds of thousands of patients. Some of the more advanced use cases involve:
- Image-based disease detection
- Sensor signal processing
- Patient risk analysis and predictive analytics
- Natural language processing
Image interpretation is one of the earliest and most well-developed applications of healthcare AI. The convolutional neural network (CNN), an architecture designed to recognize patterns in images, was a major milestone in this application. CNNs rose to prominence after one delivered a breakthrough performance in the 2012 ImageNet image recognition competition. In the decade since, CNNs have become a dominant form of AI programming, with numerous medical applications in image-intensive fields like radiology, ophthalmology, dermatology and pathology.
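The pattern recognition at the heart of a CNN rests on the convolution operation: a small filter (kernel) slides across an image and responds strongly wherever the local pixel pattern matches the pattern the filter encodes. The toy sketch below, in plain Python with made-up numbers, illustrates only this one building block — real medical-imaging CNNs stack many learned filters with nonlinearities and pooling.

```python
def convolve2d(image, kernel):
    """Valid (no padding) 2D convolution of a grayscale image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A hand-built vertical-edge filter: responds where bright meets dark.
vertical_edge = [[1, -1],
                 [1, -1]]

# Tiny "image": bright left half, dark right half -> vertical edge in the middle.
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]

feature_map = convolve2d(image, vertical_edge)
print(feature_map)  # strongest response in the middle column, where the edge is
```

In a trained CNN, filters like `vertical_edge` are not hand-built; their values are learned from labeled examples, which is why such networks can pick up subtle disease features that are hard to specify by hand.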
Some AI uses in radiology include chest X-ray interpretation, wrist fracture diagnosis, mammography and interpretation of head CT scans for acute stroke. For chest X-rays, one clinical study reported radiologist-level performance of a CNN for detection of 11 different pathologies. The AI also read significantly faster than the human: the study set of 420 X-rays took the radiologist an average of four hours to read, compared with 90 seconds for the CNN. Additional research showed that in the diagnosis of wrist fractures, the average diagnostic sensitivity for an emergency medicine physician was 80.8% unaided, but sensitivity improved to 91.5% when the physician was provided with adjunct use of AI. In this case, a CNN was trained on 135,400 X-rays previously annotated by a panel of orthopedic specialists.
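The wrist-fracture figures above are diagnostic sensitivities — of all patients who truly have the condition, the fraction the reader correctly flags. A minimal sketch of the calculation, using illustrative counts (not the actual counts from the cited study):

```python
def sensitivity(true_positives, false_negatives):
    """Diagnostic sensitivity (true positive rate) = TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical example: 200 confirmed wrist fractures.
# Unaided, the physician catches 160; with AI assistance, 183.
unaided = sensitivity(160, 40)
with_ai = sensitivity(183, 17)
print(f"unaided: {unaided:.1%}, with AI: {with_ai:.1%}")
# prints "unaided: 80.0%, with AI: 91.5%"
```

Note that sensitivity says nothing about false alarms; a full evaluation would pair it with specificity (the true negative rate).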
AI also can be used to classify various skin lesions, rashes, moles and cysts, and to differentiate between benign and malignant findings. For example, dermatologists base diagnoses of skin cancer on the size, color, shape and other visual characteristics of lesions. AI algorithms can be trained to recognize these features and provide an initial diagnosis. One study compared performance of a CNN algorithm to a panel of 58 dermatologists and found that the AI outperformed most of them for melanoma recognition.
The field of ophthalmology is highly image-centric and has pioneered some of the earliest applications of AI. University of Iowa researchers developed the first AI platform for detection of more than mild diabetic retinopathy, and the FDA cleared the technology for marketing in April 2018. In this application, AI analyzes a retinal image for features like hemorrhage, edema, and vascular or fibrous proliferation. A clinical trial of 900 patients at 10 sites demonstrated comparable or better AI diagnostic accuracy than most ophthalmologists. Importantly, these results were obtained in primary care settings — improving patient access to this critical screening test and potentially preventing blindness in some patients.
Many AI imaging applications are cloud-based, where image data are uploaded to specialized hardware for AI analysis; in contrast, colonoscopy requires real-time image analysis. The first FDA-cleared, AI-based product that enables real-time computer-aided detection of polyps during colonoscopies consists of computer hardware and AI software that connects between the endoscope and a conventional video output monitor. AI performs an analysis on each frame of the input video. Upon detection of an important lesion, the system alerts the physician in real time with a box superimposed on the display monitor.
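The frame-by-frame workflow just described can be sketched as a simple streaming loop. Everything here — the function names, the confidence threshold, the toy detector — is an illustrative assumption, not the API of any actual FDA-cleared product.

```python
def process_stream(frames, detector, threshold=0.5):
    """Run the detector on every video frame as it arrives.

    `detector(frame)` returns a list of (bounding_box, confidence) pairs.
    Yields (frame, alerts), where alerts are the boxes to superimpose on
    the display monitor in real time.
    """
    for frame in frames:
        alerts = [(box, conf) for box, conf in detector(frame)
                  if conf >= threshold]
        yield frame, alerts

# Toy detector standing in for the CNN: "sees" a polyp in frame 2 only.
def fake_detector(frame):
    return [((40, 60, 32, 32), 0.93)] if frame == 2 else []

for frame, alerts in process_stream([1, 2, 3], fake_detector):
    if alerts:
        print(f"frame {frame}: superimpose alert box(es) {alerts}")
```

The design constraint driving this architecture is latency: because the physician is steering the endoscope live, each frame must be analyzed and annotated locally, within the interval between frames, rather than being uploaded to the cloud.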
Pathology is on the verge of AI transformation due to increased digitization of biopsy whole slide images. Once digitized, AI techniques can assist pathologists with slide analysis. For example, the first FDA-cleared AI tool for assisting with the detection of prostate cancer in core needle biopsies outputs a binary “yes or no” classification and highlights the single image location with the highest probability of cancer.
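The slide-level output described above can be sketched as follows: given a cancer probability for each patch of a digitized slide, report a binary call plus the single patch the viewer should highlight. The data layout, threshold and names are illustrative assumptions, not the cleared product's actual interface.

```python
def classify_slide(patch_probs, threshold=0.5):
    """patch_probs maps (row, col) patch coordinates to a cancer probability.

    Returns (is_suspicious, hotspot, max_prob): a binary yes/no call, the
    patch location with the highest probability (the spot to highlight for
    the pathologist), and that probability.
    """
    hotspot = max(patch_probs, key=patch_probs.get)
    max_prob = patch_probs[hotspot]
    return max_prob >= threshold, hotspot, max_prob

# Toy slide divided into a 2x2 grid of patches with made-up probabilities.
probs = {(0, 0): 0.02, (0, 1): 0.11, (1, 0): 0.87, (1, 1): 0.34}
suspicious, hotspot, p = classify_slide(probs)
print(suspicious, hotspot, p)  # prints "True (1, 0) 0.87"
```

Reducing a whole slide to one binary call plus one highlighted location keeps the tool adjunctive: it directs the pathologist's attention rather than replacing the pathologist's review.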
Medical researchers have reported fewer false negatives due to adjunctive use of this AI — resulting in an increase in overall diagnostic sensitivity for prostate cancer of approximately 7% to 16%. Subanalysis showed this was due mostly to the identification of smaller, lower-grade tumors that are often harder to diagnose. AI use also resulted in less diagnostic variation across a range of pathologist experience levels, patient populations and laboratory practices. A future use case may involve AI as a first-read tool to eliminate negative slides that don’t require further review, which could significantly reduce pathology workload. AI use also may reduce the time spent reviewing each slide and could alter the need for additional consultations, re-reads and use of more advanced immunostaining techniques.
The AI revolution is well underway with new applications across most areas of medicine. It’s already beginning to affect patient outcomes, improve access to care and streamline workflows. AI hardware and software is expected to improve continually and rapidly, and developers will continue to introduce niche clinical applications in response to demonstrated needs. These trends and others will undoubtedly bolster the growth and adoption of healthcare AI.
About the author: Joe Cummings, Ph.D., is Vizient’s technology program director within the Performance Improvement Collaboratives group. His areas of expertise include technology assessment, evidence-based medicine and clinical supply integration, and he has research experience with a broad array of medical devices, including artificial intelligence, remote monitoring, robotics, rapid diagnostics, enhanced biomaterials, metagenomics and telehealth. His background is in biomedical engineering.