BioWorld MedTech



By Mark McCarty

Regulatory Editor

The twin tools of artificial intelligence and big data have proven more cumbersome than anticipated to apply to several fields of endeavor, but a new article in the Journal of the American College of Radiology says radiologists – and by implication, device makers – must begin now to sort through the challenges presented by these instruments if diagnostic imaging is to become a 21st century science.

The article appeared in a March 1 special issue of JACR under the title, "Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success." The authors say AI can ease a radiologist's workload by "identifying suspicious or positive cases for early review," and that radiomic data that are not visible to the naked eye may increase the diagnostic and prognostic value of imaging. The authors say, however, that fears that the blend of AI and massive volumes of data will put radiologists out of business are overblown.

The authors point out that picture archiving systems now house billions of images, but there is no system for centralized file sharing that would foster the requisite machine learning. There is also a need for reference data sets to serve as validation tools for an AI system, and a further source of noise in the current environment is "high variability in imaging protocols between institutions," not to mention a considerable degree of variability in how a given protocol is executed within a single clinical operation.

In addition to the prospect of greater diagnostic accuracy, such a system could sort through large volumes of images to streamline image review for overtaxed imaging clinics. The authors say observer fatigue still carries the risk of a missed true positive exam, particularly in large-volume screening services. One rather interesting scientific problem is that AI may suffer from "inherent limitations" in distinguishing between normal and abnormal images, the authors state, because biological data behave more like continuous random variables than discrete ones, a predicament that will impose considerable demands on radiologists and device makers alike in the years ahead.
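To see why continuous data make a hard normal/abnormal boundary elusive, consider the minimal sketch below. It is not drawn from the JACR article; the score distributions, their parameters, and the thresholds are assumptions chosen purely to illustrate how any cutoff applied to a continuous AI output trades sensitivity against specificity.

    # Sketch: why a continuous biological signal forces a sensitivity/specificity trade-off.
    # The distributions and thresholds below are illustrative assumptions, not study data.
    import numpy as np

    rng = np.random.default_rng(0)
    normal_cases = rng.normal(loc=0.0, scale=1.0, size=10_000)    # AI scores for normal images
    abnormal_cases = rng.normal(loc=1.5, scale=1.0, size=10_000)  # AI scores for abnormal images

    for threshold in (0.5, 1.0, 1.5):
        sensitivity = np.mean(abnormal_cases >= threshold)  # abnormal exams correctly flagged
        specificity = np.mean(normal_cases < threshold)     # normal exams correctly cleared
        print(f"threshold={threshold:.1f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")

Because the two score distributions overlap, raising the threshold improves specificity only at the cost of sensitivity, and vice versa; no cutoff eliminates both false positives and false negatives.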

The right-now value of AI

Bibb Allen, chief medical officer of the American College of Radiology's Data Science Institute, told BioWorld MedTech that JACR will routinely include content dealing with AI and big data in the future, but he said the sensitivity-specificity dilemma won't disappear in this new world. Data science training "will help avoid some of those false positives," he said, and while it remains an open question where stakeholders and regulators will land on minimum levels of sensitivity and specificity, the value of these two instruments is not defined entirely by those two parameters.

"In the short term, AI has so much more potential for radiologists other than as detectors," Allen said, stating that the prioritization function creates economies of time and effort if only because a suspect image might prod the radiologist to direct additional studies without the patient having to leave the CT lab. Patients would benefit, too, thanks to the elimination of a trip to the clinic that would otherwise have been necessary.

Allen said a false positive is not necessarily a problem, noting that an emergency department physician would have examined the patient's scans regardless of whether a suggestion of pneumothorax proves correct. AI might pick up a subtle rib fracture that co-occurred with the pneumothorax, however, a fracture the radiologist might not have noticed on a stressful, high-volume weekend night in the ER.

Other uses of AI will require more labor-intensive activities up front, such as the construction of a data set for classification of lung nodules, but Allen said "having some of those things prepopulated for us, I think, makes us better radiologists," despite concerns that specialists could become overly reliant on the algorithm.

Allen confirmed that the data ownership problem will have to be resolved, although that resolution may end up being handled on a case-by-case basis for the time being. He said the ACR's Data Science Institute "is looking at solutions of validation and certification of algorithms" for FDA approval. One approach would be to set up a centralized process for images and corresponding data sets for all to make use of, but Allen said a federated process could be invoked as well.

In the federated model, individual practices would use their own data toward development of their own use cases, or they could contract with developers to handle the job. Developers in turn could contract with multiple institutions for their expertise in an effort to deal with technical issues such as patient diversity and the problems associated with differences in imaging equipment.
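A minimal sketch of the federated idea follows, in which each site trains on its own images and only model parameters leave the institution. The model, the three synthetic "sites," and the simple averaging scheme are assumptions for illustration, not the ACR's actual methodology.

    # Illustrative federated averaging: each site updates the model on its own private data
    # and shares only weights, never patient images. All numbers here are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)

    def local_update(weights, features, labels, lr=0.1):
        """One step of logistic-regression gradient descent on a site's private data."""
        preds = 1.0 / (1.0 + np.exp(-features @ weights))
        grad = features.T @ (preds - labels) / len(labels)
        return weights - lr * grad

    n_features = 5
    global_weights = np.zeros(n_features)
    sites = [(rng.normal(size=(200, n_features)), rng.integers(0, 2, 200).astype(float))
             for _ in range(3)]  # three hypothetical institutions

    for _ in range(10):
        site_weights = [local_update(global_weights.copy(), X, y) for X, y in sites]
        global_weights = np.mean(site_weights, axis=0)  # a central server averages the updates

    print(global_weights)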

Validation of that algorithm is a bit more demanding, of course, and Allen said, "we're struggling with whether we can do it in a federated way, or would it have to be centralized?" A centralized repository offers some obvious advantages when it comes to validating the conclusions an algorithm draws from a data set, but he noted that hospitals might be reluctant to sell those data, in part because of all the confidentiality and privacy issues that would attach to such a practice.

Allen said publishers of algorithms could offer those algorithms to clinics via cloud computing, but the ability to upload images directly from the imaging system is a more economical proposition than a system that calls for a separate workstation for the upload. This separate workstation would not only be an additional footprint in the hospital's IT infrastructure, it would also add steps for the physician. In any event, Allen said, clinical use of an algorithm "has to be something that's integrated into our workflow or nobody is going to use it."
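As an illustration of the direct-upload approach Allen describes, the sketch below posts a study from the imaging system to a cloud-hosted algorithm endpoint. The endpoint URL, credential, and response format are entirely hypothetical.

    # Illustrative only: pushing a study straight from the imaging system to a hypothetical
    # cloud-hosted algorithm endpoint, so results return within the existing workflow.
    import requests

    ENDPOINT = "https://example-ai-vendor.invalid/v1/analyze"  # hypothetical URL

    def analyze_study(dicom_path: str) -> dict:
        with open(dicom_path, "rb") as f:
            response = requests.post(
                ENDPOINT,
                files={"study": f},
                headers={"Authorization": "Bearer <site-api-key>"},  # placeholder credential
                timeout=60,
            )
        response.raise_for_status()
        return response.json()  # e.g., {"finding": "pneumothorax", "confidence": 0.87}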

Allen said payment for these algorithms would be handled as a practice expense under most fee-for-service models, but that requiring a novel CPT code for each algorithm would be impractical. While private payers are sometimes credited with being first to adopt a new technology, Allen indicated that ACR still sees public payers as the best place to start. "CMS is where we can influence all [payers] at once," he said.



Published March 5, 2018
