How to cite this WIREs title:
WIREs Data Mining Knowl Discov
Impact Factor: 2.541

O‐MedAL: Online active deep learning for medical image analysis



Abstract
Active learning (AL) methods create an optimized labeled training set from unlabeled data. We introduce a novel online active deep learning method for medical image analysis, extending our prior MedAL framework with new results. A novel sampling method queries the unlabeled examples that maximize the average distance to all training set examples, and our online training method enhances the performance of the underlying baseline deep network. These novelties yield significant performance improvements: the model improves its underlying deep network's accuracy by 6.30%, reaches baseline accuracy using only 25% of the labeled dataset, reduces the number of images backpropagated during training by as much as 67%, and is robust to class imbalance in both binary and multiclass tasks.

This article is categorized under:
Technologies > Machine Learning
Technologies > Classification
Application Areas > Health Care
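The online training scheme described in the abstract avoids retraining from scratch: model weights carry over between AL iterations, and each iteration backpropagates only the newly labeled examples plus a fraction of the previously labeled ones. A minimal sketch of assembling such a per-iteration training set; the function name `online_training_set` and the `replay_frac` knob are illustrative assumptions, not the paper's exact API:

```python
import random

def online_training_set(new_ids, past_ids, replay_frac=0.5, seed=0):
    """Build one online-AL training batch: every newly labeled example,
    plus a random fraction of previously labeled examples for replay.

    replay_frac controls how much past data is revisited; smaller values
    mean fewer images backpropagated per iteration.
    """
    rng = random.Random(seed)
    k = int(round(replay_frac * len(past_ids)))
    # all new examples are always trained on; past examples are subsampled
    return list(new_ids) + rng.sample(list(past_ids), k)
```

With `replay_frac` well below 1, the total number of images backpropagated across all iterations drops substantially relative to retraining on the full labeled set each round, consistent with the reduction the abstract reports.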
Proposed active learning pipeline. To solve a supervised classification task, we use a deep network (DN), an initial labeled dataset, an unlabeled dataset, and an oracle who can label data. We aim to label as few examples as possible. In each active learning iteration, we use the DN to compute a feature embedding for all labeled examples and for the top M unlabeled examples with the highest predictive entropy. From these candidates, we select and label the examples furthest in feature space from the centroid of all labeled examples. The oracle examples are selected one at a time, with the centroid updated after each labeling. We then train the model on the expanded training set and repeat the process. In the online setting, the model weights are not reset between iterations, and we train on only the newly labeled examples and a subset of previously labeled examples.
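The selection step in the pipeline caption can be sketched as a greedy loop: filter the unlabeled pool to the top M examples by predictive entropy, then repeatedly pick the candidate furthest from the labeled-set centroid, updating the centroid after each pick. A minimal sketch under stated assumptions; `select_queries` and its arguments are hypothetical names, and distances here are Euclidean distances to the centroid (the abstract's equivalent phrasing is average distance to all training set examples):

```python
import numpy as np

def entropy(probs):
    """Predictive entropy of softmax outputs, shape (N, num_classes)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_queries(embed_labeled, embed_unlabeled, probs_unlabeled, M, k):
    """Greedily pick k unlabeled indices to send to the oracle.

    1. Keep the top-M unlabeled examples by predictive entropy.
    2. Repeatedly choose the candidate furthest (in feature space) from
       the centroid of the labeled embeddings.
    3. After each pick, update the centroid as if that example were
       already labeled, so successive picks spread out.
    """
    candidates = list(np.argsort(entropy(probs_unlabeled))[::-1][:M])
    centroid = embed_labeled.mean(axis=0)
    n = len(embed_labeled)
    chosen = []
    for _ in range(k):
        dists = [np.linalg.norm(embed_unlabeled[i] - centroid) for i in candidates]
        best = candidates.pop(int(np.argmax(dists)))
        chosen.append(best)
        # incremental centroid update including the newly selected point
        centroid = (centroid * n + embed_unlabeled[best]) / (n + 1)
        n += 1
    return chosen
```

Updating the centroid inside the loop is what makes the selection sequential rather than a single batch ranking: each new pick shifts the centroid toward itself, discouraging the next pick from landing nearby.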
Online MedAL (O‐MedAL) evaluation, showing how computational efficiency (b) and accuracy (c) relate to labeling efficiency. Marked points and the table (a) highlight which MedAL and O‐MedAL models perform best in each category.
Online MedAL (O‐MedAL) versus MedAL mechanics. O‐MedAL (red) preserves the labeling efficiency of MedAL (green) with less computation and higher accuracy, showing robustness to imbalance in the unlabeled data.
MedAL outperforms competing methods by obtaining better performance with fewer labeled training images. Panels (a), (b), and (c) show that MedAL outperforms random sampling and uncertainty sampling (entropy measure) by consistently obtaining better test set accuracy as the size of the training set increases. Panel (d) shows that MedAL outperforms a deep Bayesian active learning method (Gal, Islam, & Ghahramani, ) in terms of area under the receiver operating characteristic (ROC) curve.
Examples from all three datasets evaluated in this work. From left to right, top to bottom: the Messidor, skin cancer, and breast cancer datasets. The class labels are shown below each of the images.

