RADnet: Radiologist Level Accuracy Using Deep Learning for Hemorrhage Detection in CT Scans
Authors: Monika Grewal, Muktabh Mayank Srivastava, Pulkit Kumar, Srikrishna Varadarajan
We describe a deep learning approach for automated brain hemorrhage detection from computed tomography (CT) scans. Our model emulates the procedure radiologists follow to analyze a 3D CT scan in the real world. Like radiologists, the model sifts through the 2D cross-sectional slices while paying close attention to potential hemorrhagic regions. Further, the model uses 3D context from neighboring slices to improve the prediction at each slice, and subsequently aggregates the slice-level predictions into a CT-level diagnosis. We refer to our approach as Recurrent Attention DenseNet (RADnet), as it employs the original DenseNet architecture augmented with attention for slice-level predictions and a recurrent neural network layer for incorporating 3D context. The real-world performance of RADnet has been benchmarked against independent analysis by three senior radiologists on 77 brain CTs. RADnet demonstrates 81.82% hemorrhage prediction accuracy at the CT level, comparable to that of the radiologists. Notably, RADnet achieves higher recall than two of the three radiologists.
RADnet, our proposed architecture for brain hemorrhage detection, emulates radiologists sliding up and down through CT slices by treating hemorrhage detection as a sequence-modeling problem in which the sequence elements are 2D CT slices. A DenseNet-based convolutional network with attention produces slice-level predictions, and an LSTM then classifies the sequence of slices. When evaluated against radiologists, RADnet achieved comparable performance and a better F1 score.
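The pipeline above can be sketched in a few lines of numpy. This is a hypothetical illustration, not the paper's implementation: the DenseNet backbone is replaced by precomputed region features, the LSTM by a plain tanh RNN step, and all weights (`w_attn`, `w_h`, `w_x`, `w_out`) are random stand-ins. It only shows the data flow: attention-pool regions within each slice, run a recurrent pass over the slice sequence, then aggregate slice probabilities to a CT-level call.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class RADnetSketch:
    """Hypothetical sketch of the RADnet data flow: per-slice features ->
    attention pooling -> recurrent pass over slices -> slice logits ->
    CT-level aggregation. Weights are random placeholders."""

    def __init__(self, feat_dim=8, hidden_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w_attn = rng.normal(size=feat_dim)               # attention scorer
        self.w_h = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
        self.w_x = rng.normal(size=(hidden_dim, feat_dim)) * 0.1
        self.w_out = rng.normal(size=hidden_dim)

    def forward(self, slices):
        # slices: (num_slices, num_regions, feat_dim) -- region features per
        # slice, standing in for DenseNet feature maps.
        h = np.zeros(self.w_h.shape[0])
        slice_logits = []
        for regions in slices:
            weights = softmax(regions @ self.w_attn)          # attend over regions
            x = weights @ regions                             # attention-pooled slice feature
            h = np.tanh(self.w_h @ h + self.w_x @ x)          # simple RNN step (LSTM in the paper)
            slice_logits.append(self.w_out @ h)
        slice_probs = 1.0 / (1.0 + np.exp(-np.array(slice_logits)))
        ct_positive = bool((slice_probs > 0.5).any())         # any positive slice flags the CT
        return slice_probs, ct_positive
```

A usage example: `RADnetSketch().forward(features)` with `features` of shape `(num_slices, num_regions, feat_dim)` returns per-slice hemorrhage probabilities and a single CT-level decision.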
Anatomical labeling of brain CT scan anomalies using multi-context nearest neighbor relation networks
Authors: Srikrishna Varadarajan, Muktabh Mayank Srivastava, Monika Grewal, Pulkit Kumar
This work develops a deep learning methodology for automated anatomical labeling of a given region of interest (ROI) in brain computed tomography (CT) scans. We combine local and global context to obtain a representation of the ROI, and then use Relation Networks (RNs) to predict the anatomy of the ROI from its relationship score with each class. Further, we propose a novel nearest-neighbors strategy for training RNs: rather than relating the target ROI to all data points in each class, we train RNs to learn its relationship with the joint representation of its nearest neighbors in each class. The proposed strategy leads to better training of RNs and increased performance compared to the baseline RN.
What we propose here is a meta-learning algorithm, in the sense that it is class-independent and can generalize to any new anatomy by adding a small set of examples (a few hundred slices). The proposed algorithm can effectively scale to any number of anatomies with minimal effort.
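The nearest-neighbor training strategy can be illustrated with a minimal numpy sketch. This is an assumption-laden stand-in, not the paper's network: ROI embeddings are plain vectors, "joint representation" is taken as the mean of the k nearest neighbors, and the relation module is a tiny random MLP. It shows why the approach is class-independent: adding a new anatomy only means adding its embeddings to the dictionary.

```python
import numpy as np

def nn_relation_predict(target, class_embeddings, k=3, seed=0):
    """Hypothetical sketch of the nearest-neighbor Relation Network idea:
    score the target ROI against the joint (here: mean) representation of
    its k nearest neighbors within each class, instead of all class members.
    The relation module is a stand-in random two-layer MLP."""
    rng = np.random.default_rng(seed)
    dim = target.shape[0]
    w1 = rng.normal(size=(16, 2 * dim)) * 0.1        # placeholder relation module
    w2 = rng.normal(size=16) * 0.1
    scores = {}
    for label, embs in class_embeddings.items():
        dists = np.linalg.norm(embs - target, axis=1)  # distances to class members
        nearest = embs[np.argsort(dists)[:k]]          # k nearest neighbors in this class
        joint = nearest.mean(axis=0)                   # joint neighbor representation
        pair = np.concatenate([target, joint])         # (target, class) pair input
        scores[label] = float(w2 @ np.tanh(w1 @ pair)) # relation score for this class
    return max(scores, key=scores.get), scores
```

Scaling to a new anatomy amounts to `class_embeddings["new_anatomy"] = new_embeddings`; no per-class parameters need retraining in this sketch.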