Imon Banerjee is an Instructor with a joint appointment in the Departments of Radiology and Biomedical Data Science. From 2016, she was a postdoctoral scholar in the Laboratory of Quantitative Imaging at Stanford University. She received her Ph.D. from the University of Genova, Italy, in 2016. During her Ph.D., she held a Marie Curie European fellowship and worked as an early-stage researcher at the Institute for Applied Mathematics and Information Technologies, National Research Council, Italy, where she developed novel techniques for building patient-specific 3D computational models. She completed her Master's thesis at the European Organization for Nuclear Research (CERN), Geneva. Her research focuses on developing big data mining and predictive modeling techniques to support clinical diagnosis and treatment.

Academic Appointments

  • Instructor, Radiology

Honors & Awards

  • Marie Curie fellowship, FP7 Marie Curie Initial Training Networks (ITN) (2012 - 2015)
  • Master Thesis Grant, The European Organization for Nuclear Research (2010)
  • GATE Scholarship, Ministry of Human Resource Development (MHRD), Government of India (2009 - 2011)

Professional Education

  • Postdoc, Stanford, Bioinformatics (2017)
  • Ph.D., University of Genova, Computer Science (2016)
  • Master of Technology, National Institute of Technology, Durgapur, Information Technology (2011)


All Publications

  • Automated Detection of Measurements and Their Descriptors in Radiology Reports Using a Hybrid Natural Language Processing Algorithm. Journal of digital imaging Bozkurt, S., Alkim, E., Banerjee, I., Rubin, D. L. 2019


    Radiological measurements are reported in free-text reports, and it is challenging to extract such measures for treatment planning tasks such as lesion summarization and cancer response assessment. The purpose of this work is to develop and evaluate a natural language processing (NLP) pipeline that can extract measurements and their core descriptors, such as temporality, anatomical entity, imaging observation, RadLex descriptors, series number, image number, and segment from a wide variety of radiology reports (MR, CT, and mammogram). We created a hybrid NLP pipeline that integrates rule-based feature extraction modules and a conditional random field (CRF) model for extraction of the measurements from the radiology reports and links them with clinically relevant features such as anatomical entities or imaging observations. The pipeline was trained on 1117 CT/MR reports, and performance of the system was evaluated on an independent set of 100 expert-annotated CT/MR reports and also tested on 25 mammography reports. Against 806 gold-standard measurements in the CT/MR reports, the system detected 813 measurements; 784 were true positives, 29 were false positives, and 22 were false negatives. Similarly, from the mammography reports, 96% of the measurements with their modifiers were extracted correctly. Our approach could enable the development of computerized applications that can utilize summarized lesion measurements from radiology reports of varying modalities and improve practice by tracking the same lesions along multiple radiologic encounters.

    View details for DOI 10.1007/s10278-019-00237-9

    View details for PubMedID 31222557
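
    The rule-based side of a pipeline like this can be illustrated with a small sketch. The pattern and helper below are invented for illustration (they are not the published pipeline): a regular expression that detects 1- to 3-dimensional measurements such as "1.2 x 3.4 cm" in report text.

```python
import re

# Hypothetical sketch of a rule-based measurement detector, in the spirit of
# the paper's feature-extraction modules (not the published pipeline itself).
# Matches 1-, 2-, and 3-dimensional measurements such as "3 mm" or "1.2 x 3.4 cm".
MEASUREMENT = re.compile(
    r"(\d+(?:\.\d+)?)"                      # first dimension
    r"((?:\s*[xX]\s*\d+(?:\.\d+)?){0,2})"   # optional 2nd/3rd dimensions
    r"\s*(mm|cm)\b"                         # unit
)

def extract_measurements(report_text):
    """Return (dimensions, unit) tuples found in a report sentence."""
    results = []
    for m in MEASUREMENT.finditer(report_text):
        dims = [float(m.group(1))]
        dims += [float(d) for d in re.findall(r"\d+(?:\.\d+)?", m.group(2))]
        results.append((dims, m.group(3)))
    return results

print(extract_measurements("Stable lesion measuring 1.2 x 3.4 cm in segment 7."))
```

    In the full system, each detected span would then be passed to the CRF model, which links it to descriptors such as temporality and anatomical entity.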

  • Weakly supervised natural language processing for assessing patient-centered outcome following prostate cancer treatment. JAMIA open Banerjee, I., Li, K., Seneviratne, M., Ferrari, M., Seto, T., Brooks, J. D., Rubin, D. L., Hernandez-Boussard, T. 2019; 2 (1): 150–59


    The population-based assessment of patient-centered outcomes (PCOs) has been limited by the efficient and accurate collection of these data. Natural language processing (NLP) pipelines can determine whether a clinical note within an electronic medical record contains evidence on these data. We present and demonstrate the accuracy of an NLP pipeline that aims to assess the presence, absence, or risk discussion of two important PCOs following prostate cancer treatment: urinary incontinence (UI) and bowel dysfunction (BD). We propose a weakly supervised NLP approach which annotates electronic medical record clinical notes without requiring manual chart review. A weighted function of neural word embedding was used to create a sentence-level vector representation of relevant expressions extracted from the clinical notes. Sentence vectors were used as input for a multinomial logistic model, with the output being either presence, absence, or risk discussion of UI/BD. The classifier was trained based on automated sentence annotation depending only on domain-specific dictionaries (weak supervision). The model achieved an average F1 score of 0.86 for the sentence-level, three-tier classification task (presence/absence/risk) in both UI and BD. The model also outperformed a pre-existing rule-based model for note-level annotation of UI by a significant margin. We demonstrate a machine learning method to categorize clinical notes based on important PCOs that trains a classifier on sentence vector representations labeled with a domain-specific dictionary, which eliminates the need for manual engineering of linguistic rules or manual chart review for extracting the PCOs. The weakly supervised NLP pipeline showed promising sensitivity and specificity for identifying important PCOs in unstructured clinical text notes compared to rule-based algorithms.

    View details for PubMedID 31032481
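
    The sentence-vector step described above can be sketched in a few lines. The toy embeddings, dictionary weights, and vocabulary below are invented for illustration; the paper uses neural word embeddings trained on clinical notes, with weights driven by domain-specific dictionaries.

```python
# Illustrative sketch of a weighted-average sentence vector: each word's
# embedding is scaled by a dictionary weight, then the weighted mean is taken.
# All values here are invented; only the weighting scheme mirrors the idea.
EMBEDDINGS = {
    "urinary":      [0.9, 0.1],
    "incontinence": [0.8, 0.2],
    "denies":       [0.1, 0.9],
}
DICT_WEIGHT = {"incontinence": 2.0}  # hypothetical domain term counts more

def sentence_vector(tokens, dim=2):
    total = [0.0] * dim
    weight_sum = 0.0
    for tok in tokens:
        if tok in EMBEDDINGS:
            w = DICT_WEIGHT.get(tok, 1.0)
            total = [t + w * e for t, e in zip(total, EMBEDDINGS[tok])]
            weight_sum += w
    return [t / weight_sum for t in total] if weight_sum else total

vec = sentence_vector(["denies", "urinary", "incontinence"])
```

    In the paper's pipeline, vectors like `vec` would then be fed to a multinomial logistic classifier over the presence/absence/risk classes.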

  • Automatic Inference of BI-RADS Final Assessment Categories from Narrative Mammography Report Findings. Journal of biomedical informatics Banerjee, I., Bozkurt, S., Alkim, E., Sagreiya, H., Kurian, A. W., Rubin, D. L. 2019: 103137


    We propose an efficient natural language processing approach for inferring the BI-RADS final assessment categories by analyzing only the mammogram findings reported by the mammographer in narrative form. The proposed hybrid method integrates semantic term embedding with distributional semantics, producing a context-aware vector representation of unstructured mammography reports. A large corpus of unannotated mammography reports (300,000) was used to learn the context of the key terms using a distributional semantics approach, and the trained model was applied to generate context-aware vector representations of the reports annotated with BI-RADS categories (22,091). The vectorized reports were utilized to train a supervised classifier to derive the BI-RADS assessment class. Even though the majority of the proposed embedding pipeline is unsupervised, the classifier was able to recognize substantial semantic information for deriving the BI-RADS categorization not only on a holdout internal test set but also on an external validation set (1,900 reports). Our proposed method outperforms a recently published domain-specific rule-based system and could be relevant for evaluating concordance between radiologists. With minimal requirement for task-specific customization, the proposed method can be easily transferred to a different domain to support large-scale text mining or derivation of patient phenotypes.

    View details for PubMedID 30807833

  • Predicting inadequate postoperative pain management in depressed patients: A machine learning approach. PloS one Parthipan, A., Banerjee, I., Humphreys, K., Asch, S. M., Curtin, C., Carroll, I., Hernandez-Boussard, T. 2019; 14 (2): e0210575


    Widely prescribed prodrug opioids (e.g., hydrocodone) require conversion by the liver enzyme CYP-2D6 to exert their analgesic effects. The most commonly prescribed antidepressants, selective serotonin reuptake inhibitors (SSRIs), inhibit CYP-2D6 activity and therefore may reduce the effectiveness of prodrug opioids. We used a machine learning approach to identify patients prescribed a combination of SSRIs and prodrug opioids postoperatively and to examine the effect of this combination on postoperative pain control. Using EHR data from an academic medical center, we identified patients receiving surgery over a 9-year period. We developed and validated natural language processing (NLP) algorithms to extract depression-related information (diagnosis, SSRI use, symptoms) from structured and unstructured data elements. The primary outcome was the difference between the preoperative pain score and postoperative pain at discharge, 3-week, and 8-week time points. We developed computational models to predict the increase or decrease in postoperative pain across the 3 time points by using the patient's EHR data (e.g., medications, vitals, demographics) captured before surgery. We evaluated the generalizability of the model using 10-fold cross-validation, in which the holdout test is repeated 10 times and the mean area under the curve (AUC) is used as the evaluation metric for prediction performance. We identified 4,306 surgical patients with symptoms of depression. A total of 14.1% were prescribed both an SSRI and a prodrug opioid, 29.4% were prescribed an SSRI and a non-prodrug opioid, 18.6% were prescribed a prodrug opioid but were not on SSRIs, and 37.5% were prescribed a non-prodrug opioid but were not on SSRIs. Our NLP algorithm identified depression with an F1 score of 0.95 against manual annotation of 300 randomly sampled clinical notes. On average, patients receiving prodrug opioids had lower average pain scores (p<0.05), with the exception of the SSRI+ group at 3-week postoperative follow-up. However, SSRI+/Prodrug+ patients had significantly worse pain control at discharge and at 3- and 8-week follow-up (p<.01) compared to SSRI+/Prodrug- patients, whereas there was no difference in pain control among the SSRI- patients by prodrug opioid (p>0.05). The machine learning algorithm accurately predicted the increase or decrease of the discharge, 3-week, and 8-week follow-up pain scores when compared to the preoperative pain score using 10-fold cross-validation (mean area under the receiver operating characteristic curve 0.87, 0.81, and 0.69, respectively). Preoperative pain, surgery type, and opioid tolerance were the strongest predictors of postoperative pain control. We provide the first direct clinical evidence that the known ability of SSRIs to inhibit prodrug opioid effectiveness is associated with worse pain control among depressed patients. Current prescribing patterns indicate that prescribers may not account for this interaction when choosing an opioid. The study results imply that prescribers might instead choose direct-acting opioids (e.g., oxycodone or morphine) in depressed patients on SSRIs.

    View details for PubMedID 30726237
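
    The evaluation protocol above (k-fold cross-validation with mean AUC) can be sketched with standard-library Python. Only the protocol mirrors the paper; the data, the rank-sum AUC helper, and the omission of the training step are simplifications for illustration, operating on precomputed (score, label) pairs.

```python
import random

# Sketch of k-fold evaluation with mean AUC over held-out folds.
# The model-fitting step is omitted: each item is a precomputed (score, label).
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def kfold_mean_auc(data, k=10, seed=0):
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]   # round-robin fold assignment
    aucs = []
    for fold in folds:
        labels = [y for _, y in fold]
        scores = [s for s, _ in fold]
        if 0 < sum(labels) < len(labels):    # AUC needs both classes present
            aucs.append(auc(labels, scores))
    return sum(aucs) / len(aucs)
```

    A real pipeline would refit the model on the other k-1 folds before scoring each held-out fold; here the scores stand in for a fitted model's output.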

  • Comparative effectiveness of convolutional neural network (CNN) and recurrent neural network (RNN) architectures for radiology text report classification. Artificial intelligence in medicine Banerjee, I., Ling, Y., Chen, M. C., Hasan, S. A., Langlotz, C. P., Moradzadeh, N., Chapman, B., Amrhein, T., Mong, D., Rubin, D. L., Farri, O., Lungren, M. P. 2018


    This paper explores cutting-edge deep learning methods for information extraction from medical imaging free-text reports at a multi-institutional scale and compares them to a state-of-the-art domain-specific rule-based system (PEFinder) and traditional machine learning methods (SVM and AdaBoost). We proposed two distinct deep learning models, (i) CNN Word-Glove and (ii) a domain phrase attention-based hierarchical recurrent neural network (DPA-HNN), for synthesizing information on pulmonary emboli (PE) from over 7370 clinical thoracic computed tomography (CT) free-text radiology reports collected from four major healthcare centers. Our proposed DPA-HNN model encodes domain-dependent phrases into an attention mechanism and represents a radiology report through a hierarchical RNN structure composed of word-level, sentence-level, and document-level representations. Experimental results suggest that the deep learning models, trained on a single institutional dataset, perform better than the rule-based PEFinder on our multi-institutional test sets. The best F1 score for the presence of PE in an adult patient population was 0.99 (DPA-HNN) and for a pediatric population was 0.99 (HNN), which shows that the deep learning models, although trained on adult data, generalized to a pediatric population with comparable accuracy. Our work suggests the feasibility of broader usage of neural network models in automated classification of multi-institutional imaging text reports for a variety of applications, including evaluation of imaging utilization, imaging yield, clinical decision support tools, and automated classification of large corpora for medical imaging deep learning work.

    View details for PubMedID 30477892

  • Automated Survival Prediction in Metastatic Cancer Patients Using High-Dimensional Electronic Medical Record Data. Journal of the National Cancer Institute Gensheimer, M. F., Henry, A. S., Wood, D. J., Hastie, T. J., Aggarwal, S., Dudley, S. A., Pradhan, P., Banerjee, I., Cho, E., Ramchandran, K., Pollom, E., Koong, A. C., Rubin, D. L., Chang, D. T. 2018


    Background: Oncologists use patients' life expectancy to guide decisions and may benefit from a tool that accurately predicts prognosis. Existing prognostic models generally use only a few predictor variables. We used an electronic medical record dataset to train a prognostic model for patients with metastatic cancer. Methods: The model was trained and tested using 12588 patients treated for metastatic cancer in the Stanford Health Care system from 2008 to 2017. Data sources included provider note text, labs, vital signs, procedures, medication orders, and diagnosis codes. Patients were divided randomly into a training set used to fit the model coefficients and a test set used to evaluate model performance (80%/20% split). A regularized Cox model with 4126 predictor variables was used. A landmarking approach was used due to the multiple observations per patient, with t0 set to the time of metastatic cancer diagnosis. Performance was also evaluated using 399 palliative radiation courses in test set patients. Results: The C-index for overall survival was 0.786 in the test set (averaged across landmark times). For palliative radiation courses, the C-index was 0.745 (95% confidence interval [CI] = 0.715 to 0.775) compared with 0.635 (95% CI = 0.601 to 0.669) for a published model using performance status, primary tumor site, and treated site (two-sided P < .001). Our model's predictions were well-calibrated. Conclusions: The model showed high predictive performance, which will need to be validated using external data. Because it is fully automated, the model can be used to examine providers' practice patterns and could be deployed in a decision support tool to help improve quality of care.

    View details for PubMedID 30346554

  • Probabilistic Prognostic Estimates of Survival in Metastatic Cancer Patients (PPES-Met) Utilizing Free-Text Clinical Narratives. Scientific reports Banerjee, I., Gensheimer, M. F., Wood, D. J., Henry, S., Aggarwal, S., Chang, D. T., Rubin, D. L. 2018; 8 (1): 10037


    We propose a deep learning model, Probabilistic Prognostic Estimates of Survival in Metastatic Cancer Patients (PPES-Met), for estimating short-term life expectancy (>3 months) of patients by analyzing free-text clinical notes in the electronic medical record, while maintaining the temporal visit sequence. In a single framework, we integrated semantic data mapping and a neural embedding technique to produce a text processing method that extracts relevant information from heterogeneous types of clinical notes in an unsupervised manner, and we designed a recurrent neural network to model the temporal dependency of the patient visits. The model was trained on a large dataset (10,293 patients) and validated on a separate dataset (1,818 patients). Our method achieved an area under the ROC curve (AUC) of 0.89. To provide explainability, we developed an interactive graphical tool that may improve physician understanding of the basis for the model's predictions. The high accuracy and explainability of the PPES-Met model may enable it to be used as a decision support tool to personalize metastatic cancer treatment and provide valuable assistance to physicians.

    View details for PubMedID 29968730

  • Supporting shared hypothesis testing in the biomedical domain JOURNAL OF BIOMEDICAL SEMANTICS Agibetov, A., Jimenez-Ruiz, E., Ondresik, M., Solimando, A., Banerjee, I., Guerrini, G., Catalano, C. E., Oliveira, J. M., Patane, G., Reis, R. L., Spagnuolo, M. 2018; 9: 9


    Pathogenesis of inflammatory diseases can be tracked by studying the causality relationships among the factors contributing to their development. We could, for instance, hypothesize on the connections of the pathogenesis outcomes to the observed conditions. To prove such causal hypotheses, we would need the full understanding of the causal relationships, and we would have to provide all the necessary evidence to support our claims. In practice, however, we might not possess all the background knowledge on the causality relationships, and we might be unable to collect all the evidence to prove our hypotheses. In this work we propose a methodology for the translation of biological knowledge on causality relationships of biological processes and their effects on conditions to a computational framework for hypothesis testing. The methodology consists of two main points: hypothesis graph construction from the formalization of the background knowledge on causality relationships, and confidence measurement in a causality hypothesis as a normalized weighted path computation in the hypothesis graph. In this framework, we can simulate the collection of evidence and assess confidence in a causality hypothesis by measuring it proportionally to the amount of available knowledge and collected evidence. We evaluate our methodology on a hypothesis graph that represents both the contributing factors which may cause cartilage degradation and the factors which might be caused by cartilage degradation during osteoarthritis. Hypothesis graph construction has proven to be robust to the addition of potentially contradictory information on simultaneously positive and negative effects. The obtained confidence measures for the specific causality hypotheses have been validated by our domain experts and correspond closely to their subjective assessments of confidence in the investigated hypotheses. Overall, our methodology for a shared hypothesis testing framework exhibits important properties that researchers will find useful in literature review for their experimental studies, planning and prioritizing evidence collection procedures, and testing their hypotheses with different depths of knowledge on causal dependencies of biological processes and their effects on the observed conditions.

    View details for PubMedID 29422110
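
    The weighted-path confidence idea can be illustrated with a toy hypothesis graph. The graph, edge weights, and node names below are invented; the sketch scores a hypothesis "src causes dst" as the best product-of-weights path, a simplified stand-in for the paper's normalized weighted path computation.

```python
# Toy hypothesis graph: edge weight = confidence in a direct causal link.
# All nodes and weights are illustrative, not from the paper.
GRAPH = {
    "inflammation": {"enzyme_activity": 0.9},
    "enzyme_activity": {"cartilage_degradation": 0.8},
    "cartilage_degradation": {},
}

def path_confidence(graph, src, dst, visited=None):
    """Best product-of-weights path from src to dst (0.0 if no path)."""
    if src == dst:
        return 1.0
    visited = (visited or set()) | {src}   # guard against cycles
    best = 0.0
    for nxt, w in graph.get(src, {}).items():
        if nxt not in visited:
            best = max(best, w * path_confidence(graph, nxt, dst, visited))
    return best
```

    Under this scheme, adding evidence for an intermediate link raises its edge weight and thereby the confidence of every hypothesis whose best path uses it.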

  • Automatic information extraction from unstructured mammography reports using distributed semantics JOURNAL OF BIOMEDICAL INFORMATICS Gupta, A., Banerjee, I., Rubin, D. L. 2018; 78: 78–86


    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules.

    View details for PubMedID 29329701

  • Integrative Personal Omics Profiles during Periods of Weight Gain and Loss. Cell systems Piening, B. D., Zhou, W., Contrepois, K., Röst, H., Gu Urban, G. J., Mishra, T., Hanson, B. M., Bautista, E. J., Leopold, S., Yeh, C. Y., Spakowicz, D., Banerjee, I., Chen, C., Kukurba, K., Perelman, D., Craig, C., Colbert, E., Salins, D., Rego, S., Lee, S., Zhang, C., Wheeler, J., Sailani, M. R., Liang, L., Abbott, C., Gerstein, M., Mardinoglu, A., Smith, U., Rubin, D. L., Pitteri, S., Sodergren, E., McLaughlin, T. L., Weinstock, G. M., Snyder, M. P. 2018


    Advances in omics technologies now allow an unprecedented level of phenotyping for human diseases, including obesity, in which individual responses to excess weight are heterogeneous and unpredictable. To aid the development of better understanding of these phenotypes, we performed a controlled longitudinal weight perturbation study combining multiple omics strategies (genomics, transcriptomics, multiple proteomics assays, metabolomics, and microbiomics) during periods of weight gain and loss in humans. Results demonstrated that: (1) weight gain is associated with the activation of strong inflammatory and hypertrophic cardiomyopathy signatures in blood; (2) although weight loss reverses some changes, a number of signatures persist, indicative of long-term physiologic changes; (3) we observed omics signatures associated with insulin resistance that may serve as novel diagnostics; (4) specific biomolecules were highly individualized and stable in response to perturbations, potentially representing stable personalized markers. Most data are available open access and serve as a valuable resource for the community.

    View details for PubMedID 29361466

  • Relevance Feedback for Enhancing Content Based Image Retrieval and Automatic Prediction of Semantic Image Features: Application to Bone Tumor Radiographs. Journal of biomedical informatics Banerjee, I., Kurtz, C., Edward Devorah, A., Do, B., Rubin, D. L., Beaulieu, C. F. 2018


    The majority of current medical CBIR systems perform retrieval based only on "imaging signatures" generated by extracting pixel-level quantitative features, and only rarely has a feedback mechanism been incorporated to improve retrieval performance. In addition, current medical CBIR approaches do not routinely incorporate semantic terms that model the user's high-level expectations, and this can limit CBIR performance. We propose a retrieval framework that exploits a hybrid feature space (HFS), built by integrating low-level image features and high-level semantic terms through rounds of relevance feedback (RF), and performs similarity-based retrieval to support semi-automatic image interpretation. The novelty of the proposed system is that it can impute the semantic features of the query image by reformulating the query vector representation in the HFS via user feedback. We implemented our framework as a prototype that performs retrieval over a database of 811 radiographic images containing 69 unique types of bone tumors. We evaluated the system performance by conducting independent reading sessions with two subspecialist musculoskeletal radiologists. For the test set, the proposed retrieval system at the fourth RF iteration of the sessions conducted with both radiologists achieved a mean average precision (MAP) value ≥ 0.90, where the initial MAP with baseline CBIR was 0.20. In addition, we also achieved high prediction accuracy (>0.8) for the majority of the semantic features automatically predicted by the system. Our proposed framework addresses some limitations of existing CBIR systems by incorporating user feedback and simultaneously predicting the semantic features of the query image. This obviates the need for the user to provide those terms and makes CBIR search more efficient for inexperienced users/trainees. Encouraging results achieved in the current study highlight possible new directions in radiological image interpretation employing semantic CBIR combined with relevance feedback of visual similarity.

    View details for PubMedID 29981490
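
    Query-vector reformulation from relevance feedback can be illustrated with the classic Rocchio update, a standard technique in the same spirit as the paper's RF step (not necessarily the paper's exact formula). The vectors and coefficients below are invented for illustration.

```python
# Rocchio-style relevance-feedback update: move the query vector in the
# hybrid feature space toward user-marked relevant items and away from
# non-relevant ones. Coefficients and vectors are illustrative only.
def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    dim = len(query)

    def centroid(vectors):
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    rel_c, non_c = centroid(relevant), centroid(nonrelevant)
    return [alpha * q + beta * r - gamma * n
            for q, r, n in zip(query, rel_c, non_c)]
```

    Repeating this update over several feedback rounds shifts the query toward the region of the HFS occupied by images the reader marked relevant, which is how retrieval precision can climb across RF iterations.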

  • Radiology report annotation using intelligent word embeddings: Applied to multi-institutional chest CT cohort JOURNAL OF BIOMEDICAL INFORMATICS Banerjee, I., Chen, M. C., Lungren, M. P., Rubin, D. L. 2018; 77: 11–20


    We proposed an unsupervised hybrid method, Intelligent Word Embedding (IWE), that combines a neural embedding method with a semantic dictionary mapping technique to create a dense vector representation of unstructured radiology reports. We applied IWE to generate embeddings of chest CT radiology reports from two healthcare organizations and utilized the vector representations to semi-automate report categorization based on clinically relevant categories related to the diagnosis of pulmonary embolism (PE). We benchmarked the performance against a state-of-the-art rule-based tool, PeFinder, and out-of-the-box word2vec. On the Stanford test set, the IWE model achieved an average F1 score of 0.97, whereas PeFinder scored 0.9 and the original word2vec scored 0.94. On the UPMC dataset, the IWE model's average F1 score was 0.94, while PeFinder scored 0.92 and word2vec scored 0.85. The IWE model had the lowest generalization error and the highest F1 scores. Of particular interest, the IWE model (trained on the Stanford dataset) outperformed PeFinder on the UPMC dataset, which was used originally to tailor the PeFinder model.

    View details for PubMedID 29175548

    View details for PubMedCentralID PMC5771955

  • Assessing treatment response in triple-negative breast cancer from quantitative image analysis in perfusion magnetic resonance imaging. Journal of medical imaging (Bellingham, Wash.) Banerjee, I., Malladi, S., Lee, D., Depeursinge, A., Telli, M., Lipson, J., Golden, D., Rubin, D. L. 2018; 5 (1): 011008


    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is sensitive but not specific to determining treatment response in early stage triple-negative breast cancer (TNBC) patients. We propose an efficient computerized technique for assessing treatment response, specifically the residual tumor (RT) status and pathological complete response (pCR), in response to neoadjuvant chemotherapy. The proposed approach is based on Riesz wavelet analysis of pharmacokinetic maps derived from noninvasive DCE-MRI scans, obtained before and after treatment. We compared the performance of Riesz features with the traditional gray level co-occurrence matrices and a comprehensive characterization of the lesion that includes a wide range of quantitative features (e.g., shape and boundary). We investigated a set of predictive models ([Formula: see text]) incorporating distinct combinations of quantitative characterizations and statistical models at different time points of the treatment and some area under the receiver operating characteristic curve (AUC) values we reported are above 0.8. The most efficient models are based on first-order statistics and Riesz wavelets, which predicted RT with an AUC value of 0.85 and pCR with an AUC value of 0.83, improving results reported in a previous study by [Formula: see text]. Our findings suggest that Riesz texture analysis of TNBC lesions can be considered a potential framework for optimizing TNBC patient care.

    View details for PubMedID 29134191

    View details for PubMedCentralID PMC5668126

  • A Scalable Machine Learning Approach for Inferring Probabilistic US-LI-RADS Categorization. AMIA ... Annual Symposium proceedings. AMIA Symposium Banerjee, I., Choi, H. H., Desser, T., Rubin, D. L. 2018; 2018: 215–24


    We propose a scalable computerized approach for large-scale inference of Liver Imaging Reporting and Data System (LI-RADS) final assessment categories in narrative ultrasound (US) reports. Although our model was trained on reports created using a LI-RADS template, it was also able to infer LI-RADS scoring for unstructured reports that were created before the LI-RADS guidelines were established. No human-labelled data was required in any step of this study; for training, LI-RADS scores were automatically extracted from those reports that contained structured LI-RADS scores, and it translated the derived knowledge to reasoning on unstructured radiology reports. By providing automated LI-RADS categorization, our approach may enable standardizing screening recommendations and treatment planning of patients at risk for hepatocellular carcinoma, and it may facilitate AI-based healthcare research with US images by offering large scale text mining and data gathering opportunities from standard hospital clinical data repositories.

    View details for PubMedID 30815059
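
    The automatic label-extraction step can be sketched simply: reports written with a structured template already state a LI-RADS score, which a pattern can pull out and use as a training label for the unstructured reports. The template phrasing and pattern below are invented for illustration, not taken from the paper.

```python
import re

# Hypothetical weak-labeling sketch: pull a structured LI-RADS score out of
# template-style report text so it can serve as a training label.
LIRADS = re.compile(r"LI-RADS(?:\s+category)?[:\s]+(LR-?[1-5M])", re.IGNORECASE)

def weak_label(report_text):
    """Return a normalized score like 'LR-3', or None if no score is stated."""
    m = LIRADS.search(report_text)
    if not m:
        return None
    # Normalize 'LR3' / 'lr-3' variants to 'LR-3'.
    return m.group(1).upper().replace("LR", "LR-").replace("--", "-")
```

    Reports where `weak_label` returns None (e.g., pre-guideline free text) are exactly the ones the trained classifier would then score.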

  • Inferring Generative Model Structure with Static Analysis. Advances in neural information processing systems Varma, P., He, B., Bajaj, P., Banerjee, I., Khandwala, N., Rubin, D. L., Re, C. 2017; 30: 239?49


    Obtaining enough labeled data to robustly train complex discriminative models is a major bottleneck in the machine learning pipeline. A popular solution is combining multiple sources of weak supervision using generative models. The structure of these models affects training label quality, but is difficult to learn without any ground truth labels. We instead rely on these weak supervision sources having some structure by virtue of being encoded programmatically. We present Coral, a paradigm that infers generative model structure by statically analyzing the code for these heuristics, thus reducing the data required to learn structure significantly. We prove that Coral's sample complexity scales quasilinearly with the number of heuristics and number of relations found, improving over the standard sample complexity, which is exponential in n for identifying nth degree relations. Experimentally, Coral matches or outperforms traditional structure learning approaches by up to 3.81 F1 points. Using Coral to model dependencies instead of assuming independence results in better performance than a fully supervised model by 3.07 accuracy points when heuristics are used to label radiology data without ground truth labels.

    View details for PubMedID 29391769

  • Transfer learning on fused multiparametric MR images for classifying histopathological subtypes of rhabdomyosarcoma. Computerized medical imaging and graphics Banerjee, I., Crawley, A., Bhethanabotla, M., Daldrup-Link, H. E., Rubin, D. L. 2017


    This paper presents a deep-learning-based CADx for the differential diagnosis of embryonal (ERMS) and alveolar (ARMS) subtypes of rhabdomyosarcoma (RMS) solely by analyzing multiparametric MR images. We formulated an automated pipeline that creates a comprehensive representation of the tumor by performing a fusion of diffusion-weighted MR scans (DWI) and gadolinium chelate-enhanced T1-weighted MR scans (MRI). Finally, we adopted a transfer learning approach in which a pre-trained deep convolutional neural network was fine-tuned on the fused images to classify the two RMS subtypes. We achieved 85% cross-validation prediction accuracy from the fine-tuned deep CNN model. Our system can be exploited to provide a fast, efficient, and reproducible diagnosis of RMS subtypes with less human interaction. The framework offers an efficient integration between advanced image processing methods and cutting-edge deep learning techniques, which can be extended to deal with other clinical domains that involve multimodal imaging for disease diagnosis.

    View details for DOI 10.1016/j.compmedimag.2017.05.002

    View details for PubMedID 28515009
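The fusion step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: each MR parameter map is normalized to [0, 1] and the maps are stacked as channels; the third channel (the mean of the two) is an assumption made here to fit a standard three-channel CNN input.

```python
import numpy as np

def min_max(img):
    # Normalize one MR parameter map to [0, 1].
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def fuse(dwi, t1):
    # Stack the normalized DWI and T1 maps as channels; the mean channel
    # is a convenience choice (assumed here) to match an RGB-shaped input
    # expected by a pre-trained CNN.
    a, b = min_max(dwi), min_max(t1)
    return np.stack([a, b, (a + b) / 2.0], axis=-1)

# Synthetic parameter maps standing in for co-registered DWI and T1 slices.
dwi = np.random.rand(64, 64) * 4000.0
t1 = np.random.rand(64, 64) * 255.0
fused = fuse(dwi, t1)   # shape (64, 64, 3), values in [0, 1]
```

In practice the fused array would be resized to the pre-trained network's input resolution before fine-tuning.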

  • Combination of visual and symbolic knowledge: A survey in anatomy. Computers in biology and medicine Banerjee, I., Patané, G., Spagnuolo, M. 2017; 80: 148-157


    Anatomy is among the most extensively studied fields in medicine, and it has produced a large body of knowledge that is heterogeneous and spans aspects that are largely independent of one another. Visual and symbolic modalities are the main means of expressing knowledge about human anatomy and are crucial to the evolution of computational anatomy. In particular, a tight integration of visual and symbolic modalities is beneficial for supporting knowledge-driven methods in biomedical investigation. In this paper, we review previous work on the presentation and sharing of anatomical knowledge and on the development of advanced methods for computational anatomy, focusing also on the key research challenges in harmonizing symbolic knowledge with spatial 3D data.

    View details for DOI 10.1016/j.compbiomed.2016.11.018

    View details for PubMedID 27940289

  • Radiology Report Annotation using Intelligent Word Embeddings: Applied to Multi-institutional Chest CT Cohort Journal of Biomedical Informatics Banerjee, I., et al. 2017

    View details for DOI 10.1016/j.jbi.2017.11.01

  • Inferring Generative Model Structure with Static Analysis Varma, P., He, B., Bajaj, P., Khandwala, N., Banerjee, I., Rubin, D., Re, C. In: Guyon, Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2017
  • Computerized Prediction of Radiological Observations Based on Quantitative Feature Analysis: Initial Experience in Liver Lesions. Journal of digital imaging Banerjee, I., Beaulieu, C. F., Rubin, D. L. 2017; 30 (4): 506–18


    We propose a computerized framework that, given a region of interest (ROI) circumscribing a lesion, not only predicts radiological observations related to the lesion characteristics with 83.2% average prediction accuracy but also derives explicit associations between low-level imaging features and high-level semantic terms by exploiting their statistical correlation. Such direct association between semantic concepts and low-level imaging features can be leveraged to build a powerful annotation system for radiological images that not only allows the computer to infer the semantics from diverse medical images and to run automated reasoning for making diagnostic decisions but also provides a "human-interpretable explanation" of the system output to facilitate better end-user understanding of computer-based diagnostic decisions. The core component of our framework is a radiological observation detection algorithm that maximizes the low-level imaging feature relevancy for each high-level semantic term. On a liver lesion CT dataset, we implemented our framework by incorporating a large set of state-of-the-art low-level imaging features. Additionally, we included a novel feature that quantifies lesions present within the liver that have a similar appearance to the primary lesion identified by the radiologist. Our framework achieved a high prediction accuracy (83.2%), and the derived association between semantic concepts and imaging features closely matches human expectation. The framework has so far been tested only on liver lesion CT images, but it can be applied to other imaging domains.

    View details for PubMedID 28639186
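The statistical-correlation idea above can be sketched with toy numbers; the paper's actual feature-relevancy algorithm is more involved, and the feature values and term labels below are illustrative. A low-level feature such as mean ROI intensity can be associated with a high-level semantic term such as "hypodense" by measuring how strongly they co-vary across lesions.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: mean ROI intensity per lesion vs. whether the radiologist
# applied the semantic term "hypodense" (1) or not (0).
intensity = [30, 35, 80, 90, 33, 85]
hypodense = [1, 1, 0, 0, 1, 0]
r = pearson(intensity, hypodense)   # strongly negative: darker lesions -> "hypodense"
```

A strongly negative coefficient here would flag mean intensity as a relevant low-level feature for the term "hypodense".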

  • Intelligent Word Embeddings of Free-Text Radiology Reports. AMIA ... Annual Symposium proceedings. AMIA Symposium Banerjee, I., Madhavan, S., Goldman, R. E., Rubin, D. L. 2017; 2017: 411–20


    Radiology reports are a rich resource for advancing deep learning applications in medicine by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the ambiguity and subtlety of natural language. We propose a hybrid strategy that combines semantic-dictionary mapping and word2vec modeling for creating dense vector embeddings of free-text radiology reports. Our method leverages the benefits of both semantic-dictionary mapping as well as unsupervised learning. Using the vector representation, we automatically classify the radiology reports into three classes denoting confidence in the diagnosis of intracranial hemorrhage by the interpreting radiologist. We performed experiments with varying hyperparameter settings of the word embeddings and a range of different classifiers. Best performance achieved was a weighted precision of 88% and weighted recall of 90%. Our work offers the potential to leverage unstructured electronic health record data by allowing direct analysis of narrative clinical notes.

    View details for PubMedID 29854105
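The hybrid embedding strategy can be sketched as follows; the semantic dictionary, vocabulary, and two-dimensional vectors are toy stand-ins for the paper's dictionary mappings and word2vec embeddings. Synonyms are first mapped to canonical terms, then per-word vectors are averaged into one report vector suitable for a downstream classifier.

```python
# Toy semantic dictionary: map surface forms to canonical terms.
SEMANTIC_MAP = {"bleed": "hemorrhage", "haemorrhage": "hemorrhage"}

# Toy 2-D word vectors standing in for trained word2vec embeddings.
EMBEDDINGS = {
    "hemorrhage": [1.0, 0.0],
    "no":         [0.0, 1.0],
    "acute":      [0.5, 0.5],
}

def embed_report(text):
    # Canonicalize each token, look up its vector, and average.
    vec, n = [0.0, 0.0], 0
    for tok in text.lower().split():
        tok = SEMANTIC_MAP.get(tok, tok)
        if tok in EMBEDDINGS:
            e = EMBEDDINGS[tok]
            vec = [vec[0] + e[0], vec[1] + e[1]]
            n += 1
    return [v / n for v in vec] if n else vec

v = embed_report("No acute bleed")   # dense vector for the whole report
```

The resulting dense vector would then be fed to a classifier predicting the radiologist's confidence class (e.g., for intracranial hemorrhage).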

  • Computerized Prediction of Radiological Observations Based on Quantitative Feature Analysis: Initial Experience in Liver Lesions Journal of Digital Imaging Banerjee, I. 2017: 506–18

    View details for DOI 10.1007/s10278-017-9987-0

    View details for PubMedCentralID PMC5537098

  • Semantics-driven annotation of patient-specific 3D data: a step to assist diagnosis and treatment of rheumatoid arthritis VISUAL COMPUTER Banerjee, I., Agibetov, A., Catalano, C. E., Patane, G., Spagnuolo, M. 2016; 32 (10): 1337-1349
  • Computerized Multiparametric MR image Analysis for Prostate Cancer Aggressiveness-Assessment NIPS 2016 Workshop on Machine Learning for Health (NIPS ML4HC) Banerjee, I. 2016
  • Generation of 3D Canonical Anatomical Models: An Experience on Carpal Bones NEW TRENDS IN IMAGE ANALYSIS AND PROCESSING - ICIAP 2015 WORKSHOPS Banerjee, I., Laga, H., Patane, G., Kurtek, S., Srivastava, A., Spagnuolo, M. 2015; 9281: 167-174
  • Semantic annotation of 3D anatomical models to support diagnosis and follow-up analysis of musculoskeletal pathologies International Journal of Computer Assisted Radiology and Surgery Banerjee, I. 2015
  • Accessing and Representing Knowledge in the Medical Field: Visual and Lexical Modalities. 3D Multiscale Physiological Human. Banerjee, I. Springer, London. 2013
