Bio

I'm a PhD student in Biomedical Informatics, advised by Sanjay Basu. Previously, I received a BS in Statistics from the University of Chicago. My research interests include biostatistical methods and machine learning, with applications to public health and social justice.

Publications

  • Forecasting Internally Displaced Population Migration Patterns in Syria and Yemen. Disaster medicine and public health preparedness Huynh, B. Q., Basu, S. 2019: 1–6

    Abstract

    OBJECTIVES: Armed conflict has contributed to an unprecedented number of internally displaced persons (IDPs), individuals who are forced out of their homes but remain within their country. IDPs often urgently require shelter, food, and healthcare, yet prediction of when IDPs will migrate to an area remains a major challenge for aid delivery organizations. We sought to develop an IDP migration forecasting framework that could empower humanitarian aid groups to more effectively allocate resources during conflicts. METHODS: We modeled monthly IDP migration between provinces within Syria and within Yemen using data on food prices, fuel prices, wages, location, time, and conflict reports. We compared machine learning methods with baseline persistence methods of forecasting. RESULTS: We found a machine learning approach that more accurately forecast migration trends than baseline persistence methods. A random forest model outperformed the best persistence model in terms of root mean square error of log migration by 26% and 17% for the Syria and Yemen datasets, respectively. CONCLUSIONS: Integrating diverse data sources into a machine learning model appears to improve IDP migration prediction. Further work should examine whether implementation of such models can enable proactive aid allocation for IDPs in anticipation of forecast arrivals.

    View details for DOI 10.1017/dmp.2019.73

    View details for PubMedID 31452495
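
    Below is a minimal Python sketch of the comparison described in the abstract: a persistence baseline (next month's migration equals this month's) against a random forest trained on covariates of the kind the paper lists (food prices, fuel prices, wages, conflict reports). All data and feature layouts here are synthetic stand-ins, not the paper's actual inputs.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)
      n = 500  # synthetic province-month observations

      # Illustrative covariates: food price, fuel price, wage, conflict-event count.
      X = rng.normal(size=(n, 4))
      log_migration = 0.6 * X[:, 0] - 0.3 * X[:, 3] + rng.normal(scale=0.5, size=n)
      # Last month's value, which the persistence baseline simply carries forward.
      log_migration_prev = log_migration + rng.normal(scale=0.4, size=n)

      train, test = np.arange(n) < 400, np.arange(n) >= 400
      features = np.column_stack([X, log_migration_prev])

      rf = RandomForestRegressor(n_estimators=500, random_state=0)
      rf.fit(features[train], log_migration[train])

      def rmse(y, pred):
          return np.sqrt(mean_squared_error(y, pred))

      print("persistence RMSE:", rmse(log_migration[test], log_migration_prev[test]))
      print("random forest RMSE:", rmse(log_migration[test], rf.predict(features[test])))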

  • Breast lesion classification based on dynamic contrast-enhanced magnetic resonance images sequences with long short-term memory networks. Journal of medical imaging (Bellingham, Wash.) Antropova, N., Huynh, B., Li, H., Giger, M. L. 2019; 6 (1): 011002

    Abstract

    We present a breast lesion classification methodology, based on four-dimensional (4-D) dynamic contrast-enhanced magnetic resonance images (DCE-MRI), using recurrent neural networks in combination with a pretrained convolutional neural network (CNN). The method captures not only two-dimensional image features but also the temporal enhancement patterns present in DCE-MRI. We train a long short-term memory (LSTM) network on temporal sequences of feature vectors extracted from the dynamic MRI sequences. To capture the local changes in lesion enhancement, the feature vectors are obtained from various levels of a pretrained CNN. We compare the LSTM method's performance to that of a CNN fine-tuned on "RGB" MRIs, formed by precontrast, first, and second postcontrast MRIs. The LSTM significantly outperformed the fine-tuned CNN in the task of distinguishing benign and malignant lesions (AUC_LSTM = 0.88 vs. AUC_fine-tuned = 0.84, p = 0.00085). Our method captures clinically useful information carried by the full 4-D dynamic MRI sequence and outperforms the standard fine-tuning method.

    View details for PubMedID 30840721
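
    A minimal PyTorch sketch of the classifier structure described in the abstract: each DCE-MRI timepoint is summarized by a CNN feature vector, and an LSTM reads the temporal sequence to produce a benign/malignant logit. CNN feature extraction is stubbed out with random tensors, and the dimensions and single-layer architecture are illustrative assumptions rather than the paper's exact configuration.

      import torch
      import torch.nn as nn

      class LesionLSTM(nn.Module):
          def __init__(self, feat_dim=512, hidden_dim=64):
              super().__init__()
              self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
              self.head = nn.Linear(hidden_dim, 1)  # benign/malignant logit

          def forward(self, x):           # x: (batch, timepoints, feat_dim)
              _, (h_n, _) = self.lstm(x)  # h_n: (1, batch, hidden_dim)
              return self.head(h_n[-1])   # one logit per lesion

      # Stand-in for pretrained-CNN features: 8 lesions, 5 dynamic timepoints each.
      features = torch.randn(8, 5, 512)
      labels = torch.randint(0, 2, (8, 1)).float()

      model = LesionLSTM()
      loss = nn.BCEWithLogitsLoss()(model(features), labels)
      loss.backward()  # one illustrative training step
      print("loss:", loss.item())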

  • Recurrent Neural Networks for Breast Lesion Classification based on DCE-MRIs. Antropova, N., Huynh, B., Giger, M., Petrick, N., Mori, K. SPIE. 2018

    View details for DOI 10.1117/12.2293265

    View details for Web of Science ID 000432546900091

  • Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms. Journal of medical imaging (Bellingham, Wash.) Li, H., Giger, M. L., Huynh, B. Q., Antropova, N. O. 2017; 4 (4): 041304

    Abstract

    To evaluate deep learning in the assessment of breast cancer risk, convolutional neural networks (CNNs) with transfer learning were used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA). A total of 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk" dataset (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both, in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using CNN [area under the curve [Formula: see text]; standard error [Formula: see text]] and RTA ([Formula: see text]; [Formula: see text]) in distinguishing BRCA1/2 carriers and low-risk women. However, in distinguishing unilateral cancer patients and low-risk women, performance was significantly greater with CNN ([Formula: see text]; [Formula: see text]) compared to RTA ([Formula: see text]; [Formula: see text]). Fusion classifiers performed significantly better than the RTA-alone classifiers, with AUC values of 0.86 and 0.84 in differentiating BRCA1/2 carriers from low-risk women and unilateral cancer patients from low-risk women, respectively. In conclusion, deep-learning-extracted parenchymal characteristics from FFDMs performed as well as, or better than, conventional texture analysis in the task of distinguishing between cancer risk populations.

    View details for PubMedID 28924576

    View details for PubMedCentralID PMC5596198
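
    A hedged sketch of the fusion idea the abstract reports: one SVM trained on CNN-extracted features, one on texture (RTA-style) features, with their decision scores averaged into a fusion score. Data are synthetic, and the paper's actual classifiers and fusion rule may differ.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n = 400
      y = rng.integers(0, 2, size=n)  # 1 = high-risk, 0 = low-risk (synthetic)
      cnn_feats = rng.normal(size=(n, 100)) + 0.4 * y[:, None]  # stand-in CNN features
      rta_feats = rng.normal(size=(n, 20)) + 0.3 * y[:, None]   # stand-in texture features

      idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)

      def scores(feats):
          clf = SVC().fit(feats[idx_tr], y[idx_tr])
          return clf.decision_function(feats[idx_te])

      s_cnn, s_rta = scores(cnn_feats), scores(rta_feats)
      # Simple average of decision scores as an illustrative soft fusion.
      for name, s in [("CNN", s_cnn), ("RTA", s_rta), ("fusion", (s_cnn + s_rta) / 2)]:
          print(name, "AUC:", round(roc_auc_score(y[idx_te], s), 3))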

  • A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Medical physics Antropova, N., Huynh, B. Q., Giger, M. L. 2017

    Abstract

    Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often hindered by small datasets, long computation time, and the need for extensive image preprocessing. We aim to develop a breast CADx methodology that addresses these issues by exploiting the efficiency of pretrained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features. We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities: dynamic contrast-enhanced MRI (690 cases), full-field digital mammography (245 cases), and ultrasound (1125 cases). From ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in AUC compared to previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions (DCE-MRI [AUC = 0.89 (se = 0.01)], FFDM [AUC = 0.86 (se = 0.01)], and ultrasound [AUC = 0.90 (se = 0.01)]). We proposed a novel breast CADx methodology that can be used to characterize breast lesions more effectively than existing methods. Furthermore, our proposed methodology is computationally efficient and circumvents the need for image preprocessing.

    View details for PubMedID 28681390
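
    A sketch of the extract-and-pool step described in the abstract: feature maps are tapped at several depths of a (normally pretrained) CNN, pooled over the spatial dimensions, and concatenated with handcrafted radiomic features for a downstream classifier. The VGG19 backbone, tap points, pooling choice, and feature counts are illustrative assumptions.

      import torch
      import torchvision.models as models

      # weights=None keeps the sketch offline; in practice load pretrained weights.
      vgg = models.vgg19(weights=None).features.eval()
      image = torch.randn(1, 3, 224, 224)  # stand-in "RGB" image input

      pooled = []
      x = image
      with torch.no_grad():
          for layer in vgg:
              x = layer(x)
              if isinstance(layer, torch.nn.MaxPool2d):         # tap each pooling stage
                  pooled.append(x.amax(dim=(2, 3)).squeeze(0))  # spatial max-pool

      cnn_vector = torch.cat(pooled)                # low- to mid-level CNN features
      handcrafted = torch.randn(38)                 # stand-in radiomic features
      fused = torch.cat([cnn_vector, handcrafted])  # input to a downstream classifier
      print(fused.shape)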

  • Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. Journal of medical imaging (Bellingham, Wash.) Huynh, B. Q., Li, H., Giger, M. L. 2016; 3 (3): 034501

    Abstract

    Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text]]. Further, the performance of ensemble classifiers based on both types was significantly better than that of either classifier type alone ([Formula: see text] versus 0.81, [Formula: see text]). We conclude that transfer learning can improve current CADx methods while also providing standalone classifiers without large datasets, facilitating machine-learning methods in radiomics and precision medicine.

    View details for DOI 10.1117/1.JMI.3.3.034501

    View details for PubMedID 27610399
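
    A small sketch of the evaluation protocol described in the abstract: five-fold cross-validation grouped by lesion, so that all images of one lesion land in the same fold, scoring an SVM on stand-in CNN features by ROC AUC. All data here are synthetic.

      import numpy as np
      from sklearn.model_selection import GroupKFold
      from sklearn.svm import SVC
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(2)
      n_lesions, imgs_per_lesion = 60, 3
      lesion_ids = np.repeat(np.arange(n_lesions), imgs_per_lesion)
      y = rng.integers(0, 2, size=n_lesions)[lesion_ids]    # image-level labels
      X = rng.normal(size=(len(y), 50)) + 0.5 * y[:, None]  # stand-in CNN features

      aucs = []
      for tr, te in GroupKFold(n_splits=5).split(X, y, groups=lesion_ids):
          clf = SVC().fit(X[tr], y[tr])
          aucs.append(roc_auc_score(y[te], clf.decision_function(X[te])))
      print("mean AUC over folds:", round(float(np.mean(aucs)), 3))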
