Yu Zhang received the Ph.D. degree in Control Science and Engineering from East China University of Science and Technology, Shanghai, China. He worked for two years as a Research Associate in the Laboratory for Advanced Brain Signal Processing at the RIKEN Brain Science Institute, Japan, where he focused on developing advanced pattern recognition algorithms for EEG analysis with applications in brain-computer interfaces. He then worked for one year as a Postdoctoral Fellow at the University of North Carolina at Chapel Hill, where he mainly studied functional brain network analysis with fMRI for brain disease diagnosis. He is currently a Postdoctoral Fellow in the Etkin Lab at Stanford University, focusing on machine learning-based analysis of EEG and fMRI connectivity with TMS for medical applications, including subtype identification and treatment outcome prediction. His research interests include computational neuroscience, brain networks, medical image computing, machine learning, artificial intelligence, and signal processing.

Professional Education

  • Postdoctoral Research Fellow, University of North Carolina at Chapel Hill, Neuroimaging Analysis (2016)
  • Ph.D., ECUST, Machine Learning and Brain Signal Processing (2013)
  • Research Associate, RIKEN Brain Science Institute, Japan, Brain Signal Processing (2012)

All Publications

  • Temporally Constrained Sparse Group Spatial Patterns for Motor Imagery BCI IEEE TRANSACTIONS ON CYBERNETICS Zhang, Y., Nam, C. S., Zhou, G., Jin, J., Wang, X., Cichocki, A. 2019; 49 (9): 3322–32


    Common spatial pattern (CSP)-based spatial filtering has been the most popular approach to electroencephalogram (EEG) feature extraction for motor imagery (MI) classification in brain-computer interface (BCI) applications. The effectiveness of CSP is highly affected by the frequency band and time window of the EEG segments. Although numerous algorithms have been designed to optimize the spectral bands of CSP, most of them select the time window in a heuristic way. This is likely to result in suboptimal feature extraction, since the time period during which the brain responds to the mental tasks may not be accurately detected. In this paper, we propose a novel algorithm, namely temporally constrained sparse group spatial pattern (TSGSP), for the simultaneous optimization of filter bands and time windows within CSP to further boost the classification accuracy of MI EEG. Specifically, spectrum-specific signals are first derived by bandpass filtering the raw EEG data at a set of overlapping filter bands. Each of the spectrum-specific signals is further segmented into multiple subseries using a sliding-window approach. We then devise a joint sparse optimization of filter bands and time windows with a temporal smoothness constraint to extract robust CSP features under a multitask learning framework. A linear support vector machine classifier is trained on the optimized EEG features to accurately identify the MI tasks. An experimental study is conducted on three public EEG datasets (BCI Competition III dataset IIIa, BCI Competition IV dataset IIa, and BCI Competition IV dataset IIb) to validate the effectiveness of TSGSP in comparison to several other competing methods. Superior classification performance (averaged accuracies of 88.5%, 83.3%, and 84.3% for the three datasets, respectively) confirms that the proposed algorithm is a promising candidate for performance improvement of MI-based BCIs.

    View details for DOI 10.1109/TCYB.2018.2841847

    View details for Web of Science ID 000470988800009

    View details for PubMedID 29994667
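The filter-bank, sliding-window, and CSP building blocks this abstract combines can be sketched as follows. This is a minimal illustration with made-up parameters (8 channels, 250 Hz, three sub-bands), not the authors' TSGSP joint optimization:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.linalg import eigh

def bandpass(eeg, lo, hi, fs):
    """Zero-phase band-pass filtering of one EEG trial (channels x samples)."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

def sliding_windows(eeg, win, step):
    """Segment a trial into overlapping time windows (subseries)."""
    n = eeg.shape[-1]
    return [eeg[:, s:s + win] for s in range(0, n - win + 1, step)]

def csp_filters(cov_a, cov_b, n_pairs=2):
    """Classic CSP: generalized eigendecomposition of the two class
    covariances; keep filters from both ends of the eigenvalue spectrum."""
    w, v = eigh(cov_a, cov_a + cov_b)
    idx = np.argsort(w)
    pick = np.r_[idx[:n_pairs], idx[-n_pairs:]]
    return v[:, pick]

rng = np.random.default_rng(0)
fs = 250
trial = rng.standard_normal((8, 4 * fs))      # one synthetic 4 s trial, 8 channels
bands = [(4, 8), (8, 12), (12, 16)]           # toy filter bank
filtered = [bandpass(trial, lo, hi, fs) for lo, hi in bands]
windows = sliding_windows(filtered[0], win=2 * fs, step=fs // 2)
```

TSGSP would then jointly select among these band/window combinations under sparse group constraints with temporal smoothness; here the combinations are merely enumerated.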

  • Strength and similarity guided group-level brain functional network construction for MCI diagnosis PATTERN RECOGNITION Zhang, Y., Zhang, H., Chen, X., Liu, M., Zhu, X., Lee, S., Shen, D. 2019; 88: 421–30
  • Sparse Group Representation Model for Motor Imagery EEG Classification IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS Jiao, Y., Zhang, Y., Chen, X., Yin, E., Jin, J., Wang, X., Cichocki, A. 2019; 23 (2): 631–41


    A potential limitation of motor imagery (MI) based brain-computer interfaces (BCIs) is that they usually require a relatively long time to record sufficient electroencephalogram (EEG) data for robust classifier training. The calibration burden during the data acquisition phase will most probably make a subject reluctant to use a BCI system. To alleviate this issue, we propose a novel sparse group representation model (SGRM) for improving the efficiency of MI-based BCI by exploiting intersubject information. Specifically, preceded by feature extraction using common spatial pattern, a composite dictionary matrix is constructed with training samples from both the target subject and other subjects. By explicitly exploiting within-group sparse and group-wise sparse constraints, the most compact representation of a test sample of the target subject is then estimated as a linear combination of columns in the dictionary matrix. Classification is implemented by calculating the class-specific representation residual based on the significant training samples corresponding to the nonzero representation coefficients. Accordingly, the proposed SGRM method effectively reduces the required training samples from the target subject thanks to the auxiliary data available from other subjects. With two public EEG datasets, extensive experimental comparisons are carried out between SGRM and other state-of-the-art approaches. The superior classification performance of our method using 40 trials of the target subject for model calibration (averaged accuracy = 78.2%, kappa = 0.57 and averaged accuracy = 77.7%, kappa = 0.55 for the two datasets, respectively) indicates its promising potential for improving the practicality of MI-based BCI.

    View details for DOI 10.1109/JBHI.2018.2832538

    View details for Web of Science ID 000460666400019

    View details for PubMedID 29994055
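The residual-based classification rule that SGRM shares with sparse-representation classifiers can be sketched with plain l1 sparsity standing in for SGRM's combined within-group and group-wise constraints. Dictionary, data, and parameters below are synthetic and illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(x, D, labels, alpha=0.01):
    """Sparse-representation classification: represent test sample x as a
    sparse combination of dictionary columns D (one column per training
    sample), then assign the class whose columns yield the smallest
    reconstruction residual. Plain l1 here; SGRM adds group structure."""
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, x).coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
d, n_per = 20, 15
proto = {0: rng.standard_normal(d), 1: rng.standard_normal(d)}
cols, labels = [], []
for c, p in proto.items():            # noisy copies of two class prototypes
    for _ in range(n_per):
        cols.append(p + 0.1 * rng.standard_normal(d))
        labels.append(c)
D = np.column_stack(cols)
labels = np.array(labels)
test = proto[1] + 0.1 * rng.standard_normal(d)
pred = src_classify(test, D, labels)
```

In SGRM, the dictionary columns would be CSP features grouped by subject, and the group-wise sparsity would let uninformative subjects drop out of the representation entirely.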

  • Regularized Group Sparse Discriminant Analysis for P300-Based Brain-Computer Interface INTERNATIONAL JOURNAL OF NEURAL SYSTEMS Wu, Q., Zhang, Y., Liu, J., Sun, J., Cichocki, A., Gao, F. 2019; 29 (6): 1950002


    Event-related potentials (ERPs), especially the P300, are popular and effective features for brain-computer interface (BCI) systems based on electroencephalography (EEG). Traditional ERP-based BCI systems may perform poorly with small training samples, i.e., the undersampling problem. In this study, the ERP classification problem was investigated; in particular, ERP classification in the high-dimensional setting, with more features than samples, was studied. A flexible group sparse discriminative analysis algorithm based on Moreau-Yosida regularization was proposed to alleviate the undersampling problem. An optimization problem with a group sparse criterion was presented, and the optimal solution was obtained by using the regularized optimal scoring method. During the alternating iteration procedure, feature selection and classification were performed simultaneously. Two P300-based BCI datasets were used to evaluate the proposed method and compare it with existing standard methods. The experimental results indicated that the features extracted via the proposed method are efficient and provide an overall better P300 classification accuracy than several state-of-the-art methods.

    View details for PubMedID 30880525
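The undersampling regime the abstract targets (more features than samples) can be illustrated with shrinkage-regularized LDA, a standard remedy for the ill-conditioned covariance estimate. This is not the authors' Moreau-Yosida group-sparse algorithm, and the data are synthetic:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n, p = 40, 200                      # fewer samples than features: undersampling
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, p)) + 0.8 * y[:, None]   # class-dependent mean shift

# Ledoit-Wolf "auto" shrinkage regularizes the p x p covariance estimate,
# which is singular when p > n and would break plain LDA
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
acc = clf.score(X, y)
```

The group-sparse approach in the paper goes further by selecting feature groups and classifying in one alternating procedure, rather than only stabilizing the covariance.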

  • Hierarchical feature fusion framework for frequency recognition in SSVEP-based BCIs NEURAL NETWORKS Zhang, Y., Yin, E., Li, F., Zhang, Y., Guo, D., Yao, D., Xu, P. 2019; 119: 1–9


    Effective frequency recognition algorithms are critical in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs). In this study, we present a hierarchical feature fusion framework which can be used to design high-performance frequency recognition methods. The proposed framework includes two primary techniques for fusing features: spatial dimension (SD) fusion and frequency dimension (FD) fusion. Both SD and FD fusions are obtained using a weighted strategy with a nonlinear function. To assess our novel methods, we used the correlated component analysis (CORRCA) method to investigate the efficiency and effectiveness of the proposed framework. Experimental results were obtained from a benchmark dataset of thirty-five subjects and indicate that the extended CORRCA method used within the framework significantly outperforms the original CORRCA method. Accordingly, the proposed framework holds promise to enhance the performance of frequency recognition methods in SSVEP-based BCIs.

    View details for DOI 10.1016/j.neunet.2019.07.007

    View details for PubMedID 31376634
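A weighted nonlinear fusion of per-sub-band scores, in the spirit of the frequency-dimension fusion described here, can be sketched as follows. The weight function w(n) = n^-a + b and all numbers are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def fuse_subband_scores(rho, a=1.25, b=0.25):
    """Fuse per-sub-band correlation scores with a nonlinear weight
    w(n) = n**-a + b (n = 1-based sub-band index), summing weighted
    squared correlations -- one common frequency-dimension fusion rule."""
    n = np.arange(1, len(rho) + 1)
    w = n ** (-a) + b
    return float(np.sum(w * np.asarray(rho) ** 2))

# toy scores for 3 candidate stimulus frequencies across 4 sub-bands
scores = np.array([[0.2, 0.1, 0.1, 0.05],
                   [0.7, 0.5, 0.3, 0.20],
                   [0.3, 0.2, 0.1, 0.10]])
fused = [fuse_subband_scores(r) for r in scores]
target = int(np.argmax(fused))      # frequency with the largest fused score
```

Decaying weights reflect the fact that SSVEP power falls off in higher harmonic sub-bands, so lower sub-bands contribute more to the fused decision.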

  • Correlated Component Analysis for Enhancing the Performance of SSVEP-Based Brain-Computer Interface (vol 26, pg 948, 2018) IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING Zhang, Y., Guo, D., Li, F., Yin, E., Zhang, Y., Li, P., Zhao, Q., Tanaka, T., Yao, D., Xu, P., Nakanishi, M. 2018; 26 (8): 1645–46


    In the above paper [1], a method was proposed that uses correlated component analysis (CORCA) to learn spatial filters from multiple blocks of individual training data in the steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) scenario. To evaluate the performance of CORCA, the task-related component analysis (TRCA)-based method was used as a baseline [2]. For a fair and convincing comparison, the MATLAB code for implementing the TRCA method, provided online by Dr. Masaki Nakanishi, the first author of [2], was used. At that time, the proposed CORCA-based method outperformed the TRCA-based method [1].

    View details for DOI 10.1109/TNSRE.2018.2851318

    View details for Web of Science ID 000441426400017

    View details for PubMedID 30102598

  • Removal of muscle artefacts from few-channel EEG recordings based on multivariate empirical mode decomposition and independent vector analysis ELECTRONICS LETTERS Xu, X., Chen, X., Zhang, Y. 2018; 54 (14): 866–67
  • Two-Stage Frequency Recognition Method Based on Correlated Component Analysis for SSVEP-Based BCI IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING Zhang, Y., Yin, E., Li, F., Zhang, Y., Tanaka, T., Zhao, Q., Cui, Y., Xu, P., Yao, D., Guo, D. 2018; 26 (7): 1314–23


    Canonical correlation analysis (CCA) is a state-of-the-art method for frequency recognition in steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems. Various extended methods have been developed, and among them, a combination of CCA and individual-template-based CCA has achieved the best performance. However, CCA requires the canonical vectors to be orthogonal, which may not be a reasonable assumption for EEG analysis. In this paper, we propose using correlated component analysis (CORRCA) rather than CCA to implement frequency recognition. CORRCA can relax the constraint on the canonical vectors in CCA and generates the same projection vector for two multichannel EEG signals. Furthermore, we propose a two-stage method based on the basic CORRCA method (termed TSCORRCA). Evaluated on a benchmark dataset of 35 subjects, the experimental results demonstrate that CORRCA significantly outperformed CCA, and TSCORRCA obtained the best performance among the compared methods. This paper demonstrates that CORRCA-based methods have great potential for implementing high-performance SSVEP-based BCI systems.

    View details for DOI 10.1109/TNSRE.2018.2848222

    View details for PubMedID 29985141

  • PTSD Subtype Identification Based on Resting-State EEG Functional Connectivity Biomarkers Zhang, Y., Toll, R., Wu, W., Longwell, P., Shpigel, E., Abu Amara, D., Gonzalez, B., Mann, S., Hart, R., Marmar, C., Etkin, A. ELSEVIER SCIENCE INC. 2018: S141
  • Correlated Component Analysis for Enhancing the Performance of SSVEP-Based Brain-Computer Interface IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING Zhang, Y., Guo, D., Li, F., Yin, E., Zhang, Y., Li, P., Zhao, Q., Tanaka, T., Yao, D., Xu, P. 2018; 26 (5): 948–56


    A new method for steady-state visual evoked potential (SSVEP) frequency recognition is proposed to enhance the performance of SSVEP-based brain-computer interfaces (BCIs). Correlated component analysis (CORCA) is introduced, which was originally designed to find linear combinations of electrodes that are consistent across subjects and maximally correlated between them. We propose a CORCA algorithm to learn spatial filters from multiple blocks of individual training data for the SSVEP-based BCI scenario. The spatial filters are used to remove background noise by combining the multichannel electroencephalogram signals. We conduct a comparison between the proposed CORCA-based and the task-related component analysis (TRCA)-based methods using a 40-class SSVEP benchmark dataset recorded from 35 subjects. Our experimental study validates the efficiency of the CORCA-based method, and the extensive comparison results indicate that it significantly outperforms the TRCA-based method. This superior performance demonstrates that the proposed method holds promise for achieving satisfactory performance in SSVEP-based BCIs with a large number of targets.

    View details for DOI 10.1109/TNSRE.2018.2826541

    View details for Web of Science ID 000432011000003

    View details for PubMedID 29752229
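The core of correlated component analysis for a pair of repetitions, finding the spatial filter that maximizes the correlation between the filtered signals via a generalized eigendecomposition, can be sketched on synthetic data as follows (a Dmochowski-style two-block formulation; the paper's multi-block CORCA extends this idea):

```python
import numpy as np
from scipy.linalg import eigh

def corrca_filter(X1, X2):
    """Find the spatial filter w maximizing the correlation between
    w @ X1 and w @ X2 (trials given as channels x samples), via the
    generalized eigenproblem (R12 + R21) w = lambda (R11 + R22) w."""
    R11 = X1 @ X1.T
    R22 = X2 @ X2.T
    R12 = X1 @ X2.T
    vals, vecs = eigh(R12 + R12.T, R11 + R22)
    return vecs[:, -1]                   # filter with the largest eigenvalue

rng = np.random.default_rng(4)
n_ch, n_s = 6, 400
shared = np.sin(2 * np.pi * 12 * np.arange(n_s) / 250)   # component shared by both trials
mix = rng.uniform(0.5, 1.0, n_ch)
X1 = np.outer(mix, shared) + 0.8 * rng.standard_normal((n_ch, n_s))
X2 = np.outer(mix, shared) + 0.8 * rng.standard_normal((n_ch, n_s))
w = corrca_filter(X1, X2)
rho = abs(np.corrcoef(w @ X1, w @ X2)[0, 1])
```

Unlike CCA, the same filter w is applied to both signals, which is the relaxation of the orthogonality constraint the abstracts refer to.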

  • Naturalistic Clinical Monitoring of rTMS-Induced Plasticity With TMS-EEG Keller, C., Wu, W., Sarhadi, K., Zhang, Y., Kerwin, L., Bhati, M., Etkin, A. ELSEVIER SCIENCE INC. 2018: S195
  • Exploiting Convolutional Neural Networks With Deeply Local Description for Remote Sensing Image Classification IEEE ACCESS Liu, N., Wan, L., Zhang, Y., Zhou, T., Huo, H., Fang, T. 2018; 6: 11215–28
