Administrative Appointments

  • Director, Interdisciplinary Program in Financial Mathematics, Stanford University (1999 - Present)
  • Chair, School of Statistics, Chinese Academy of Sciences (2003 - Present)
  • Co-director, Biostatistics Core, Stanford University Cancer Center (2004 - Present)
  • Steering Committee, Methods of Analysis Program in the Social Sciences, Stanford University (2005 - Present)

Honors & Awards

  • COPSS Award, Committee of Presidents of Statistical Societies (1983)
  • Fellow, John Simon Guggenheim Foundation (1983)
  • Member, Academia Sinica (1994)
  • Fellow, Center for Advanced Study in the Behavioral Sciences (1999)
  • Abraham Wald Prize, Sequential Analysis: Design Methods & Applications (2005)

Professional Education

  • B.A. with First Class Honors, University of Hong Kong, Mathematics (1967)
  • M.A., Columbia University, Mathematical Statistics (1970)
  • Ph.D., Columbia University, Mathematical Statistics (1971)

Research & Scholarship

Current Research and Scholarly Interests

Lai is widely recognized as a prolific leader in the field of sequential statistical analysis. Among his principal achievements is the development of a comprehensive theory of sequential tests of composite hypotheses, which unifies previous approaches and provides far-reaching extensions to cope with the practical complexities that arise in applications to group sequential clinical trials. In particular, this theory paved the way for his ground-breaking work with Shih on flexible and nearly optimal group sequential tests that can “self-tune” to the unknown parameters during the course of the trial, under pre-specified constraints on the maximum sample size and significance level.
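The group sequential testing idea can be illustrated with a minimal sketch. This is not Lai and Shih's self-tuning construction; it uses a generic O'Brien-Fleming-shaped stopping boundary, a known unit variance, and an assumed critical constant, with illustrative function names:

```python
import math
import random

def group_sequential_test(draw_group, k_looks=5, n_per_look=20, c=2.04):
    """One-sided group sequential z-test of H0: mean <= 0, assuming known
    unit variance.  Reject at interim look k when
    z_k >= c * sqrt(k_looks / k), an O'Brien-Fleming-shaped boundary that
    is hardest to cross early and relaxes at later looks.  c = 2.04 is an
    assumed constant (not calibrated here) roughly controlling the overall
    type I error for 5 equally spaced looks."""
    obs = []
    for k in range(1, k_looks + 1):
        obs.extend(draw_group(n_per_look))    # accrue the next group
        z = sum(obs) / math.sqrt(len(obs))    # interim z-statistic
        if z >= c * math.sqrt(k_looks / k):
            return "reject H0", k, len(obs)   # early stopping for efficacy
    return "accept H0", k_looks, len(obs)

random.seed(1)
# A treatment effect of 0.5 standard deviations: the trial should
# usually stop well before the maximum sample size of 100.
decision, look, n = group_sequential_test(
    lambda m: [random.gauss(0.5, 1.0) for _ in range(m)])
print(decision, look, n)
```

The point of the design is visible in the output: under a real treatment effect the boundary is typically crossed at an interim look, so the trial uses fewer than the maximum 100 observations.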

Other major breakthroughs include (a) accurate confidence intervals following sequential tests by using an innovative resampling approach, (b) a definitive solution to the long-standing “multi-armed bandit problem”, and (c) the development of statistically and computationally efficient sequential change-point detection procedures in multivariate time series and stochastic systems, for applications to industrial quality control, fault detection in engineering systems and segmentation in computational biology.
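The bandit work in (b) concerns allocation rules based on upper confidence bounds. A minimal sketch of that idea follows; it uses the simple UCB1-style index rather than Lai and Robbins's exact construction, and all names are illustrative:

```python
import math
import random

def ucb_bandit(arm_means, horizon=5000, seed=0):
    """Play a multi-armed bandit with Bernoulli arms.  At each round pull
    the arm maximizing the index  mean_i + sqrt(2 * ln(t) / n_i),
    so rarely tried arms get an exploration bonus that shrinks as
    evidence accumulates."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # pulls per arm
    sums = [0.0] * k      # total reward per arm
    for t in range(1, horizon + 1):
        if t <= k:        # initialization: pull each arm once
            i = t - 1
        else:
            i = max(range(k), key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        reward = 1.0 if rng.random() < arm_means[i] else 0.0
        counts[i] += 1
        sums[i] += reward
    return counts

counts = ucb_bandit([0.3, 0.5, 0.7])
print(counts)  # the best (0.7) arm should receive most of the pulls
```

The hallmark of an efficient rule is that pulls of inferior arms grow only logarithmically in the horizon, so almost all of the 5000 rounds go to the best arm.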

Besides sequential analysis, Lai has made ground-breaking contributions to (i) stochastic approximation and recursive estimation, (ii) adaptive control of linear stochastic systems and Markov decision processes, (iii) saddlepoint approximations and boundary-crossing probabilities in Markov random walks and random fields, and (iv) survival analysis, in particular rank- and M-estimators in regression models when the response variable is subject to censoring and truncation, and interim analysis of clinical trials with failure-time endpoints.
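Area (i) builds on the classical Robbins-Monro recursion for finding the root of a regression function observed with noise. A minimal sketch, with an illustrative test function and assumed step sizes a_n = a/n:

```python
import random

def robbins_monro(noisy_g, theta0=0.0, a=1.0, n_iter=20000, seed=0):
    """Robbins-Monro recursion  theta_{n+1} = theta_n - (a/n) * Y_n,
    where Y_n is a noisy observation of g(theta_n).  With step sizes
    satisfying sum a_n = inf and sum a_n^2 < inf, the iterates converge
    to the root of g under classical conditions."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_iter + 1):
        y = noisy_g(theta, rng)     # noisy measurement of g at theta
        theta -= (a / n) * y        # stochastic approximation step
    return theta

# Find the root of g(theta) = theta - 2, observed through N(0, 1) noise.
root = robbins_monro(lambda th, rng: (th - 2.0) + rng.gauss(0.0, 1.0))
print(root)
```

Even though each individual measurement is dominated by noise, the diminishing steps average it out and the iterate settles near the true root theta = 2.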


Journal Articles

  • Sequential design of phase II-III cancer trials. Statistics in Medicine. Lai, T. L., Lavori, P. W., Shih, M. 2012; 31(18): 1944-1960.


    Although traditional phase II cancer trials are usually single-arm, with tumor response as the endpoint, and phase III trials are randomized and incorporate interim analyses with progression-free survival or another failure time as the endpoint, this paper proposes a new approach that seamlessly expands a randomized phase II study of response rate into a randomized phase III study of time to failure. This approach builds on advances in group sequential designs and joint modeling of the response rate and time to event. The joint modeling is reflected in the primary and secondary objectives of the trial, and the sequential design allows the trial to adapt to increasing information on response and survival patterns during its course, and to stop early either for conclusive evidence on the efficacy of the experimental treatment or for futility in continuing the trial to demonstrate it, on the basis of the data collected so far.

    View details for DOI 10.1002/sim.5346

    View details for Web of Science ID 000306471100004

    View details for PubMedID 22422502

  • Clinical trial designs for testing biomarker-based personalized therapies. Clinical Trials. Lai, T. L., Lavori, P. W., Shih, M. I., Sikic, B. I. 2012; 9(2): 141-154.


    Advances in molecular therapeutics in the past decade have opened up new possibilities for treating cancer patients with personalized therapies, using biomarkers to determine which treatments are most likely to benefit them, but there are difficulties and unresolved issues in the development and validation of biomarker-based personalized therapies. We develop a new clinical trial design to address some of these issues. The goal is to capture the strengths of the frequentist and Bayesian approaches proposed in the recent literature and to circumvent their limitations. We use generalized likelihood ratio tests of the intersection null and enriched strategy null hypotheses to derive a novel clinical trial design for the problem of advancing promising biomarker-guided strategies toward eventual validation. We also investigate the usefulness of adaptive randomization (AR) and futility stopping proposed in the recent literature. Simulation studies demonstrate the advantages of testing both the narrowly focused enriched strategy null hypothesis related to validating a proposed strategy and the intersection null hypothesis that can accommodate a potentially successful strategy. AR and early termination of ineffective treatments offer increased probability of receiving the preferred treatment and better response rates for patients in the trial, at the expense of more complicated inference under small-to-moderate total sample sizes and some reduction in power. The binary response used in the development phase may not be a reliable indicator of treatment benefit on long-term clinical outcomes. In the proposed design, the biomarker-guided strategy (BGS) is not compared to 'standard of care', such as physician's choice that may be informed by patient characteristics; therefore, a positive result does not imply superiority of the BGS to 'standard of care'. The proposed design and tests are valid asymptotically, and simulations are used to examine small-to-moderate sample properties. Innovative clinical trial designs are needed to address the difficulties and issues in the development and validation of biomarker-based personalized therapies. The article shows the advantages of using likelihood inference and interim analysis to meet the challenges posed by the required sample sizes and by the constantly evolving biomarker landscape and genomic and proteomic technologies.

    View details for DOI 10.1177/1740774512437252

    View details for Web of Science ID 000302636500001

    View details for PubMedID 22397801

  • Efficient Adaptive Randomization and Stopping Rules in Multi-arm Clinical Trials for Testing a New Treatment. Sequential Analysis: Design Methods and Applications. Lai, T. L., Liao, O. Y. 2012; 31(4): 441-457.
  • Adaptive Trial Designs. Annual Review of Pharmacology and Toxicology. Lai, T. L., Lavori, P. W., Shih, M. 2012; 52: 101-110.


    We review adaptive designs for clinical trials, giving special attention to the control of the Type I error in late-phase confirmatory trials, when the trial planner wishes to adjust the final sample size of the study in response to an unblinded analysis of interim estimates of treatment effects. We point out that there is considerable inefficiency in using the adaptive designs that employ conditional power calculations to reestimate the sample size and that maintain the Type I error by using certain weighted test statistics. Although these adaptive designs have little advantage over familiar group-sequential designs, our review also describes recent developments in adaptive designs that are both flexible and efficient. We also discuss the use of Bayesian designs, when the context of use demands control over operating characteristics (Type I and II errors) and correction of the bias of estimated treatment effects.

    View details for DOI 10.1146/annurev-pharmtox-010611-134504

    View details for Web of Science ID 000301839600006

    View details for PubMedID 21838549

  • Sequential Importance Sampling and Resampling for Dynamic Portfolio Credit Risk. Operations Research. Deng, S., Giesecke, K., Lai, T. L. 2012; 60(1): 78-91.
  • Futility stopping in clinical trials. Statistics and Its Interface. He, P., Lai, T. L., Liao, O. Y. 2012; 5(4): 415-423.

    View details for DOI 10.1214/10-AAP758

    View details for Web of Science ID 000298249900009

  • Evaluating probability forecasts. Annals of Statistics. Lai, T. L., Gross, S. T., Shen, D. B. 2011; 39(5): 2356-2382.

    View details for DOI 10.1214/11-AOS902

    View details for Web of Science ID 000299186500007

  • Sequential generalized likelihood ratio tests for vaccine safety evaluation. Statistics in Medicine. Shih, M., Lai, T. L., Heyse, J. F., Chen, J. 2010; 29(26): 2698-2708.


    The evaluation of vaccine safety involves pre-clinical animal studies, pre-licensure randomized clinical trials, and post-licensure safety studies. Sequential design and analysis are of particular interest because they allow early termination of the trial or quick detection that the vaccine exceeds a prescribed bound on the adverse event rate. After a review of the recent developments in this area, we propose a new class of sequential generalized likelihood ratio tests for evaluating adverse event rates in two-armed pre-licensure clinical trials and single-armed post-licensure studies. The proposed approach is illustrated using data from the Rotavirus Efficacy and Safety Trial. Simulation studies of the performance of the proposed approach and other methods are also given.

    View details for DOI 10.1002/sim.4036

    View details for Web of Science ID 000284023800004

    View details for PubMedID 20799244

  • Stochastic segmentation models for array-based comparative genomic hybridization data analysis. Biostatistics. Lai, T. L., Xing, H., Zhang, N. 2008; 9(2): 290-307.


    Array-based comparative genomic hybridization (array-CGH) is a high throughput, high resolution technique for studying the genetics of cancer. Analysis of array-CGH data typically involves estimation of the underlying chromosome copy numbers from the log fluorescence ratios and segmenting the chromosome into regions with the same copy number at each location. We propose for the analysis of array-CGH data, a new stochastic segmentation model and an associated estimation procedure that has attractive statistical and computational properties. An important benefit of this Bayesian segmentation model is that it yields explicit formulas for posterior means, which can be used to estimate the signal directly without performing segmentation. Other quantities relating to the posterior distribution that are useful for providing confidence assessments of any given segmentation can also be estimated by using our method. We propose an approximation method whose computation time is linear in sequence length which makes our method practically applicable to the new higher density arrays. Simulation studies and applications to real array-CGH data illustrate the advantages of the proposed approach.

    View details for DOI 10.1093/biostatistics/kxm031

    View details for Web of Science ID 000254293400007

    View details for PubMedID 17855472

  • A combined superiority and non-inferiority approach to multiple endpoints in clinical trials. Statistics in Medicine. Bloch, D. A., Lai, T. L., Su, Z., Tubert-Bitter, P. 2007; 26(6): 1193-1207.


    Treatment comparisons in clinical trials often involve multiple endpoints. By making use of bootstrap tests, we develop a new non-parametric approach to multiple-endpoint testing that can be used to demonstrate non-inferiority of a new treatment for all endpoints and superiority for some endpoint when it is compared to an active control. It is shown that this approach does not incur a large multiplicity cost in sample size to achieve reasonable power and that it can incorporate complex dependencies in the multivariate distributions of all outcome variables for the two treatments via bootstrap resampling.

    View details for DOI 10.1002/sim.2611

    View details for Web of Science ID 000244903400002

    View details for PubMedID 16791905

  • Confidence intervals for survival quantiles in the Cox regression model. Lifetime Data Analysis. Lai, T. L., Su, Z. 2006; 12(4): 407-419.


    Median survival times and their associated confidence intervals are often used to summarize the survival outcome of a group of patients in clinical trials with failure-time endpoints. Although there is an extensive literature on this topic for the case in which the patients come from a homogeneous population, few papers have dealt with the case in which covariates are present as in the proportional hazards model. In this paper we propose a new approach to this problem and demonstrate its advantages over existing methods, not only for the proportional hazards model but also for the widely studied cases where covariates are absent and where there is no censoring. As an illustration, we apply it to the Stanford Heart Transplant data. Asymptotic theory and simulation studies show that the proposed method indeed yields confidence intervals and bands with accurate coverage errors.

    View details for DOI 10.1007/s10985-006-9024-y

    View details for Web of Science ID 000242998200002

    View details for PubMedID 17053975

  • Modified Haybittle-Peto group sequential designs for testing superiority and non-inferiority hypotheses in clinical trials. Statistics in Medicine. Lai, T. L., Shih, M. C., Zhu, G. R. 2006; 25(7): 1149-1167.


    In designing an active controlled clinical trial, one sometimes has to choose between a superiority objective (to demonstrate that a new treatment is more effective than an active control therapy) and a non-inferiority objective (to demonstrate that it is no worse than the active control within some pre-specified non-inferiority margin). It is often difficult to decide which study objective should be undertaken at the planning stage when one does not have actual data on the comparative advantage of the new treatment. By making use of recent advances in the theory of efficient group sequential tests, we show how this difficulty can be resolved by a flexible group sequential design that can adaptively choose between the superiority and non-inferiority objectives during interim analyses. While maintaining the type I error probability at a pre-specified level, the proposed test is shown to have power advantage and/or sample size saving over fixed sample size tests for either only superiority or non-inferiority, and over other group sequential designs in the literature.

    View details for DOI 10.1002/sim.2357

    View details for Web of Science ID 000236528500005

    View details for PubMedID 16189814

  • A new approach to modeling covariate effects and individualization in population pharmacokinetics-pharmacodynamics. Journal of Pharmacokinetics and Pharmacodynamics. Lai, T. L., Shih, M. C., Wong, S. P. 2006; 33(1): 49-74.


    By combining Laplace's approximation and Monte Carlo methods to evaluate multiple integrals, this paper develops a new approach to estimation in nonlinear mixed effects models that are widely used in population pharmacokinetics and pharmacodynamics. Estimation here involves not only estimating the model parameters from Phase I and II studies but also using the fitted model to estimate the concentration versus time curve or the drug effects of a subject who has covariate information but sparse measurements. Because of its computational tractability, the proposed approach can model the covariate effects nonparametrically by using (i) regression splines or neural networks as basis functions and (ii) AIC or BIC for model selection. Its computational and statistical advantages are illustrated in simulation studies and in Phase I trials.

    View details for DOI 10.1007/s10928-005-9000-2

    View details for Web of Science ID 000236842900003

    View details for PubMedID 16402288

  • One-sided tests in clinical trials with multiple endpoints. Biometrics. Bloch, D. A., Lai, T. L., Tubert-Bitter, P. 2001; 57(4): 1039-1047.


    Treatment comparisons in clinical trials often involve several endpoints. For example, one might wish to demonstrate that a new treatment is superior to the current standard for some components of the multivariate response vector and is not inferior, modulo biologically unimportant differences, to the standard treatment for all other components. We introduce a new approach to multiple-endpoint testing that incorporates the essential univariate and multivariate features of the treatment effects. This approach is compared with existing methods in a simulation study and applied to data on rheumatoid arthritis patients receiving one of two treatments.

    View details for Web of Science ID 000174956800006

    View details for PubMedID 11764242

  • Computer-based screening of patients with HIV/AIDS for clinical-trial eligibility. The Online Journal of Current Clinical Trials. Carlson, R. W., Tu, S. W., Lane, N. M., Lai, T. L., Kemper, C. A., Musen, M. A., Shortliffe, E. H. 1995; Doc No 179: [3347 words, 32 paragraphs].


    To assess the potential effect of a computer-based system on accrual to clinical trials, we have developed methodology to identify retrospectively and prospectively patients who are eligible or potentially eligible for protocols. The design was retrospective chart abstraction with computer screening of data for potential protocol eligibility, in a county-operated clinic serving human immunodeficiency virus (HIV) positive patients with or without acquired immune deficiency syndrome (AIDS). The patients were a randomly selected group of 60 who were HIV-infected, 30 of whom had an AIDS-defining diagnosis. Using a computer-based eligibility screening system, for each clinic visit and hospitalization, patients were categorized as eligible, potentially eligible, or ineligible for each of the 17 protocols active during the 7-month study period; reasons for ineligibility were categorized. None of the patients was enrolled on a clinical trial during the 7-month period. Thirteen patients were identified as eligible for a protocol; three patients were eligible for two different protocols; and one patient was eligible for the same protocol during two different time intervals. Fifty-four patients were identified as potentially eligible for a total of 165 accrual opportunities, but important information, such as the result of a required laboratory test, was missing, so that eligibility could not be determined unequivocally. Ineligibility for protocol was determined in 414 (35%) potential opportunities based only on conditions that were amenable to modification, such as the use of concurrent medications; 194 (17%) failed only laboratory tests or subjective determinations not routinely performed; and 346 (29%) failed only routine laboratory tests. There are substantial numbers of eligible and potentially eligible patients who are not enrolled or evaluated for enrollment in prospective clinical trials. Computer-based eligibility screening, when coupled with a computer-based medical record, offers the potential to identify patients eligible or potentially eligible for clinical trials, to assist in the selection of protocol eligibility criteria, and to make accrual estimates.

    View details for PubMedID 7719564



    After a brief review of commonly used methods for parameter estimation from ligand-binding data in the biochemistry literature, we propose some diagnostic checks and statistical tests of the underlying assumptions and develop methods for evaluating the biases and variances of the estimates and for constructing confidence intervals. Examples on the analysis of data from two radioligand-binding experiments are presented to illustrate these methods.

    View details for Web of Science ID A1994PL74800017

    View details for PubMedID 7981398

