I am a fourth-year clinical medical student at Stanford University School of Medicine. Here you will find information about my interests, including a list of my publications and projects. I completed my doctoral research on the training and evaluation of robotic surgical techniques with the Biorobotics Lab at the University of Washington in spring 2013. I am a co-founder of C-SATS, Inc., a surgical performance assessment company that uses expert reviews and the wisdom of the crowd to train surgeons and medical practitioners.


All Publications

  • Crowd-sourced assessment of surgical skills in cricothyrotomy procedure JOURNAL OF SURGICAL RESEARCH Aghdasi, N., Bly, R., White, L. W., Hannaford, B., Moe, K., Lendvay, T. S. 2015; 196 (2): 302-306


    Objective assessment of surgical skills is resource intensive and requires valuable time of expert surgeons. The goal of this study was to assess the ability of a large group of laypersons, using a crowd-sourcing tool, to grade a surgical procedure (cricothyrotomy) performed on a simulator. The grading included an assessment of the entire procedure by completing an objective assessment of technical skills survey. Two groups of graders were recruited as follows: (1) Amazon Mechanical Turk users and (2) three expert surgeons from the University of Washington Department of Otolaryngology. Graders were presented with a video of participants performing the procedure on the simulator and were asked to grade the video using the objective assessment of technical skills questions. Mechanical Turk users were paid $0.50 for each completed survey. It took 10 hours to obtain all responses from 30 Mechanical Turk users for 26 training participants (26 videos/tasks), whereas it took 60 days for the three expert surgeons to complete the same 26 tasks. The assessment of surgical performance by a group (n = 30) of laypersons matched the assessment by a group (n = 3) of expert surgeons, with a good level of agreement determined by a Cronbach alpha coefficient of 0.83. We found crowd-sourcing was an efficient, accurate, and inexpensive method for skills assessment, with a good level of agreement with experts' grading.

    View details for DOI 10.1016/j.jss.2015.03.018

    View details for Web of Science ID 000355103700014

    View details for PubMedID 25888499
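    Several entries on this page report rater agreement as a Cronbach alpha coefficient. As a rough sketch of how that statistic is computed over a videos-by-raters score matrix (the scores below are hypothetical, not data from any of these studies):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_videos, n_raters) score matrix.

    Treats each rater as an "item"; alpha approaches 1 when the
    raters' scores rise and fall together across videos.
    """
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of per-video totals
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 6 videos, each scored by 3 raters on a 1-5 scale.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [1, 2, 1],
    [4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # → 0.95
```

    A coefficient near or above 0.8, as reported in the studies above, is conventionally read as good agreement between rater groups.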

  • Crowd-Sourced Assessment of Technical Skills: An Adjunct to Urology Resident Surgical Simulation Training JOURNAL OF ENDOUROLOGY Holst, D., Kowalewski, T. M., White, L. W., Brand, T. C., Harper, J. D., Sorenson, M. D., Kirsch, S., Lendvay, T. S. 2015; 29 (5): 604-609


    Crowdsourcing is the practice of obtaining services from a large group of people, typically an online community. Validated methods of evaluating surgical video are time-intensive, expensive, and involve the participation of multiple expert surgeons. We sought to obtain valid performance scores of urologic trainees and faculty on a dry-laboratory robotic surgery task module by using crowdsourcing through a web-based grading tool called Crowd Sourced Assessment of Technical Skill (CSATS). IRB approval was granted to test the technical skills grading accuracy of Mechanical Turk™ crowd-workers compared with three expert faculty surgeon graders. The two groups assessed dry-laboratory robotic surgical suturing performances of three urology residents (PGY-2, -4, -5) and two faculty using three performance domains from the validated Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. After an average of 2 hours 50 minutes, each of the five videos received 50 crowd-worker assessments. The inter-rater reliability (IRR) between the surgeons and the crowd was 0.91 using Cronbach's alpha statistic (confidence interval = 0.20-0.92), indicating an "excellent" level of agreement between the two groups. The crowd was able to discriminate surgical level, and both the crowd and the expert faculty surgeon graders scored one senior trainee's performance above a faculty member's performance. Surgery-naive crowd-workers can rapidly and accurately assess varying levels of surgical skill relative to a panel of faculty raters. The crowd provided rapid feedback and was inexpensive. CSATS may be a valuable adjunct to surgical simulation training as requirements for more granular and iterative performance tracking of trainees become mandated and commonplace.

    View details for DOI 10.1089/end.2014.0616

    View details for Web of Science ID 000354037000020

    View details for PubMedID 25356517

  • Crowd-Sourced Assessment of Technical Skill (C-SATS): A Valid Method for Discriminating Basic Robotic Surgery Skills JOURNAL OF ENDOUROLOGY White, L. W., Kowalewski, T. M., Dockter, R. L., Comstock, B., Hannaford, B., Lendvay, T. 2015


    A surgeon's skill in the operating room has been shown to correlate with a patient's clinical outcome. The prompt, accurate assessment of surgical skill remains a challenge, in part because expert faculty reviewers are often unavailable. By harnessing the power of large, readily available crowds via the Internet, rapid, accurate, low-cost assessments may be achieved. We hypothesized that assessments provided by crowdworkers highly correlate with expert surgeons' assessments. A group of 49 surgeons from two hospitals performed two dry-lab robotic surgical skill assessment tasks. The performance of these tasks was video recorded and posted online for evaluation using Amazon Mechanical Turk™. The surgical tasks in each video were graded by varying crowdworkers (n = 30) and experts (n = 3) using a modified Global Evaluative Assessment of Robotic Skills (GEARS) grading tool, and the mean scores were compared using Cronbach's alpha statistic. GEARS evaluations from the crowd were obtained for each video and task and compared with the GEARS ratings from the expert surgeons. The crowd-based performance scores agreed with the performance assessments by experts, with a Cronbach's alpha of 0.84 and 0.92 for the two tasks, respectively. The assessment of surgical skill by crowdworkers resulted in a high degree of agreement with the scores provided by expert surgeons in the evaluation of basic robotic surgical dry-lab tasks. Crowd responses cost less and were much faster to acquire. This study provides evidence that crowds may provide an adjunctive method for rapidly providing feedback on skills to training and practicing surgeons.

    View details for DOI 10.1089/end.2015.0191

    View details for PubMedID 26057232

  • Preliminary Articulable Probe Designs With RAVEN and Challenges: Image-Guided Robotic Surgery Multitool System JOURNAL OF MEDICAL DEVICES-TRANSACTIONS OF THE ASME Yoon, W. J., Velasquez, C. A., White, L. W., Hannaford, B., Kim, Y. S., Lendvay, T. S. 2014; 8 (1)

    View details for DOI 10.1115/1.4025908

    View details for Web of Science ID 000330355200014

  • Raven surgical robot training in preparation for da Vinci STUDIES IN HEALTH TECHNOLOGY AND INFORMATICS Glassman, D., White, L., Lewis, A., King, H., Clarke, A., Glassman, T., Comstock, B., Hannaford, B., Lendvay, T. S. 2014; 196: 135-141


    The rapid adoption of robot-assisted surgery challenges the pace at which adequate robotic training can occur, owing to limited access to the da Vinci robot. Thirty medical students completed a randomized controlled trial evaluating whether the Raven robot could be used as an alternative training tool for the Fundamentals of Laparoscopic Surgery (FLS) block transfer task on the da Vinci robot. Two groups, one trained on the da Vinci and one trained on the Raven, were tested on a criterion FLS block transfer task on the da Vinci. After robotic FLS block transfer proficiency training, there was no statistically significant difference in path length (p=0.39) or economy of motion scores (p=0.06) between the two groups, but those trained on the da Vinci did have faster task times (p=0.01). These results provide evidence for the value of using the Raven robot for training prior to using the da Vinci surgical system for similar tasks.

    View details for PubMedID 24732494

  • SurgTrak - A Universal Platform for Quantitative Surgical Data Capture JOURNAL OF MEDICAL DEVICES-TRANSACTIONS OF THE ASME Ruda, K., Beekman, D., White, L. W., Lendvay, T. S., Kowalewski, T. M. 2013; 7 (3)

    View details for DOI 10.1115/1.4024525

    View details for Web of Science ID 000326119200023

  • Virtual Reality Robotic Surgery Warm-Up Improves Task Performance in a Dry Laboratory Environment: A Prospective Randomized Controlled Study JOURNAL OF THE AMERICAN COLLEGE OF SURGEONS Lendvay, T. S., Brand, T. C., White, L., Kowalewski, T., Jonnadula, S., Mercer, L. D., Khorsand, D., Andros, J., Hannaford, B., Satava, R. M. 2013; 216 (6): 1181-1192


    Preoperative simulation warm-up has been shown to improve performance and reduce errors in novice and experienced surgeons, yet existing studies have only investigated conventional laparoscopy. We hypothesized that a brief virtual reality (VR) robotic warm-up would enhance robotic task performance and reduce errors. In a 2-center randomized trial, 51 residents and experienced minimally invasive surgery faculty in General Surgery, Urology, and Gynecology underwent a validated robotic surgery proficiency curriculum on a VR robotic simulator and on the da Vinci surgical robot (Intuitive Surgical Inc). Once they successfully achieved performance benchmarks, surgeons were randomized either to receive a 3- to 5-minute VR simulator warm-up or to read a leisure book for 10 minutes before performing similar and dissimilar (intracorporeal suturing) robotic surgery tasks. The primary outcomes compared were task time, tool path length, economy of motion, and technical and cognitive errors. Task time (-29.29 seconds; p = 0.001; 95% CI, -47.03 to -11.56), path length (-79.87 mm; p = 0.014; 95% CI, -144.48 to -15.25), and cognitive errors were reduced in the warm-up group compared with the control group for similar tasks. Global technical errors in intracorporeal suturing (0.32; p = 0.020; 95% CI, 0.06-0.59) were reduced after the dissimilar VR task. When surgeons were stratified by earlier robotic and laparoscopic clinical experience, the more experienced surgeons (n = 17) demonstrated significant improvements from warm-up in task time (-53.5 seconds; p = 0.001; 95% CI, -83.9 to -23.0) and economy of motion (0.63 mm/s; p = 0.007; 95% CI, 0.18-1.09); improvement in these metrics was not statistically significant in the less-experienced cohort (n = 34). We observed significant performance improvement and error reduction among surgeons of varying experience after VR warm-up for basic robotic surgery tasks. In addition, the VR warm-up reduced errors on a more complex task (robotic suturing), suggesting the generalizability of the warm-up.

    View details for DOI 10.1016/j.jamcollsurg.2013.02.012

    View details for Web of Science ID 000319039900020

    View details for PubMedID 23583618

  • Raven-II: An Open Platform for Surgical Robotics Research IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING Hannaford, B., Rosen, J., Friedman, D. W., King, H., Roan, P., Cheng, L., Glozman, D., Ma, J., Kosari, S. N., White, L. 2013; 60 (4): 954-959


    The Raven-II is a platform for collaborative research on advances in surgical robotics. Seven universities have begun research using this platform. The Raven-II system has two 3-DOF spherical positioning mechanisms capable of attaching interchangeable 4-DOF instruments. The Raven-II software is based on open standards such as Linux and ROS to maximally facilitate software development. The mechanism is robust enough for repeated experiments and animal surgery experiments, but is not engineered to sufficient safety standards for human use. Mechanisms in place for interaction among the user community and dissemination of results include an electronic forum, an online SVN software repository, and meetings and workshops at major robotics conferences.

    View details for DOI 10.1109/TBME.2012.2228858

    View details for Web of Science ID 000316812200011

    View details for PubMedID 23204264

  • Content and Construct Validation of a Robotic Surgery Curriculum Using an Electromagnetic Instrument Tracker JOURNAL OF UROLOGY Tausch, T. J., Kowalewski, T. M., White, L. W., McDonough, P. S., Brand, T. C., Lendvay, T. S. 2012; 188 (3): 919-923


    Rapid adoption of robot-assisted surgery has outpaced our ability to train novice roboticists. Objective metrics are required to adequately assess robotic surgical skills, and yet surrogates for proficiency, such as economy of motion and tool path metrics, are not readily accessible directly from the da Vinci® robot system. The trakSTAR™ Tool Tip Tracker is a widely available, cost-effective electromagnetic position sensing mechanism by which objective proficiency metrics can be quantified. We validated a robotic surgery curriculum using the trakSTAR device to objectively capture robotic task proficiency metrics. Through an institutional review board approved study, 10 subjects were recruited from 2 surgical experience groups (novice and experienced). All subjects completed 3 technical skills modules, including block transfer, intracorporeal suturing/knot tying (Fundamentals of Laparoscopic Surgery), and ring tower transfer, using the da Vinci robot with the trakSTAR device affixed to the robotic instruments. Recorded objective metrics included task time and path length, which were used to calculate economy of motion. Student t test statistics were computed using STATA®. The novice and experienced groups consisted of 5 subjects each. The experienced group outperformed the novice group in all 3 tasks. Experienced surgeons described the simulator platform as useful for training and agreed with incorporating it into a residency curriculum. Robotic surgery curricula can be validated with an off-the-shelf instrument tracking system. This platform allows surgical educators to objectively assess trainees and may provide credentialing offices with a means of objectively assessing any surgical staff member seeking robotic surgery privileges at an institution.

    View details for DOI 10.1016/j.juro.2012.05.005

    View details for Web of Science ID 000307551200091

    View details for PubMedID 22819403
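    Several of the studies above score performance using tool path length and economy of motion computed from tracked tool-tip positions (for example, from an electromagnetic tracker such as the trakSTAR). A minimal sketch of these metrics, assuming path length is the summed tip travel between samples and economy of motion is distance per unit time; the sample positions below are hypothetical:

```python
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """Total tool-tip travel: sum of distances between consecutive samples.

    positions: (n_samples, 3) array of x, y, z coordinates in mm.
    """
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

def economy_of_motion(positions: np.ndarray, task_time_s: float) -> float:
    """One common definition: total path length divided by task time (mm/s)."""
    return path_length(positions) / task_time_s

# Hypothetical tracker samples: the tip moves 5 mm, then 12 mm.
positions = np.array([[0.0, 0.0, 0.0],
                      [3.0, 4.0, 0.0],
                      [3.0, 4.0, 12.0]])
print(path_length(positions))             # total travel in mm → 17.0
print(economy_of_motion(positions, 8.5))  # mean speed in mm/s → 2.0
```

    Shorter path lengths and higher economy of motion for the same task are the proficiency surrogates these curricula use to separate novice from experienced surgeons.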