Generative Artificial Intelligence (AI) Policy

Artificial Intelligence (AI) refers to a broad range of technologies, such as machine learning (ML), natural language processing (NLP), and large language models (LLMs), that enable computers and machines to perform complex tasks by interpreting data, recognizing patterns, making decisions, and solving problems. In the context of medicine, AI encompasses a range of technologies and applications that enhance clinical decision-making, improve patient care, and streamline healthcare operations.

This policy aims to create a framework for the responsible integration of generative AI in the MD and MSPA Programs, ensuring that students develop the necessary skills and knowledge to utilize these tools effectively while maintaining the highest standards of academic integrity and patient care.

The MD and MSPA Programs emphasize the importance of developing clinical reasoning and skills. While artificial intelligence (AI) can serve as a valuable resource for learning and exploration, it is crucial that students engage actively with the material, applying their knowledge and critical thinking to clinical scenarios. AI can be a collaborative tool that enhances understanding and skills, but it is not a substitute for the rigorous training and cognitive skills necessary for effective medical practice.

The intent is to support the judicious use of AI as an educational tool while safeguarding academic integrity, patient confidentiality, and the development of critical reasoning and clinical skills. AI should enhance, not replace, the foundational knowledge and clinical competencies essential for medical practice.

The guidance and policy will be reviewed and revised as necessary to reflect changes in technology, educational practices, and institutional standards.

Permitted Uses and Privacy

  • Learning and Clarification: Students may utilize AI to enhance their understanding of medical concepts and definitions, and for grammar and style editing, provided such use does not conflict with specific assignment instructions or with faculty-authorized tasks as detailed below.
  • Faculty-Authorized Tasks: Any use of AI must align with explicit faculty guidance as outlined in course syllabi. Faculty may designate appropriate contexts for AI use in assignments and projects.
  • Platforms: Stanford has several tools that have been specifically designed for use within the Stanford Medicine community that are compliant with institutional policies and suitable for handling sensitive information (e.g., patient health information). When permitted for use, please use these tools to maintain confidentiality and compliance with Stanford Medicine policies.


Responsible Use of AI Tools

While we recognize that restricting the use of AI tools cannot be strictly enforced, we expect students to optimize their learning through a balanced approach that combines AI use with traditional methods. Students will be held accountable for demonstrating competency in clinical skills and reasoning without the aid of AI tools.

  • Assessments: The use of AI tools is discouraged for any activity in which students are evaluated on their own knowledge or skills, unless faculty explicitly grant permission. Unless explicitly advised otherwise by individual instructors, the use of AI tools is prohibited for any “closed book” exams or assignments where internet use is restricted.
  • Clinical Documentation: Use of AI for clinical documentation must adhere to the policies and guidelines of the clinical site and comply with HIPAA regulations. Students should refrain from using AI for clinical documentation, including writing Histories and Physicals (H&Ps), unless the tool is supported by the Electronic Health Record and permitted by the clerkship.
  • Patient Confidential Data: The use of patient-identifying information or protected health information (PHI) in public AI tools is strictly forbidden.
  • Note that individual policies around AI use may vary by course and clerkship; this information will be outlined in the course syllabus. Students are responsible for verifying with faculty whether the use of generative AI tools is permissible for specific assignments or tasks.


Acknowledgement and Citation

  • Any substantial contributions from AI tools on assignments, presentations, and scholarly abstracts or proposals must be disclosed and properly cited. This includes the tool name, model/version, date, query, and output excerpt. For example: ChatGPT (GPT-4; June 10, 2025), prompt: “Explain ARDS pathophysiology,” output used in Section 2.
  • AI tools may not be listed as authors on scholarly works.


Accuracy & Accountability

  • Students are responsible for any AI-generated content they submit, even if it is flawed or biased. Students are expected to verify information and judge its quality.


Ethics & Education

  • Students should develop AI literacy, understanding the limitations, potential for inaccuracies (hallucinations), biases, and privacy risks associated with AI.
  • AI education will be integrated into the curriculum to provide students with additional guidance and training on the proper and effective use of AI.


All MD and MSPA students are expected to comply with this policy. Violations may result in disciplinary action in accordance with the School of Medicine’s policies on academic integrity and professional conduct.

For more information, refer to Stanford University's Generative AI Policy Guidance.

updated August 2025