AI Policy

Introduction

Artificial intelligence (AI) has the potential to benefit student education and healthcare in exciting and transformative ways.  As AI technologies become increasingly integrated into healthcare, education, and research, this communication aims to provide clear guidelines to ensure AI is used in alignment with the vision, mission, and values of Marshall B. Ketchum University (MBKU).

Purpose

This communication establishes standards for the responsible, ethical, and transparent use of AI at MBKU.  It applies to all university students, faculty, staff, and administrators.

Guiding Principles

AI use at MBKU must reflect the institution's core values of accountability, caring, excellence, innovation, and respect.

The adoption of AI technologies should support academic integrity, adhere to existing policies, prioritize patient safety, and promote ethical responsibility.  It should also enhance, rather than diminish, the MBKU institutional learning outcomes of communication, analytical reasoning and problem solving, interprofessional health education, health information literacy and lifelong learning, and professionalism.

To that end, MBKU community members are expected to respect and adhere to the following user responsibilities:

User Responsibilities:

  • Users must ensure that AI-generated or AI-informed content is accurate, credible, and transparently identified.
  • AI tools must be used in compliance with all university policies, as well as applicable data security standards and federal regulations such as HIPAA and FERPA.
  • Uploading Protected Health Information (PHI) into AI tools is strictly prohibited without prior authorization from the Information Technology department.
  • Uploading password-protected information from MBKU platforms to AI tools is prohibited without prior authorization. Doing so could expose proprietary or sensitive information to unauthorized parties or the public.
  • AI may be used to support teaching and learning, for example:
    • Creating study questions for self-assessment and learning
    • Improving accessibility (e.g., Blackboard Ally resources)
    • Adaptive learning (e.g., personalizing learning experiences and tutoring for individual learners)
    • Image or video creation (e.g., teaching complex topics with procedural animations or 3D model integration)
  • Faculty may use AI to augment the grading of student assignments while abiding by FERPA requirements.
  • Faculty may use AI to help create course content, but are responsible for ensuring the accuracy and relevancy of the final course materials.
  • Faculty are encouraged to clearly communicate expectations regarding AI use in personal study, assignments, and examinations.
  • Faculty may use AI tools to help detect plagiarism.
  • Community members may use AI-powered screen reading or automatic closed-captioning for accessibility needs.

Research and Scholarship

  • Community members are expected to maintain transparency regarding the use of AI tools for research and to comply with academic journal policies, including proper citation when AI is used for writing or editorial assistance (see https://apastyle.apa.org/blog/cite-generative-ai-references and the AMA Manual of Style, 11th edition, Section 14.5.2).
  • AI may be employed in various aspects of scholarly work (e.g., data analysis, literature review, or experimental design); however, researchers remain fully responsible for ensuring the accuracy, originality, and integrity of that work.

Administration and Service

  • AI may be used to support students, faculty, staff, and administrative functions, provided its use adheres to all university policies and federal regulations. Examples include:
    • Writing assistance outside of academic assignments
    • Automating repetitive tasks
    • Enhancing the student or prospective student experience
    • Resource management