Happy Monday! Diverse university policies on AI
My weekly exploration of readings on AI! This week I am focusing on the AI policies that universities have adopted.
Brown has appointed an Associate Provost for AI: Professor Michael Littman will take on the position full-time in the summer, focusing on advancing AI research and expanding educational opportunities and implementation across the University. At the Critical AI Learning Community, we had the unique opportunity to have a conversation with him before he begins his role full-time. Ahead of that conversation, I wanted to educate myself on, and share with you, the AI policies other universities have adopted. Considering that we don’t have a unified university plan or policy for AI yet (and don’t know whether one will emerge under Professor Littman’s guidance), I thought it would be valuable to see what other schools have already done. I found many different policies; here are the four I liked most. All allow AI usage, but in pretty different ways.
Institutional integration and research leadership
The University of Michigan (Go Blue :) launched AI Institutes at Michigan under Vision 2034, positioning AI as a pillar for cross-disciplinary research and societal impact. Its overarching goal is to become a hub for innovation in medicine, engineering, social science, and the arts. The University aims to expand AI and data science research, coordinate internal and external partnerships, shape ethical standards, and secure talent and funding. It is also building out infrastructure to coordinate existing centers such as the AI Lab, MIDAS, and the Quantum Research Institute.
Guided ethical use and program-level autonomy
The University of North Carolina has embraced the motto: “AI should help you think. Not think for you.” Its policy provides guidance across student use, teaching, and research, while emphasizing that individual programs may tailor rules for theses, comprehensive exams, and TA conduct. The policy addresses concerns about data leakage, publishing limitations, and IP challenges; UNC also specifies that students must discuss AI use with their advisors and disclose it in citations, appendices, and methods sections.
Instructor discretion and policy toolkits
Duke’s policy emphasizes instructor discretion, with the university encouraging faculty to develop individualized policies on generative AI use, grounded in their course objectives, disciplinary norms, and pedagogical values. While the Duke Community Standard has been updated to classify unauthorized AI use as a form of academic dishonesty, faculty are provided with frameworks, examples, and AI literacy resources to inform and support their own decisions, ranging from full prohibition to permissive use with acknowledgment. This flexible model encourages critical conversations about integrity, intellectual development, and the role of AI in the learning process.
Developing closed-loop AI systems for teaching innovation
UCLA is pioneering the development and deployment of the AI platform Kudu to support course delivery. An upcoming Comp Lit 2BW course will be the first in the College Division of Humanities where faculty will use Kudu to generate textbooks, assignments, and TA resources. The goal is to create a consistent, accessible learning environment that enhances analytical engagement while minimizing dependence on public generative AI tools like ChatGPT. This model offers guardrails for safe and context-specific AI use, while streamlining content delivery and freeing instructors to focus on critical thinking and student mentorship.
These four examples show very different approaches to AI implementation in universities, though none fully rejects it. All underscore the need for adaptable, context-sensitive frameworks that still uphold shared principles of transparency, critical thinking, and responsible use; I am sure we need the same at Brown.
Also, an interesting dimension of the discussion is that amid the AI race, the role of higher education becomes even more urgent: aligning short-term responsiveness with long-term societal vision while forging partnerships across government, academia, and industry. Bouncing off last week’s newsletter about AI and Work, I truly believe that Brown must teach students to work with AI, because in the workplace we are already expected to do so. University should be the place where we learn how to work with technology ethically, safely, and with agency over our own product… but still work with it! I am excited and curious to see where Brown will take AI leadership!