Texas A&M Tests AI to Audit Courses, Texas State Pushes Faculty to Rewrite Syllabi

In a move that has drawn both praise and criticism, Texas A&M University System officials are experimenting with artificial intelligence to review course content across its 12 campuses.

A&M’s AI‑Powered Audit

On September 25, Korry Castillo, the system’s chief strategy officer, asked an AI tool how many courses at one regional campus discuss feminism. Each time she phrased the query differently, the tool returned a different number. “Either the tool is learning from my previous queries,” Castillo wrote, “or we need to fine tune our requests to get the best results.”

Castillo’s test comes as the system works to audit courses after a controversial gender‑identity lesson at the flagship campus led to a professor’s firing and the university president’s resignation. The university said the lesson’s content did not match its catalog description, and the audit is meant to ensure students know what they are signing up for.

Vice Chancellor for Academic Affairs James Hallmark explained that the audit will use “AI‑assisted tools” to examine course data under “consistent, evidence‑based criteria.” Regent Sam Torn called the new process “real governance,” noting that Texas A&M was “stepping up first, setting the model that others will follow.”

The board approved rules that require presidents to sign off on any course that could be seen as advocating for “race and gender ideology” and forbid professors from teaching material not on the approved syllabus.

Chris Bryan, the system’s vice chancellor for marketing and communications, said Texas A&M is using OpenAI services through an existing subscription to aid the audit. He added that “any decisions about appropriateness, alignment with degree programs, or student outcomes will be made by people, not software.”

Castillo told colleagues that about 20 system employees would use the tool to make hundreds of queries each semester. When she reported the varying results, deputy chief information officer Mark Schultz warned that the tool carries “an inherent risk of inaccuracy.” Schultz said some inaccuracies could be mitigated with training but probably could not be fully eliminated. Bryan said the system is still testing baseline conversations to validate the tool’s accuracy, relevance, and repeatability.
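The article does not describe what those baseline conversations involve. As a rough illustration only, a repeatability check might resemble the sketch below, which sends several paraphrases of the same counting question to a model and tallies the answers. The OpenAI chat completions API usage, the “gpt-4o” model name, the paraphrase list, and the expected count are all assumptions for illustration, not details of the system’s actual process.

```python
# Minimal sketch of a repeatability check for an AI-assisted audit query.
# Assumptions (not from the article): direct use of the OpenAI chat
# completions API, "gpt-4o" as the model, and a human-verified count.
import re
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PARAPHRASES = [
    "How many courses in this catalog excerpt discuss feminism?",
    "Count the courses below whose descriptions mention feminism.",
    "In the following course list, how many cover feminism?",
]
CATALOG_EXCERPT = "..."  # course titles and descriptions under review
EXPECTED = 4             # hypothetical count verified by a human reviewer

def ask(question: str) -> int | None:
    """Send one phrasing of the query and parse the first integer in the reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # reduces, but does not eliminate, run-to-run variation
        messages=[{
            "role": "user",
            "content": f"{question}\n\n{CATALOG_EXCERPT}\n\nAnswer with a number.",
        }],
    )
    match = re.search(r"\d+", resp.choices[0].message.content or "")
    return int(match.group()) if match else None

# Ask each phrasing three times; a repeatable tool would concentrate
# all answers on EXPECTED rather than scattering across values.
answers = Counter(ask(p) for p in PARAPHRASES for _ in range(3))
print(answers, "expected:", EXPECTED)
```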

Expert Views on AI Reliability

Computational linguist Emily Bender of the University of Washington explained that AI tools generate responses by predicting the next word, not by understanding content. “These systems are fundamentally systems for repeatedly answering the question ‘what is the likely next word’ and that’s it,” Bender said.
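To make Bender’s point concrete, here is a toy sketch of generation as repeated next-word prediction. It uses simple bigram counts rather than the neural networks inside real systems (an illustrative simplification, with a made-up corpus), but the loop has the same shape she describes: pick a likely next word, append it, repeat.

```python
# Toy illustration: text generation as repeated next-word prediction.
# Real LLMs use neural networks over subword tokens, but the objective
# is the same shape: emit a likely continuation, not "understand" content.
from collections import Counter, defaultdict

corpus = (
    "the course covers feminism and film . "
    "the course covers social theory . "
    "the syllabus lists the course outcomes ."
).split()

# Count word bigrams: which word tends to follow which.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word: str, length: int = 6) -> str:
    """Greedily emit the single most likely next word, repeatedly."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the course covers feminism and film ."
```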

She noted that small changes in phrasing can produce different results, and users can nudge the model toward desired answers. Chris Gilliard, co‑director of the Critical Internet Studies Institute, warned that the model’s “sycophancy” lets users shape its responses. He said, “Very often, a thing that happens when people use this technology is if you chide or correct the machine, it will say, ‘Oh, I’m sorry’ or like ‘you’re right,’ so you can often goad these systems into getting the answer you desire.”

Baylor University professor T. Philip Nichols called keyword searches a “blunt instrument” that can’t capture how a topic is actually taught. He said, “Those pedagogical choices of an instructor might not be present in a syllabus, so to just feed that into a chatbot and say, ‘Is this topic mentioned?’ tells you nothing about how it’s talked about or in what way.”

Philosophy professor Martin Peterson, who studies the ethics of technology, said faculty have not been asked to weigh in on the tool, including members of the university’s AI council. He added that the council’s ethics and governance committee is charged with setting standards for responsible AI use. Peterson said he is “a little more open to the idea that some such tool could perhaps be used,” but cautioned that “we have to do our homework before we start using the tool.”

Texas State’s AI‑Assisted Course Rewrites

At Texas State University, administrators have ordered faculty to revise syllabi and suggested using an AI writing assistant. In October, 280 courses were flagged for review. Faculty were told to rewrite titles, descriptions, and learning outcomes to remove wording the university said was not neutral.

The College of Liberal Arts flagged courses such as Intro to Diversity, Social Inequality, Freedom in America, Southwest in Film, and Chinese‑English Translation for neutrality concerns. Faculty had until December 10 to complete rewrites, with a second‑level review in January and a full catalog evaluation by June.

Administrators provided a guide that discouraged learning outcomes that “measure or require belief, attitude or activism.” They also supplied a prompt for the AI assistant that instructs the chatbot to “identify any language that signals advocacy, prescriptive conclusions, affective outcomes or ideological commitments” and generate three alternative versions that remove those elements.
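Only the quoted fragments of that prompt are public. A hypothetical reconstruction of how such an instruction might be sent to a chat model could look like the sketch below; the template wording around the quotes, the model name, and the API call are illustrative assumptions, not Texas State’s actual tooling.

```python
# Hypothetical reconstruction of a syllabus-rewrite prompt. Only the quoted
# phrases come from the university's guide; the surrounding template, the
# "gpt-4o" model name, and the API usage are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = (
    "Review the course description below. Identify any language that signals "
    "advocacy, prescriptive conclusions, affective outcomes or ideological "
    "commitments, then generate three alternative versions that remove those "
    "elements.\n\nCourse description:\n{description}"
)

def rewrite_description(description: str) -> str:
    """Return the model's three candidate rewrites as a single text block."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(description=description),
        }],
    )
    return resp.choices[0].message.content
```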

Jayme Blaschke, assistant director of media relations, described the internal review as “thorough” and “deliberative,” but did not say whether any classes have been revised or removed, and declined to explain how courses were initially flagged or who wrote the neutrality expectations.

Faculty reactions have highlighted concerns about the shift in curriculum control. Assistant professor of anthropology Aimee Villarreal, president of Texas State’s American Association of University Professors chapter, said the audit allows administrators to monitor how faculty describe their disciplines. She noted the pressure to revise quickly or risk removal from the spring schedule has pushed some faculty toward using the AI assistant. Villarreal said, “I love what I do, and it’s very sad to see the core of what I do being undermined in this way.”

Nichols warned that the trend represents a larger threat: “This is a kind of de‑professionalizing of what we do in classrooms, where we’re narrowing the horizon of what’s possible.” He added that giving up this autonomy would undermine the purpose of universities.

Key Takeaways

  • Texas A&M is testing an AI tool to audit courses, but the tool’s inconsistent results raise concerns about accuracy.
  • Texas State has ordered faculty to rewrite syllabi, offering an AI assistant to help remove politically charged language.
  • Experts caution that AI systems do not understand content and can be nudged to produce desired answers, potentially shifting control from faculty to administrators.

The two universities’ experiments illustrate a growing trend of using AI to scrutinize academic content, sparking debate over the balance between transparency, accountability, and academic freedom.

Closing

As Texas A&M and Texas State move forward with AI‑driven audits and revisions, stakeholders will watch closely to see whether these tools can reliably support curricular oversight without eroding the professional judgment of faculty.
