‘We can’t put our heads in the sand’
CMU Academic Senate revisits AI's future
As artificial intelligence continues to reshape higher education, Central Michigan University faculty leaders and administrators discussed the university’s approach to AI, highlighting both its opportunities and its risks to teaching methods during the Academic Senate meeting on April 7.
Ben Andera, CMU’s executive director of academic and research computing and newly appointed special advisor for artificial intelligence, presented the university’s evolving AI strategy.
“We’ve been talking about embracing AI,” he said. “The idea is that we can’t put our heads in the sand. It’s not going away.”
Andera said the purpose of his new role is not to “chase a trend,” but to help CMU move forward collectively rather than through isolated efforts across departments.
Building a campus-wide AI framework
Andera explained the work CMU has already done through its Embracing AI initiative, a grant-funded project that brought together groups focused on university policies, training and campus technology needs.
Those groups gave university leaders ten recommendations, including creating a plan for how CMU will use AI, setting rules and oversight for it, offering training based on different job roles and making sure the technology is secure.
One of the biggest immediate changes, Andera said, is that all CMU faculty, staff and students now have access to Microsoft Copilot through the university’s system.
“Every faculty, staff and student has access to Copilot chat,” Andera said. “You’ll have access to our enterprise data protection version, where you can put data in and know it’s protected.”
This means members of the CMU community can use the AI tool through a university-approved version designed to protect school-related information.
Faculty raise concerns over ethics and classroom impact
Several senators questioned whether the presentation sufficiently addressed the harms associated with AI.
Senator Alan Rudy, a faculty member in the School of Politics, Society, Justice and Public Service, pointed to issues with students’ overreliance on AI.
“I’m looking at students losing existing skills using AI,” he said. “I’m looking at the ecological nightmare of data centers and the environmental injustice of them.”
Rudy said that while AI can be a powerful analytical tool, the presentation did not visibly center those risks.
“I didn’t see a moment of that in this,” Rudy said. “I’m deeply concerned that this kind of issue isn’t an integral, invisible part of the project. It’s also being used to eliminate entry-level jobs for young professionals.”
Moving forward
In response, Andera said the university is still in the early stages of shaping its AI governance and policy structures.
Andera said he hopes to address these concerns through a governance committee. The group is meant to include voices from across campus and help guide decisions on:
- ethical use of AI
- classroom and teaching impacts
- research concerns
- data security
“We need all perspectives, all areas helping us think through this,” he said.
He said his role is intended to build a communication bridge between faculty, administration and technology teams as AI tools continue to emerge.
“This future is going to be full of all kinds of changes,” Andera said. “And that is what I’m here for, to support you.”

