'Keep your finger on the pulse of change'


Asim Ali and panelists discuss AI with faculty and students at AI Symposium


Panelists at the AI Symposium take audience questions near the end of the event on March 15, 2024. From left to right: Asim Ali, Jack Day, Abby Mcguire, Matthew Katz and Cathy Willermet. (CM Life | Courtney Boyd)

Central Michigan University's department of Curriculum and Instructional Support has been holding conversations about artificial intelligence across campus. Over the last two semesters, it has offered faculty resource links and events, held student events and given presentations at Board of Trustees meetings.

On March 15, the department held what it called its “pinnacle” of yearlong events: “Unveiling the Future: The Role of Artificial Intelligence in Higher Education Symposium.” The goal of the event was to discuss both the possible benefits and pitfalls of bringing AI into classrooms and curriculums at the university. 

Although a majority of the day's events were aimed at faculty and closed to students, a few, such as the keynote address and an open panel discussion, were open to everyone.

Keynote Speaker

Asim Ali is the executive director of the Biggio Center for the Enhancement of Teaching and Learning at Auburn University. He spoke about the rapid increase of AI usage in the world. He also focused on how the integration of AI into society might affect education.

Ali advances the Biggio Center’s mission by providing professional development programs, services and resources to improve teaching methods in the classroom.

Ali said that university faculty from various fields should incorporate AI into the classroom, as “we are falling behind” as a society.

He said it is important to teach students about the ethical use of AI so that, as the technology becomes more widespread, it can be applied constructively in society.

Citing a study, Ali said AI can enhance deeper learning and improve the work of lower-achieving students.

He acknowledged that the role of AI in society is doubted, and sometimes feared. As a part of the presentation, Ali asked the crowd to rate their feelings about AI as either red (dangerous), yellow (uncertain) or green; he said he was surprised that nobody in the room ranked their feelings as red.

Ali said he is nervous about some forms of AI, even as he fully supports many uses of the technology in society. He recently experienced that uncertainty about artificial intelligence firsthand.

"I got in a Waymo car. It was an Uber with no driver. That was kind of like one of those red, yellow edge cases for me. And I was like, all right, either this will make the news for 'see, we told that guy not to be doing all this stuff;' or, you know, nobody will know.

"And thankfully, it was the latter."

Ali said that one reason people fear AI is science fiction movies, such as “The Terminator,” that depict machines that consider all possibilities without being prompted by humans and act without their consent. He said that there is “actually some good conversation about if [that level of AI] could exist.”

He said the average AI system uses a megawatt of power, far too much for AI to be cost-effective at many jobs. AI can be trained by humans to understand language and even create original writing or art, but it cannot learn without human feedback, Ali said.

LLMs, or large language models, are fed large amounts of data from the internet and then trained to interpret what they learn. ChatGPT can generate new content as if it were made by a human, but it must first be pre-trained. It also must be designed with an architecture that understands the connections between words and their contexts in order to create original content.
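As a rough illustration of what “pre-trained” generation means, here is a minimal sketch in Python using the open-source Hugging Face transformers library and the small GPT-2 model. This is an assumption chosen for illustration; neither the library nor the model was referenced at the symposium.

```python
# Minimal sketch: text generation with a pre-trained language model.
# Assumes the Hugging Face "transformers" library and the small GPT-2
# model; nothing here was shown or endorsed at the symposium.
from transformers import pipeline

# Load a model that has already been pre-trained on internet text.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by predicting likely next words
# from the connections between words and their contexts.
result = generator(
    "Artificial intelligence in higher education",
    max_new_tokens=30,
)
print(result[0]["generated_text"])
```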

The danger of AI is not that it will replace humanity, Ali said. The danger is that humans will not know how to ethically use it when it becomes more available.

Ali talked about search engines such as Google and how inaccurate information on the internet can be. He said that AI is not biased; humans are.

Over half of university students are already using some form of AI, Ali said. When students are under pressure and an assignment is poorly described, some will use AI to cheat, he said.

The solution isn't excluding the use of AI completely, he said; instead, AI should be incorporated into some classwork in creative ways. Ali said he has started to introduce ChatGPT into lesson plans at Auburn, and that AI should be used to enhance learning and to personalize lessons for students in a particular field of study.

Ali and his team launched a class called Teaching with AI in March 2023; it is fully asynchronous and self-paced. Ali said that over 80 institutions in the U.S. and Canada now use the Teaching with AI course, along with some institutions abroad.

He also said that a protected version of Copilot is available through CMU, which allows students to use the AI privately without it retaining their information.

Some common AI tools that the public can use include:

  • ChatGPT (by OpenAI)
  • Gemini (by Google)
  • Claude (by Anthropic)
  • Copilot (by Microsoft)
  • Midjourney
  • Adobe Firefly
  • Consensus
  • ResearchRabbit
  • Elicit

Ali said he has an idea of how AI might change peoples’ lives in the future.

“The future of generative AI is not these publicly available models, the future of generative AI is you, each of you, having your own LLM that’s trained by you and is able to make you more effective,” Ali said. “And I want you to be ready for that because it’s coming.

“And I don’t mean years, I mean like, weeks or months. There are institutions in this very state; there are institutions like Auburn, where faculty are training their own LLMs.”

CMU Faculty Panel Discussion 

The open panel discussion, which Ali also facilitated, was the final event of the day. The panel had four faculty members from the AI Faculty Learning Communities (FLCs), groups that have spent the past academic year learning about AI and discussing ways to implement it at CMU.

“These FLCs were created over the summer to overcome some of the barriers with AI and learn ways we can implement AI here (at CMU),” said Brooke Moore, the director of instructional development for Curriculum and Instructional Support.

Moore said the faculty members were enthusiastic and interested in the topic, and that her department was excited to see these conversations developing at CMU and to highlight the work the FLC panelists had done over the last two semesters.

These panelists were:

  • Jack Day, a professor in Human Development and Family Studies
  • Abby Mcguire, a professor in the Master of Science in Administration program
  • Matthew Katz, a philosophy professor
  • Cathy Willermet, a biology and anthropology professor.

The panelists discussed their own feelings about AI and how they are implementing it in their classrooms. Some said they believe AI could advance students' learning and careers if included in curriculums.

“It’s a developmental skill,” Mcguire said. “Do we want (students) leaving with the knowledge of how it works, or pretending it doesn’t exist?”

Other panelists and audience members expressed concerns about academic integrity, as well as about original thought and articulation. Katz said the humanities should be considered in the conversation, because AI can, in a sense, take critical thinking and understanding of an assignment out of the equation.

“Everybody understands that you can use a system and see what answer it comes up with,” he said. “But we might not know why it gave that particular answer. … When you think about how the idea fits together, it (AI) doesn’t really display understanding, and that’s something I could see becoming a long term problem.”

As a result, other panelists stressed that students should still double-check their work and not take answers produced by AI at face value.

“You should always be checking your answers,” Willermet said. “You have to have some thoughtful analysis on the back end to see the results of your answers.”

Day said that those using AI for academic use should be “critical consumers” of it.

“It’s never going to be a perfect system, because we are training it,” he said. 

The panel also addressed a variety of ethical concerns surrounding AI, among them diversity, stereotyping and user bias. Mcguire used an example from one of her classes in which students made AI images using prompts. She found that a simple word like “professor” would generate glasses and books, and that “Midwest” generally produced a white person.

“There’s absolutely a kind of dominant cultural narrative with generative AI,” she said. “And we don’t know where these are coming from in the system.”

Despite these differing perspectives, most of those in attendance agreed that AI is the future and that it’s important for faculty at CMU to gain an understanding of it.

“There’s this quote that says ‘AI is not coming for your job, but someone who can use AI is,’” Mcguire said. “It’s important to kind of keep your finger on the pulse of change, follow all those big companies … to see what tools they’re using.”

At the end of the event, Ali asked those in attendance to do two things: to stay curious about AI and to know there are resources on campus.

“There are people on your campus who are working on these things and are asking the tough questions,” he said. “Engage with them, because that’s really important.”
