With an estimated 100 million monthly active users of programs like ChatGPT, the use of AI in an academic setting is becoming a more prominent conversation for students and faculty alike.
Pat McGuire, chair of the Department of Teaching and Learning, discussed UCCS' policies regulating AI use, his personal outlook on AI and the different ways AI is being used in the classroom.
McGuire believes AI has a place in academia but says there is still a lot to learn about it.
“It’s just become so much more ubiquitous — that’s why it’s constantly evolving,” he said. “I would consider artificial intelligence to be a subset of instructional technology, it’s just a newer field that’s kind of emerging in the last three to five years.”
Generative AI is commonly used by students to complete assignments across different classes. Many students use it to create images, solve equations or generate ideas for papers.
ChatGPT is a widely recognized form of generative AI in education and has been an accessible resource since it was made publicly available last November, leaving faculty to navigate when its use is and isn't acceptable.
As AI has become a regularly integrated part of education, the faculty resource center website has added a subsection that informs faculty members how AI can be used in their classrooms based on their preferences.
According to the website, the following activities are acceptable:
- Brainstorming your ideas
- Refining your research questions
- Looking for information on a topic
- Outlining and organizing your thoughts
- Checking grammar and style
The website lists the following activities as unacceptable:
- Using AI in place of yourself in the classroom, such as creating discussion board replies or content that will be put in a Teams or Zoom chat
- Completing group work with AI unless everyone in the group agrees you can use the tool
- Using it to write entire sentences, paragraphs or papers to complete class assignments
To detect possible generative AI in student writing, UCCS uses Turnitin's AI detection software, which breaks a submission into segments of about five to ten sentences and overlaps the segments with each other to capture each sentence in context.
The segments are run against the AI detection model to determine whether they were written by AI, and the document is then given a percentage showing how much of the writing was likely AI-generated.
The program offers faculty more control over how students can use AI appropriately, but its detection capabilities may not yield consistently accurate results because it is also a newly developing technology.
According to an AI writing detection blog post from Annie Chechitelli, Turnitin's chief product officer, false positives occur more often when less than 20% of a text is flagged as AI writing.
Chechitelli said that in cases where the detected AI writing percentage is higher than 20%, the document false positive rate is less than 1%.
“As AI evolves, I think we continually need to have these discussions as faculty, but we as a department have at least started to identify that it exists and try to provide some guidance for our students and faculty about what’s acceptable and unacceptable,” McGuire said.
McGuire closed with his position that AI in academia is neither firmly positive nor negative because it was not designed for malicious purposes, and whether it is used appropriately is in the hands of the user.
Photo from unsplash.com.