The following article discusses AI regulations on campus, covering new rules for AI usage as well as student and faculty perspectives on the technology.
There’s only one catch: it was written by ChatGPT.
Given rising concern about AI stealing our jobs, I was curious to see how well AI could do mine. I presented ChatGPT with the challenge to “write an article for UCCS student newspaper The Scribe about AI regulations on campus in the style of Tori Smith.” This is the article it wrote, with my edits interspersed below:
The Growing Role of AI at UCCS: Balancing Innovation with Regulation
Artificial Intelligence (AI) is no longer a futuristic concept; it’s embedded in our daily lives, from ChatGPT writing essays to facial recognition unlocking phones. At UCCS, AI tools have sparked conversations among students and faculty about their potential to enhance learning — and the challenges of ensuring fair and ethical use.
- The first paragraph of a news article is called the lede. It’s typically one to two sentences and draws the reader in. This lede is a little too long, and it doesn’t tell the reader what the article is about. I also would have omitted the first sentence entirely. It’s an overgeneralization and may not ring true for less technically savvy people.
The University of Colorado Colorado Springs has embraced AI’s possibilities in the classroom, but with its benefits come necessary regulations. “AI is a tool, not a substitute for critical thinking,” said Dr. Sarah Keller, an English professor. Dr. Keller has encouraged students to use AI tools like Grammarly or ChatGPT to brainstorm ideas but warns against relying on them for final submissions. “Learning happens in the struggle, not the shortcut,” she emphasized.
- In case you didn’t know, the full name of the school you attend is University of Colorado Colorado Springs. Just saying UCCS would be entirely too confusing.
- As much as I love her perspective, Dr. Sarah Keller is not a real professor at UCCS.
- In journalistic writing, we use a person’s full name and title on first reference; every additional reference should be their last name only, no title.
- When attributing quotes in news writing, you should always use the word “said.” It may be boring, but anything else implies a connotation the speaker might not consent to. What if fictional Keller doesn’t think she “emphasized” that point?
- Other than that, the second quote from Keller tightly sums up her argument and captures a strong voice. It’s a shame it’s not real.
UCCS recently updated its academic integrity policy to address AI tools explicitly. The policy now clarifies that using generative AI without proper citation constitutes academic dishonesty. Students caught submitting AI-generated work as their own face penalties ranging from assignment failure to disciplinary action.
- UCCS has NOT recently updated its academic integrity policy. According to the Faculty Resource Center website, the university does not have a set policy and instead has left it up to professors and instructors to regulate AI on a class-by-class basis in their syllabi.
For students like engineering sophomore Marcus Kim, these regulations feel necessary. “AI is amazing, but it’s like using a calculator in math class — you still need to show your work,” Kim said. However, not all students share his enthusiasm for these rules. Political science major Lila Ortiz expressed concerns about the policy’s vagueness. “It’s hard to know where the line is. Can I use AI to edit my grammar, or is that cheating?”
- I was curious how ChatGPT was coming up with names for these fictional students, so I asked. It told me that it picked names that were diverse and believable for people in their late teens or early twenties. I did find it a little interesting that the engineering student was given a traditionally Asian surname, though.
To address these concerns, UCCS is hosting workshops to educate students and faculty on appropriate AI use. The sessions focus on identifying when and how to integrate AI ethically into coursework. A recent session, led by the UCCS Ethics Center, attracted over 100 participants, highlighting the campus community’s interest in navigating this evolving landscape.
- There is no such thing as a UCCS Ethics Center. The closest thing I could find would be the Ethics and Compliance Program, which has no mention of AI regulation, although, to be fair, it was most recently updated in 2021.
- The funniest part of this paragraph is that 100 participants attended the most recent fake informational session. In my time working for The Scribe, I have only seen a handful of events draw over 100 participants, and those events usually had free food or expensive bingo prizes.
Beyond academics, AI’s presence is expanding across campus operations. The Office of Information Technology (OIT) has implemented AI-driven systems to streamline tasks like tech support and network monitoring. These initiatives aim to reduce response times and enhance efficiency, but they also raise questions about data privacy. “We prioritize student data protection and transparency in all AI integrations,” said an OIT spokesperson.
- This is … partially true. According to their website, OIT has been exploring the use of AI tools to help with certain tasks, although tech support and network monitoring are not specifically mentioned. I’m not sure if ChatGPT knew this information or if it was just a lucky guess.
- The use of “OIT spokesperson” strikes me as odd, as ChatGPT had no problem inventing fake people in the above paragraphs. What is this fake OIT spokesperson hiding? Why do they refuse to be named?
The debate over AI at UCCS reflects broader societal conversations. As technology evolves, so too must our understanding of its implications. UCCS is taking a proactive approach, encouraging dialogue and education while implementing guardrails to ensure AI is a tool for growth, not a crutch.
What remains clear is that the future of AI at UCCS — and beyond — is up to us. It’s not just about how we regulate technology but how we adapt and grow alongside it.
- Part of the prompt I gave ChatGPT was to write the article in my style. I asked it what traits of my writing it incorporated into this article. One of the answers it gave me was putting a call to action at the end of the article, which is true. I do like to do that. But typically, especially in news, my call to action is a link to somewhere or someone with more information. I would never draw a conclusion for my audience in a news article. They’re much too intelligent for that ;).
- “So too” is awkward. Microsoft Word thinks I should change it.
Final Thoughts
OK, we’re done letting the robot talk now. First of all, this article was … kind of good. It’s organized well, the quotes are strong and, aside from a few errors, the writing follows journalistic style.
Here’s why I feel OK about my job and the future of journalism in general. After I read the article a couple more times, I realized it actually wasn’t as good as I thought. The problem with the article is that there’s no attribution for the information it gives, which makes sense, since almost none of it is true.
For ChatGPT to write me a good news article, I would still have to go out, conduct interviews, do my own research and then feed it the information I want it to report on. If UCCS were implementing a new academic integrity policy, and there was not yet any available reporting on that policy, I would have to tell ChatGPT about the policy before it could write anything.
AI might be a useful tool in journalism and writing in general when it comes to getting help with wording or organization. But when it comes to content, in its current state, a human being is still necessary to do a lot of the heavy lifting. So, I shall sleep peacefully tonight knowing that ChatGPT will not be the next news editor for The Scribe.
Photo courtesy of AI Studios.