AI is Destroying the University and Learning Itself

Introversion

Pie aren't squared, pie are round!
Kind Benefactor
Super Member
Registered
Joined
Apr 17, 2013
Messages
15,312
Reaction score
28,418
Location
Massachusetts
Students use AI to write papers, professors use AI to grade them, degrees become meaningless, and tech companies make fortunes. Welcome to the death of higher education.

I used to think that the hype surrounding artificial intelligence was just that—hype. I was skeptical when ChatGPT made its debut. The media frenzy, the breathless proclamations of a new era—it all felt familiar. I assumed it would blow over like every tech fad before it.

I was wrong. But not in the way you might think.

The panic came first. Faculty meetings erupted in dread: “How will we detect plagiarism now?” “Is this the end of the college essay?” “Should we go back to blue books and proctored exams?” My business school colleagues suddenly behaved as if cheating had just been invented.

Then, almost overnight, the hand-wringing turned into hand-rubbing. The same professors forecasting academic doom were now giddily rebranding themselves as “AI-ready educators.” Across campus, workshops like “Building AI Skills and Knowledge in the Classroom” and “AI Literacy Essentials” popped up like mushrooms after rain. The initial panic about plagiarism gave way to a resigned embrace: “If you can’t beat ‘em, join ‘em.”

This about-face wasn’t unique to my campus. The California State University (CSU) system—America’s largest public university system with 23 campuses and nearly half a million students—went all-in, announcing a $17 million partnership with OpenAI. CSU would become the nation’s first “AI-Empowered” university system, offering free ChatGPT Edu (a campus-branded version designed for educational institutions) to every student and employee. The press release gushed about “personalized, future-focused learning tools” and preparing students for an “AI-driven economy.”

The timing was surreal. CSU unveiled its grand technological gesture just as it proposed slashing $375 million from its budget. While administrators cut ribbons on their AI initiative, they were also cutting faculty positions, entire academic programs, and student services. At CSU East Bay, general layoff notices were issued twice within a year, hitting departments like General Studies and Modern Languages. My own alma mater, Sonoma State, faced a $24 million deficit and announced plans to eliminate 23 academic programs—including philosophy, economics, and physics—and to cut over 130 faculty positions, more than a quarter of its teaching staff.

At San Francisco State University, the provost’s office formally notified our union, the California Faculty Association (CFA), of potential layoffs—an announcement that sent shockwaves through campus as faculty tried to reconcile budget cuts with the administration’s AI enthusiasm. The irony was hard to miss: the same month our union received layoff threats, OpenAI’s education evangelists set up shop in the university library to recruit faculty into the gospel of automated learning.

The math is brutal and the juxtaposition stark: millions for OpenAI while pink slips go out to longtime lecturers. The CSU isn’t investing in education—it’s outsourcing it, paying premium prices for a chatbot many students were already using for free.
 

Introversion

Sigh. From that article.

The ouroboros just got darker. In October 2025, Perplexity AI launched a Facebook ad for its new Comet browser featuring a teenage influencer bragging about how he’ll use the app to cheat on every quiz and assignment—and it wasn’t parody. The company literally paid to broadcast academic dishonesty as a selling point. Marc Watkins, writing on his Substack, called it “a new low,” noting that Perplexity’s own CEO seemed unaware his marketing team was glamorizing fraud.

If this sounds like satire, it isn’t: the same week that ad dropped, a faculty member in our College of Business emailed all professors and students, enthusiastically promoting a free one-year Perplexity Pro account “with some additional interesting features!” Yes—even more effective ways to cheat. It’s hard to script a clearer emblem of what I’ve called education’s auto-cannibalism: universities consuming their own purpose while cheerfully marketing the tools of their undoing.
 

Comradedima1

The Town Whippersnapper
Super Member
Registered
Joined
Apr 4, 2024
Messages
488
Reaction score
1,263
Location
Upper Left, USA
Speaking as someone currently in college who has tried AI for an assignment or two (it was allowed for those assignments), don't bother. Half the time you have to force feed it new prompts to finally get the answer you want, and the other half of the time it'd just be easier to glance at the textbook. I'll leave the hallucinations to 2 am red bull fueled writing binges.

Also, the hell are you paying tens of thousands of dollars to an institution for if you just use AI?
 

Bitterboots

wandering through the mazes
Super Member
Registered
Joined
Feb 6, 2024
Messages
1,527
Reaction score
2,224
If this sounds like satire, it isn’t: the same week that ad dropped, a faculty member in our College of Business emailed all professors and students, enthusiastically promoting a free one-year Perplexity Pro account “with some additional interesting features!” Yes—even more effective ways to cheat.

Was this not sent as a warning? Why on earth would a faculty member want their students to cheat????
 

Akvranel

Hiding in comfortable locations
Super Member
Registered
Joined
Aug 29, 2020
Messages
1,336
Reaction score
2,765
Location
Arizona
Was this not sent as a warning? Why on earth would a faculty member want their students to cheat????
I don't agree with this, but to play devil's advocate, I imagine these are some of the arguments:

Because it isn't "cheating" - AI is a tool designed to assist in writing, drawing, researching, calculating, data analysis, etcetera. It is designed to save time, similar to using a calculator instead of solving problems longhand.

Additionally, these students will have to use AI when they enter the workforce. Preparing students for work is part of the university's job. Denying them the opportunity to familiarize themselves with the technological tools they will be required to use will be a disservice to them, similar to teaching a coding class for a scripting language no longer in use.

And, at the end of the day, cheating is ultimately defined by the academic institution or, possibly, the individual professor. If the university says it isn't cheating, then it isn't.

Keep in mind, I don't agree with any of that. I'm from a liberal arts background, so I've often heard it stressed that teaching should go beyond the facts & into the process - i.e. learning how to think. Specific technical expertise is going to be important for many fields, but being able to think critically applies to any position. AI is just the next version of copying your friend's homework or looking up the answer to the test question online - it doesn't teach you anything.

I'm also doubtful, as @Comradedima1 pointed out, that using AI is actually saving any time once you factor in how challenging it is to get it to do what you want & the necessary fact-checking (which I'm sure plenty of people skip).

(ETA: Although, ironically, the more wrongly AI completes an assignment, the more you could potentially learn, as you'd then have to learn the subject in order to know what to correct. Not an argument in favor of AI, obviously, I just find that funny)
 

Comradedima1

Speaking from experience, most students don't do the latter step.

I took programming classes when LLMs were first starting to break into the college system, and my profs said "Don't use AI code, because we can spot it, and if your code gets things wrong, it's very hard for undergraduates to understand how the code is wrong." They also worked around AI by throwing a bunch of edge cases at us that we'd have to account for. Since then I have used it for programming, but yeah, if you just slow down and look at the function cases or Stack Overflow, you'll probably solve the problem in less time than by using AI.

Then last spring, I took an Eastern European science fiction course where the instructor had assignments where we'd use AI to do a variety of different things: ask it to write a story for us, investigate fake news generated by AI, etc. One of the best courses I've taken at uni, hands down. It showed how AI is really only good at throwing word vomit onto a screen: there is no depth, no emotion behind it, just words on a screen.
 

Akvranel

Then last spring, I took an Eastern European science fiction course where the instructor had assignments where we'd use AI to do a variety of different things: ask it to write a story for us, investigate fake news generated by AI, etc. One of the best courses I've taken at uni, hands down. It showed how AI is really only good at throwing word vomit onto a screen: there is no depth, no emotion behind it, just words on a screen.
Simply curious here - was the professor making the point of having you use AI so as to see how it wasn't helpful, or was that something you noticed contrary to the professor's intention (i.e. did the professor like/dislike/was neutral about it)?
 

litdawg

Helping those who help themselves
Super Member
Registered
Joined
Feb 18, 2019
Messages
958
Reaction score
709
Location
California
Recently retired Cal State professor here--The AI koolaid was incredibly dispiriting. I'd been navigating around student use of it for a few years, especially in my critical thinking class. And, yes, I went back to Blue Books for midterms after our COVID online pivot ended.

But the end of education? That's overblown. The end of mass public higher education? Maybe. Probably.
 

Michael_Panetta

Registered
Joined
Mar 6, 2024
Messages
10
Reaction score
5
Literature as we know it is dead. There's no way to know that a piece of writing is 100% human. Even if a text was fully written by a human, there's no way to know if that person used AI peripherally. (For example, I know of an author who claims that her work is 100% human-written, but she uses ChatGPT for outlining, generating characters, building character arcs, etc.) Literature as we understood it is simply dead.

I don't say this lightly. Literature was like religion for me. I had spent my entire life reading and writing. I spent so many years trying to get published in science fiction magazines, and when it finally happened, generative AI became mainstream. I tried to "cope" with the usual talking points, but like a religious person struggling with atheism, I couldn’t keep reasoning my way out of the obvious.

I haven't written or read anything since August. I’m still mourning, but now that I’ve let go of the mental gymnastics and the agonizing, I’m at peace.
 

CMBright

A Diamond in the fluff.
Super Member
Registered
Joined
Aug 23, 2021
Messages
9,754
Reaction score
16,179
Location
Oklahoma
I don't use AI. Not for writing, not for research, not for character development, not for world building, not for SPaG checking, thanks to an AI-free writing software platform. If anyone is interested, search for bibisco, which has a free version and an upgraded version for a donation. I actively ignore AI results in internet searches.

As long as the laws and the courts allow it, there will be a significant number of writers who do use AI in various forms. There will be some who justify using AI for research or character development or world building because they sit down and write the story from that base.

Right now, there are ethical publishers, and money-driven publishers, who require writers to swear they didn't use AI. I add money-driven because AI writing cannot be copyrighted. That means anyone can copy and paste it and sell it or give it away, costing the publisher sales. There are other publishers who don't require that pledge, if the Submission Grinder's option to exclude AI-friendly markets from searches is any indication.

There are writers who don't use AI in any form. I am one. I suspect many on AW are the same. I know several AW writers have been burned by bots scraping their writing without permission or compensation to use for AI training sets and hate AI with a burning passion. I can't imagine those writers knowingly using AI in any fashion.

And one final thought. If every person who writes without AI quits, then all that will be left will be those who write with AI in some fashion. Then literature will be entirely an AI driven wasteland. I'll keep writing without AI. I might even get published. Without knowingly using AI in any form. That's the dream. Even without AI, getting published is a longshot.

And I just realized my post and the one above are a tangent from the original topic of university education and university policies regarding AI.
 

lizmonster

Your Friendly Neighborhood Spider-Mom
Moderator
Absolute Sage
Super Member
Registered
Joined
Jul 5, 2012
Messages
20,628
Reaction score
47,895
Location
Massachusetts
Website
elizabethbonesteel.com
And I just realized my post and the one above are a tangent from the original topic of university education and university policies regarding AI.

Oh, I don't think it's that much of a tangent. This stuff will only take over if we let it.

I've said that The Kid's university doesn't let them use LLMs for writing papers. I'm sure some students do anyway. The Kid doesn't - part of that is what she's heard from us, and part of it is her own integrity. But part of it, for her, is that her writing process would be completely disrupted if at any point she let an LLM produce anything. It would confuse her, and make the whole exercise take much, much longer.

These things are bad tools for the tasks they're being sold to do. And yeah, I think we need to stop treating them like they're inevitable.
 

egrabow_NTC

gotta have a good plot
Registered
Joined
Jul 24, 2009
Messages
44
Reaction score
94
Location
Orlando
Website
www.egrabow.com
AI is really only good at throwing word vomit onto a screen, there is no depth, no emotion behind it, just words on a screen.
I'm worried about the massive audiences for whom that's "good enough". (Or the future generations who will expect things in a certain non-human style.)

If I were an educator I'd probably start devaluing homework and try to keep as much work inside the classroom as possible. Stash the phones while they watch you work with pen and paper, or an offline laptop, or whatever.
 

phantom000

Super Member
Registered
Joined
Aug 9, 2011
Messages
375
Reaction score
428
Location
Arkansas
I suppose it would be too much to ask to build an anti-cheat into generative AI?

If a student asks an AI to write their essay the AI just says "No, you are trying to cheat."
 

kinokonoronin

just a lil guy
Super Member
Registered
Joined
Jun 13, 2020
Messages
878
Reaction score
1,290
Location
US
I suppose it would be too much to ask to build an anti-cheat into generative AI?

If a student asks an AI to write their essay the AI just says "No, you are trying to cheat."
Why would LLM companies do that? They're desperate for use cases, and lazy shortcuts (sorry, "productivity hacks") for cheaters (sorry, "go-getters") are one of the few real selling points for these things.
 

RedRajah

Special Snowflake? No. Hailstone
Moderator
Super Member
Registered
Joined
Feb 23, 2010
Messages
5,714
Reaction score
6,154
Website
www.fanfiction.net
No feasible way to code/adapt Newton's Three Laws of Robotics into AI/LLM, is there?
 

lizmonster

No feasible way to code/adapt Newton's Three Laws of Robotics into AI/LLM, is there?
I know this is kinda-sorta a joke, but any kind of reasoning would involve an entirely separate sort of software that would have to be wound around LLMs, and even then it would have to include concrete criteria for things like "cheating" and "do no harm."

In other words, it would require human discernment to write and maintain.
 

RichardGarfinkle

Second Edition and Second Laughter
Super Moderator
Moderator
Kind Benefactor
Super Member
Registered
Joined
Jan 2, 2012
Messages
12,529
Reaction score
6,846
Location
Swimming in the Shallows
Website
www.richardgarfinkle.com
No feasible way to code/adapt Newton's Three Laws of Robotics into AI/LLM, is there?
That's Asimov's three laws. And they rely on the Robots knowing things and being able to determine what's human and what's harm.

Newton's three laws of motion can be applied to the hardware (e.g. by dropping rocks on the data centers), but not on the software.
 

RichardGarfinkle

Asimov's Robots also don't do good work in writing and editing. In the short story "Galley Slave," a robot is tasked with going over the galley proofs of an academic book. The robot alters the text because some of the contents attack the work and reputations of other academics. Asimov's three laws have no regard for truth, only harm to people.