This is what you get when you hire professors who assign writing exercises but are incapable of writing a basic coherent email with punctuation, and who can't read the perfectly plain disclaimer below every ChatGPT response saying that it may not be accurate.
For years, we've had a strong trend toward making even the basic workings of a computer appear like magic, with users not bothering with how it does what it does. Now, it may be a bit much to expect a teacher to understand how LLMs work and what the consequences are. (E.g., do they know how a file system works, or what happens when they move a document from one folder to another?) We may have to revise how we present computers and how they should be perceived: from pseudo-magic ease of use to ease of understanding what is done and how it is achieved.
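To make the file-system aside concrete: on a single filesystem, "moving" a document is just a rename of a directory entry; the file's data never moves, which is why it is instant, whereas a move across filesystems falls back to copy-then-delete. A minimal sketch in Python (the paths here are purely illustrative):

```python
import os
import shutil
import tempfile

# Set up two "folders" on the same filesystem.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "folder_a", "report.txt")
dst = os.path.join(tmp, "folder_b", "report.txt")
os.makedirs(os.path.dirname(src))
os.makedirs(os.path.dirname(dst))

with open(src, "w") as f:
    f.write("draft")

# A move within one filesystem is a rename: only the directory
# entry changes; the file's data blocks (and inode) stay put.
inode_before = os.stat(src).st_ino
os.rename(src, dst)  # instant: no data is copied
inode_after = os.stat(dst).st_ino

assert inode_before == inode_after  # same file, new name
assert not os.path.exists(src)      # the old entry is gone

# Across filesystems, os.rename fails (EXDEV); shutil.move then
# falls back to copying the data and deleting the original,
# which is why such moves can take a long time.
shutil.rmtree(tmp)
```

(The inode check assumes a POSIX filesystem; the general point, rename versus copy-plus-delete, holds on other systems too.)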
The majority of normal users haven't really grasped stuff like how to use the filesystem—and I mean at a very basic level—for the entire time I've been computing (since the early '90s). Let alone how it works. I'm not sure it's a trend, unless the trend is "more people using computers". It's more of a static state: most people don't understand, or care to understand, stuff that geeks like us have long forgotten we ever had to learn.
My favorite comparison has always been a basic working knowledge of how we get around in a kitchen. We need to know what an oven is and what a refrigerator is, where the knives are and where the dishes are, what each of these tools is for, how they basically work, and how to use them. In computing, we've had a strong trend towards the "magic cupboard", which "just knows" for you what is needed. No need to know what's in the kitchen and how stuff works; this would be just too much for your brain. Open the cupboard once and it's the oven, open it another time and there's your knife, open it yet another time and accidentally throw your freshly baked cake in the waste bin…
> No need to know what's in the kitchen and how stuff works, this would be too much for your brain. Open the cupboard once and it's the oven, open it another time, there's your knife, open it yet another time and accidentally throw your freshly baked cake in the waste bin…
What's wild about that trend is that if you watch low-ability computer users interact with that crap, it confuses the absolute shit out of them. A bad UI that is consistent in behavior, appearance, and layout is 100x better for them than "helpful" on-launch pop-ups that sometimes appear, "smart" screens that try to re-arrange things to be helpful, or that "improved UX" redesign that moves everything around. And an ugly-but-consistent UI is way better for them than one that is pretty but less clear or less consistent.
A great example is showing elderly people how to operate a computer. They want to write down step-by-step procedures as a ground truth to look up later. But there is no such thing: paradigms change from step to step, and there isn't a single, consistent flow to achieve a given goal. On the other hand, there's just a handful of basic GUI principles, and a basic understanding of these is all it takes to navigate the system. But the system actively tries to hide them, so nothing ever invites users to question these principles or sparks an eagerness to learn them.
On a historical note, the Xerox Star did a great job of making clear that there are just lists: a menu is just a view on a list, as is an array of buttons, as is a property sheet; even an object is just a list of properties. Operating the computer is navigating lists, and there are a couple of useful presentations (you can even switch between them). And classic Mac OS did a great job of making functional associations accessible.
[Edit] In a sense, AI chat-bots are the final step in the evolution of not knowing how: a single entry point to summon whatever it may be that magic will provide. And we won't know what this "whatever" may actually be, because, as average persons, we're perfectly shielded from the working principles. (Or maybe it's only the penultimate step: really, it should be your smart watch detecting what it is that you want, with no need to prompt or to articulate anything.)
> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
In this analogy, you really shouldn't know about figures. It's just "stuff", like other "stuff". The computer will know for you what's what and what to do with it… It knows best, after all…
Make it mandatory to use ChatGPT as a writing aid, and do live tests to prove true knowledge.