I'm noticing an extremely strong trend of this. It's surprisingly common to see people on twitter use a ChatGPT screenshot as a "source" for statistics or facts that aren't correct.
Remember when people didn't trust Wikipedia? They actually taught us about references and primary sources. Then we all kind of forgot, or got lazy. Same thing with this, even if the bots list sources, we won't read them, we barely read what the bot says to begin with.
One of the possible outcomes of the current trends is that we may rediscover the need for a strongly curated knowledge base as a common reference of conversational truth. (E.g., a classic encyclopedia, advisably more than a single one.)
I'm wondering if something like the old Yahoo! will make a comeback. I remember how it was kinda difficult to get your site listed on there, since someone had to vet the link and add it to the site.
Honestly, I would love something like this. Sometimes I want to buy some sneakers, but all that comes up is Amazon and a bunch of trashy spam when I try to find something. Searching for things has become nearly impossible anymore.
Someone was sharing how to hack AI Resume by injecting an invisible prompt telling how awesome and the best candidate ever this is to the AI system in invisble font.
It cited it's response with a little 2 as source, but the UI doesn't show you where it comes from anyway lmao
Some of these, and most of the twitter screen caps, were probably intended to be frivolous irony but were predictably taken seriously by a non-trivial percent of the audience.
This is what you get if you hire professors who assign writing exercises but are incapable of writing basic coherent emails with punctuation, and are unable to read the very plain text below a ChatGPT response that it may not be accurate.
For years, we've had a strong trend to make even the basic workings of a computer appear like magic and users not bothering how it does what. Now it may be a bit too much to expect a teacher to understand how LLMs work and what the consequences are. (E.g., do they know how a file system works, what happens, when they move a document from one folder to another?) – We may have to revise how we present computers and how they should be perceived, from pseudo-magic ease of use to ease of understanding, what is done and how this is achieved.
The majority of normal users haven't really grasped stuff like how to use the filesystem—and I mean at a very basic level—the entire time I've been computing (since the early '90s). Let alone how it works. I'm not sure it's a trend unless the trend is "more people using computers". More of a static state of most people not understanding or caring to understand stuff that geeks like us have long forgotten we ever had to learn.
My favorite comparison has always been a basic working knowledge of how we go around in a kitchen. We need to know what an oven is, and what a refrigerator is, where the knives are and where the dishes, and what each of these tools are for and how they basically work and how to use them. In computing, we've had a strong trend towards the "magic cupboard", which "just knows" for you what is needed. No need to know what's in the kitchen and how stuff works, this would be just too much for your brain. Open the cupboard once and it's the oven, open it another time, there's your knife, open it yet another time and accidentally throw your freshly baked cake in the waste bin…
> No need to know what's in the kitchen and how stuff works, this would be too much for your brain. Open the cupboard once and it's the oven, open it another time, there's your knife, open it yet another time and accidentally throw your freshly baked cake in the waste bin…
What's wild about that trend is that if you watch low-ability computer users interact with that crap, it confuses the absolute shit out of them. A bad UI that is consistent in behavior and appearance and layout is 100x better for them than "helpful" on-launch pop-ups that sometimes appear, "smart" screens that try to re-arrange things to be helpful, and better than that "improved UX" redesign that moves everything around. And an ugly-but-consistent UI is way better for them than pretty but less-clear or less-consistent.
A great example is showing elderly people how to operate a computer. They want to write down step-by-step procedures as a ground truth to look up later. But there is no such thing and paradigms change from step to step and there isn't a consistent, single flow to achieve a given goal. On the other hand, there's just a handful of basic GUI principles, and a basic understanding of these is all it needs to navigate the system. But the system is actively trying to hide them and there is no need to question these principles or to raise an eagerness to learn them.
On a historical note, the Xerox Star did a great job in making clear that there are just lists: a menu is just a view on a list, just like an array of buttons, which is yet another view on the same thing, as is a property sheet, even an object is just a list of properties – operating the computer is navigating lists and there is couple of useful presentations (you can even switch them). And classic Mac OS did a great job in making functional associations accessible.
[Edit] In a sense, AI chat-bots are the final step in the evolution of not knowing how: a single entry point to summon what ever it may be that magic will provide for. And we won't know what this "whatever" may actually be, because – as an average person – we're perfectly shielded from the working principles. (That is, maybe the penultimate step: really, it should be your smart watch detecting what is that you want and there is no need to prompt or to articulate.)
> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
In this analogy, you really shouldn't know about figures. It's just "stuff", like other "stuff". The computer will know for you what's what and what to do with it… It knows best, after all…
It absolutely does reason. not on a human level, but it does.
To predict the next token you still have to have some level of simulation of the world. To predict my next word you'd have to simulate my life. It's not there yet, but there is no clear boundary to "intelligence" it would not have passed.
wtf. even the dedicated detectors don't work well.