
You people frustrate me because you don't listen when I say that I've tried to use AI to help with my job and it fails horribly in every way. I see that it is useful to you, and that's great, but that doesn't make it useful for everybody... I don't understand why you must have everyone agree with you, and why it "tires" you out to hear other people's contradicting opinions. It feels like a religion.


I mean, it is trivial to show that it can do things that were literally impossible even 5 years ago. And you don't acknowledge that fact, and that's what drives me crazy.

It's like showing someone from 1980 a modern smart phone and them saying, yeah but it can't read my mind.


I'm not trying to pick on you or anything, but at the top of the thread you said "I mean, I can ask for obscure things with subtle nuance where I misspell words and mess up my question and it figures it out" and now you're saying "it is trivial to show that it can do things literally impossible even 5 years ago"

This leads me to believe that the issue is not that LLM skeptics refuse to see, but that you are simply unaware of what was possible without them. That sort of fuzzy search was SOTA for information retrieval, and commonplace, about 15 years ago (it was one of the early accomplishments of the "big data/data science" era), long before LLMs and deep nets were the new hotness.
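For what it's worth, misspelling-tolerant lookup of the kind described needs nothing deeper than edit-distance matching, which Python's standard library has shipped for decades. A minimal sketch (the vocabulary here is invented for illustration; real IR systems of that era did this at far larger scale with n-gram indexes):

```python
from difflib import get_close_matches

# A tiny stand-in "index" of known terms.
vocabulary = ["receive", "deceive", "banana", "algorithm"]

def fuzzy_lookup(query, vocab, cutoff=0.6):
    """Return up to 3 vocab entries whose similarity to query exceeds cutoff."""
    return get_close_matches(query, vocab, n=3, cutoff=cutoff)

# A misspelled query still finds the intended term, no LLM required.
print(fuzzy_lookup("recieve", vocabulary))
```

`get_close_matches` ranks candidates by `SequenceMatcher` similarity, which is why the transposition in "recieve" still resolves to "receive".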

This is the problem I have with the current crop of AI tools: what works isn't new and what's new isn't good.


It's also a red flag to hear "it is trivial to show that it can do things literally impossible even 5 years ago" 10 comments deep without anybody doing exactly that...


All of what LLMs do now was impossible 5 years ago. It is so self-evident that I don't know how to take the request for examples seriously.


> What specifically was impossible 5 years ago that llms can do

> It's so self-evident, that I don't know how to take the request for examples seriously

Do you see why people are hesitant to believe someone making outrageous claims with no examples?


It's more like showing someone from 1980 a modern smart phone you call a Portable Mind Reader and them saying, "yeah, but it can't read my mind."


Are people really this hung up on the term “AI”? Who cares? The fact that this is a shockingly useful piece of technology has nothing to do with what it’s called.


Because the AI term makes people anthropomorphize those tools.

They "hallucinate", they "know", they "think".

They're just the result of matrix computations, onto which your own pattern-recognition capacities fool you into projecting intelligence. There isn't any. They don't hallucinate; their output is wrong.
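To make the "matrix computations" point concrete: a single next-token step reduces to multiply-and-normalize. A toy sketch (shapes, values, and the 5-token vocabulary are invented for illustration; a real model just does this with billions of learned parameters and many stacked layers):

```python
import numpy as np

rng = np.random.default_rng(0)

hidden = rng.standard_normal(8)        # final hidden state for the context
W_out = rng.standard_normal((5, 8))    # learned projection to a 5-token vocab

def softmax(z):
    z = z - z.max()                    # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = W_out @ hidden                # one matrix multiply
probs = softmax(logits)                # probability over next tokens
print(probs)
```

Sampling from `probs` (or taking its argmax) is all "generation" is; everything suggestive of thought lives in the learned weights, not in any explicit reasoning machinery.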

The worst example of anthropomorphism I've seen was a blog post from a researcher working on adversarial prompting. The tool spewing "help me" text made them feel they were hurting a living organism https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-...

Speaking with AI proponents feels like speaking with cryptocurrency proponents: the more you learn about how things work, the more you understand that they don't, and that the proponents live in la-la land.


Because marketing keeps deeply overpromising things.

Maybe the hype is hugely beneficial to them, but if you promise me 1500 and I get 1100, then I will be underwhelmed.

And around LLMs especially, the marketing hype is fairly extreme.


If you lived before the invention of cars, and if when they were invented, marketers all said "these will be able to fly soon" (which of course, we know now wouldn't have been true), you would be underwhelmed? You wouldn't think it was extremely transformative technology?


Where does the premise come from that "artificial intelligence" is supposed to be infallible and superhuman? I think 20th century science fiction did a good job of establishing the premise that artificial intelligence will be sometimes useful but will often fail in bizarre ways that seem interesting to humans: misunderstanding orders, applying orders literally in a way humans never would, or just flat out going haywire. Asimov's stories, HAL 9000, countless others. These were the popular media tropes about artificial intelligence, and the "real deal" seems to line up with them remarkably well!

When businessmen sell me "artificial intelligence", I come prepared for lots of fuckery.


Have you considered you are using it incorrectly?


Have you considered that the problems you encounter in daily life just happen to be more present in the training data than problems other users encounter?

Stitching together well-known web technologies and protocols in well-known patterns, probably a good success rate.

Solving issues in legacy codebases using proprietary technologies and protocols, and non-standard patterns. Probably not such a good success rate.


Have you considered you have no knowledge of how I'm using AI?


Yes I have. I did some research and read some blog posts, and nothing really helped. Do you have any good resources I could look at maybe?


I think you would benefit from a personalized approach. If you like, send me a Loom or similar of you attempting to complete one software task with AI, that fails as you said, and I'll give you my feedback. Email in profile.


This is the correct approach. Also see: https://ezyang.github.io/ai-blindspots/



