Image upscaling is not an LLM technology, using current-gen LLMs as conversational partners is highly undesirable for many reasons, and learning the basics of things IS indeed useful, but it doesn't even begin to offset the productivity losses that LLMs have caused by decimating what was left of the signal-to-noise ratio on the internet.
You haven't even tried to address my chief concern about QUALITY of information at all. I'm perfectly aware that you can ask ChatGPT to do anything: you can ask it to plan your wedding, you can ask it to decorate your house, you can ask it if two medications are safe to take together, you can ask it for relationship advice, you can ask it if your dating profile looks appealing, you can ask it to help diagnose you with a medical condition, you can ask it to analyze a spreadsheet.
It's going to come back with an answer for all of those, but if you're someone who cares about correctness, quality, and anything that's actually real, you'll have a sinking feeling in your gut doubting the answer you received. Does it actually understand anything about human relationships, or is it giving you relationship advice based on a million Reddit threads it was trained on? Does it actually understand anything about anything, or are you just getting the statistically likely answer based on terabytes of casual human conversation with all of their misunderstandings, myths, falsehoods, lies, and confident incompetence? Is it just telling me what I want to hear?
> Literally same shit my parents would say while I was cross-checking multiple websites for information and they were watching the only TV channel that our antenna would pick up.
Interesting analogy, because I'm the one who's still cross-checking multiple websites for information while you blissfully watch your only available TV channel.