ChatGPT has been an incredible tool for me when I’m coding. But outside of code, I’ve struggled to find non-trivial use cases for it.

A lot of non-tech people I’ve spoken to have the same experience. At best, it’s a slightly smarter Google.

For example, my wife wanted to prepare a few questions for a class assignment. ChatGPT wasn’t much help, because she had to prepare questions contextualized both to the skill level of her average students and to the content they’d already covered.

While I can see the trajectory of AI’s eventual growth, this lack of immediate use cases neuters any meaningful debate on, say, AI alignment and ethics. After all, why would you bother about the ethics of a glorified toy?



"Slightly smarter Google" is a trillion dollar industry in itself. And with the rates at which the models are getting better where do you see them in a decade or two?


One of the sibling threads mentioned:

> The training set is inherently biased; human knowledge and perspectives not represented in this set could be systematically wiped from public discourse.

What you are saying makes sense. I find the internet to be a great place for tech stuff, not so much for other subjects. Of course this also makes sense: the internet very much represents a certain group of people who are tech-savvy, or good at publishing stuff on the web.


This is a very legit fear.

The bulk of my country’s internet users came online within the last 5 years. They’re almost uniformly mobile users, and almost all of their interactions with the internet are through walled-garden apps.

Whatever knowledge these folks - who, on average, tend to be poorer, more rural, and less educated - have created would remain largely inaccessible to AI training datasets.


The "Chat" part of ChatGPT is one interface.

I've found the most use for it in doing zero-shot or few-shot classification of natural language without needing to build and run a model on my own.

For example - Show HN: GPT Classifies HN Titles https://news.ycombinator.com/item?id=34156626

    Classify following titles into one or more of {US Politics}, {World Politics}, {Technology}, {Security}, {Current Events}, {Boasting}, {Curiosity}, {Unknown}
Which, when I ran it back then, produced:

    34152137    Technology: Lite XL: A lightweight text editor written in C and Lua
    34151880    Curiosity:  Hydrochloric acid is more complicated than you think
    34156235    World Politics, Current Events: Apple Japan hit with $98M in back taxes for missing duty-free abuses
    34152333    Technology: The Architecture of the Lisa Personal Computer (1984) [pdf]
    34151951    Curiosity:  The Zen anti-interpretation of quantum mechanics (2021)
    34155009    Unknown:    Added 49 volumes of Arkansas, Mississippi, and Tennessee law
    ...
Another one that someone made on HN data - Show HN: A structured list of jobs from “Who is hiring?”, parsed with GPT https://news.ycombinator.com/item?id=35259897

The direct API interface is incredibly useful. The chat interface is good for exploratory forays into the classification and knowledge contained within the model (be wary of hallucinations), but with direct calls - where you know what information you have and what you want - its classification and parsing of unstructured data is very powerful.
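
If you want to try it yourself, here's a minimal sketch of that kind of direct call using the openai Python package (v1+). The prompt is the one from above; the model name and the sample titles are my own placeholders, not anything specific:

    # Minimal sketch: zero-shot classification of HN titles via the OpenAI API.
    # Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment;
    # the model name is a placeholder - swap in whatever you have access to.
    from openai import OpenAI

    client = OpenAI()

    LABELS = ("{US Politics}, {World Politics}, {Technology}, {Security}, "
              "{Current Events}, {Boasting}, {Curiosity}, {Unknown}")

    titles = [
        "34152137 Lite XL: A lightweight text editor written in C and Lua",
        "34151880 Hydrochloric acid is more complicated than you think",
    ]

    # Build the same prompt as above, with the titles appended below it.
    prompt = (f"Classify following titles into one or more of {LABELS}\n\n"
              + "\n".join(titles))

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,          # keep classification output stable
    )
    print(response.choices[0].message.content)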


If you use the content they’ve covered as context, you’d probably get good questions. It’s a bit non-trivial to do yourself, but a few startups have posted here recently offering a service that makes it easy.
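
A rough sketch of that approach, again assuming the openai Python package (v1+) - the prompt wording, variable names, and model name here are all illustrative, not any particular startup's product:

    # Sketch: generating class-appropriate questions by putting the covered
    # content and skill level directly into the prompt. Values illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    covered = "photosynthesis: light reactions, chlorophyll, stomata"
    level = "8th grade, mixed ability"

    prompt = (
        f"Students' level: {level}\n"
        f"Material covered so far: {covered}\n"
        "Write five assignment questions that use only the material above "
        "and match the students' level."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)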


I would think an ongoing conversation would also get progressively more useful.


The worry is not that ChatGPT will take over the world. It is that a future system will be unaligned with human interests, and once it has been created by gradient descent (the internals of the system are not understood by anyone - they're just matrices), there will be no guarantee that humanity will be safe. Looking at the power of GPT-4, we have no clear idea of how fast these systems will continue to improve.


“this lack of immediate use cases neuters any meaningful debate on, say, AI alignment and ethics”

You seem to be ignoring Stable Diffusion in your view of AI. And LLMs will be extended via LangChain and ChatGPT plugins, so saying we can’t talk about the implications of granting them more functions until after it happens seems irresponsible.


I'm not saying that we shouldn't talk about AI responsibility and ethics.

I'm saying that getting more people interested in AI has been tough because, for non-tech people, the current use cases aren't immediately revolutionary (even Stable Diffusion requires some command of prompt engineering).


Thanks for clarifying



