venv's comments

Alternatively, run the dashboard script via SSH to get the output ($ ssh "host" "command").
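A minimal sketch, assuming the script lives at /usr/local/bin/dashboard.sh on a host reachable as admin@dashboard-host (both are placeholder names):

  # run the remote script and capture its output locally
  $ ssh admin@dashboard-host '/usr/local/bin/dashboard.sh' > dashboard.txt

Quoting the remote command keeps your local shell from expanding it, and the redirect saves the output on your machine without copying any files around.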


If someone consumes what you call useless content, why would they use AI to produce anything else? It's not as if there isn't useful content around already.


Just referring to entertainment (i.e. not educational videos): clickbait social-media entertainment is easier to consume and readily recommended, and you know you have at least a little interest in it. It just tends to be lower-quality entertainment than what you know is possible. Long-form entertainment is higher investment and higher risk, in that if you don't like it, more energy and time has been expended.

So there's a choice between low-risk entertainment that caps out at maybe 20% of the value of your favorite thing, and higher-risk entertainment that could well still be terrible but has a chance of being good.

AI bridges that gap by de-risking the high-investment entertainment, so you no longer have to make that choice.


Maybe, but it would still be better if the situation didn't deteriorate further.


What you are describing is not replication.


Even this would require somehow verifying the raw data. It's plausible a bad actor could "reverse engineer" their data from a pre-determined conclusion.

But yes, overall more openness is good. Still, the cost of losing trust in society is very high (as you then need to verify everything).


> It's plausible a bad actor could "reverse engineer" their data from a pre-determined conclusion.

I've already heard of someone planning a product (initially targeted at lazy^H^H^H^Hbusy high schoolers and undergrads) that will use AI to reverse-discover citations that fit a predetermined narrative in a research paper. Write whatever the hell you want, and the AI will do its best to backsolve a pile of citations that support your unsourced claims and arguments. The founder, and I use that term very generously, expects the academic community to net support this, because it will boost citation counts for the vast majority of low-citation, low-visibility works.


Sorry, but you have a logical fallacy there (affirming the consequent). It does not follow that humans and LLMs share all of the same properties just because they share one. ("A text predictor is a sentient being" does not follow from "a sentient being is a text predictor.")
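Spelled out in propositional form (my rendering of the fallacy, not the original poster's wording):

  P → Q   (if something is sentient, it predicts text)
  Q       (an LLM predicts text)
  ∴ P     (invalid: affirming the consequent)

The inference is only valid in the other direction (modus ponens: from P → Q and P, conclude Q).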


That is true. I meant that if a sentient being is a text predictor, then it is possible to build sentience with a text predictor.


Plenty of laws governing knives...


But so far, no law says a kid needs a licence to use a knife at home.


It will explain anything immediately; correctly, not necessarily. Did you check that the explanation holds water? Seems like it's back to googling anyhow.


Well, not because of empathizing, but because there is a viable mechanism in the human case (the reasoning being: one can only know that oneself has qualia, but since those likely arise in the brain, and other humans have similar brains, they most likely have similar qualia). For more reading, see https://en.wikipedia.org/wiki/Philosophical_zombie and https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

It is important to note that neural networks and brains are very different.


That is what I'd call empathizing, though. You can "put yourself in the other person's shoes" because you expect the other person's experiences to be somewhat similar to yours (thanks to similarly capable brains).

But we have no idea what qualia actually _are_ when seen from the outside; we only know what it feels like to experience them. That, I think, makes it difficult to argue that a "simulation of having qualia" is fundamentally any different from actually having them.


Same with a computer. It can't "actually" see what it "is," but you can attach a webcam and microphone, show it itself, and let it look around the world.

Thus we "are" what we experience, not what we perceive ourselves to "be": what we think of as "the universe" is actually the inside of our actual mind, while what we think of as our physical body is more like a "My Computer" icon with some limited device management.

Note that this existential confusion seems tied to a concept of "being," and mostly goes away when thinking instead in E-Prime: https://en.wikipedia.org/wiki/E-Prime


Absolutely this, well put. Enough people misunderstand AI models that they end up treating them as if they were not software. I guess this validates Clarke's third law (any sufficiently advanced technology is indistinguishable from magic).

