"DeepSeek’s policy states that it stores the information for 'further training' of the chatbot in Chinese servers. While it’s not something to get panicked about (most of the applications follow the same principle, despite not being overly open about it)"
Because of how corporations and the state are tightly fused in China's governance.
> A Leninist system features an authoritarian regime in which the ruling elite monopolizes political power in the name of a revolutionary ideology through a highly articulated party structure that parallels, penetrates, and dominates the state at all levels and extends to workplaces, residential areas, and local institutions.
Yes. These are not comparable political systems. In the US, the information you share can be accessed by law enforcement with the approval of a judge if a crime is suspected. But in cases where the government improperly accesses your data, they actually destroy their own case against you, because anything derived from that "fruit of the poisonous tree" can be thrown out in court. Even when governmental power is abused in the US, it is nothing like the routine surveillance and suppression that chills free thought and speech in a totalitarian dictatorship like China.
I'm sorry, but your idea of how the US works is a complete fairytale. You need to get a serious reality check on how the US actually works in real life. The law in the US is applied selectively (depending on the profiles involved, severity of case, political backdrop, etc). There's plenty of corruption, misaligned incentives, and corporate meddling. I can't count the number of cases from the past 30+ years that demonstrate this.
Also weird how people pretend Snowden wasn't just trying to draw equivalence between the US and the dictatorship where he currently resides, on behalf of said dictatorship.
> "CCP did not invade Iraq, Libya, Afghanistan, bomb Syria or support the Palestinian Genocide."
1. There has been no genocide in Palestine.
2. The CCP meddles in other countries to equal if not worse degrees - both militarily and politically/economically. It routinely imprisons and erases millions of its own citizens, works to annex territories that aren't part of China (today), and funds and arms Russia, Iran, Syria...
You seem like the kind of person who selectively applies their morals, depending on whether the story aligns with their agenda.
"There's an obvious problem with the concept of training on user prompts; how would training on a bunch of questions cause it to know the answers?"
I imagine by analysing the chat?
If the user says thanks at the end, or gives a thumbs up, it was likely a useful and correct answer that could be included in further training, or at least considered for it. I can't imagine them not considering and experimenting with that.
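A minimal sketch of that kind of feedback filter (all field names and marker strings here are hypothetical; a real pipeline would be far more involved, with deduplication, PII scrubbing, and so on):

```python
# Hypothetical sketch: keep only conversations whose final user turn
# signals satisfaction, as a cheap proxy for "the answer was correct".
THANKS_MARKERS = ("thanks", "thank you", "perfect", "that worked")

def looks_successful(conversation):
    """conversation: list of {'role': 'user'|'assistant', 'text': str,
    'thumbs_up': bool} turns (a made-up schema for illustration)."""
    last_user = next((t for t in reversed(conversation) if t["role"] == "user"), None)
    if last_user is None:
        return False
    text = last_user["text"].lower()
    return last_user.get("thumbs_up", False) or any(m in text for m in THANKS_MARKERS)

def select_for_training(conversations):
    # Candidates only; this heuristic is noisy and easy to fool.
    return [c for c in conversations if looks_successful(c)]
```

Obviously "the user said thanks" is a weak label, which is partly why this data would be a candidate pool rather than ground truth.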
User queries were, at least historically, useful for training smaller models from larger ones. You need to know the kinds of questions real people ask to train a model that's good at answering those questions.
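That distillation loop can be sketched roughly like this (the `teacher_generate` callable is a placeholder, not any vendor's actual API):

```python
# Hypothetical sketch of query-based distillation: replay real user
# questions through a large "teacher" model and collect (question, answer)
# pairs to fine-tune a smaller "student" model on.
def distill_dataset(user_queries, teacher_generate):
    """teacher_generate: callable str -> str (stand-in for a real model call)."""
    pairs = []
    for query in user_queries:
        answer = teacher_generate(query)
        pairs.append({"prompt": query, "completion": answer})
    return pairs  # the student would then be fine-tuned on these pairs
```

The point of the comment stands either way: the queries themselves are the scarce resource, because they capture the real distribution of what people ask.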
Back when I started using LLMs for writing code I would type out long, gently phrased explanations about why it was wrong, as if I was teaching a pupil, hoping it would help. I'm sure a lot of us did. If they can parse and mine those prompts, they'll have a nice little metacorpus to build on.
Now I just tell it to stop being stupid over and over until it does a good job. I wonder if it would improve the model to keep all of the beratement in the training data.
Edit: Apparently a 'metacorpus' is a swollen nematode ass. My sincerest apologies, bros.
The bigger question is what ELSE are Anthropic/OpenAI/et al. doing with your data? Training is just one of many ways to exploit users’ data. Some of the other possibilities are truly chilling.
IMO the more interesting question is why low-quality stuff like this keeps getting upvoted here. Feels like any submission that has AI in it automatically gets to the front page no matter the quality. Sad state of HN. I just can't imagine that people actually read this stuff and then decide to upvote because they found it useful. It's probably upvoted by people/bots who only read the title.
The whole reason I come to HN in the first place is to filter out BS clickbait articles exactly like this one, not to have them fill the front page.
It's certainly AI-generated garbage. But it seems to have slipped from first place to 20th in the time it took to read your comment. If it was ranked up by, say, 50 fake bot accounts, they mistimed the velocity.
I'm certainly not an extreme HN old timer, but I've been visiting for a fair number of years and I've seen this sort of complaint since I started, while article quality doesn't seem to have gone down noticeably. In fact, the site rules even caution against complaining that HN is "becoming Reddit", which is essentially the old version of this comment. The fact is that, even here, there will always be a few poor quality articles that slip through.
BTW, pointing out that a particular article is poor, like qeternity's comment, is worthwhile. It's just comments that complain all of HN is going downhill that are tiresome.
Article quality has IMO gone down considerably in the last 2-3 years ever since LLMs became a thing. Probably not because LLM articles are upvoted by humans, but more likely because it's much easier to create and manage realistic bot fake accounts with LLMs.
We're at a point where it's impossible to tell which users are bots and which are human by looking at their comments.
I’m an old-timer so I’ve seen multiple cycles of the front page being dominated by a PR blitz. Sometimes it’s startup/money-driven (e.g. mobile applications via smartphone adoption), sometimes it’s a community that organizes elsewhere to promote something to the HN readership in a disciplined way (e.g. Rust), sometimes it’s both (e.g. crypto).
What feels different about this one is that it seems very “top down”, it has the flavor of almost lossless transmission of PR/fundraise diktat from VC/frontier vendor exec/institutional NVIDIA-long fund to militant AGI-next-year-ism at the workaday HN commenter level.
Maybe the powers that be genuinely know something the rest of us don’t, maybe they’re just pot committed (consistent with public evidence), I’m not sure. It’s been kind of a while since the GPT3 -> GPT4 discontinuous event that looked like the first sample from an exponential capability curve. Since then it’s been like, it can use a mouse now. Well, it can kinda use a mouse now. Hey that sounds a lot like the robot in Her.
But whatever the reason, this one is for all the marbles.
How about you go study first, instead of just trying to crank out AI generated slop. Absolutely no point in helping you correct your articles when they should instead be left obviously bad, so that they can be flagged.
How could such systems guard against users who purposely reject correct answers and then, within the chat, demand "corrections" in misleading ways, only ending the chat once the answer is obviously wrong? Couldn't instructions to run one such campaign against a competitor, a DoS not in volume but in maliciously constructed conversations, eventually lead to an overall degradation of quality across all of them?
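One naive defense against that pattern (purely illustrative, and easy to evade) would be to down-weight conversations showing repeated user rejections that end abruptly on an assistant turn, i.e. the user left right after forcing a "correction". All thresholds and marker strings below are made up:

```python
# Hypothetical poisoning heuristic: many user rejections plus a chat that
# ends on an assistant turn is a weak signal of a steered conversation.
REJECTION_MARKERS = ("wrong", "incorrect", "no, actually", "that's not right")

def rejection_count(conversation):
    return sum(
        any(m in turn["text"].lower() for m in REJECTION_MARKERS)
        for turn in conversation
        if turn["role"] == "user"
    )

def looks_adversarial(conversation, max_rejections=2):
    ends_on_assistant = bool(conversation) and conversation[-1]["role"] == "assistant"
    return rejection_count(conversation) > max_rejections and ends_on_assistant
```

A determined attacker would just phrase rejections politely, which is the commenter's point: any cheap filter shifts the problem rather than solving it.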
https://www.lesswrong.com/posts/a9GR7m4nyBsqjjL8d/deepseek-r...
https://newsletter.languagemodels.co/p/the-illustrated-deeps...