What stops adults from giving children drugs and alcohol?
You put severe penalties on the crime, then you catch people doing the crime. Offer a reward for catching people, and I'm sure a few kids will hand people in for the reward. They'll be able to prove they got a token from someone (as they'll have it), and then we investigate.
Can you explain why you think the online world should be so different to the physical world, which is full of places where I need to use an ID to get age-limited items? I really don't feel monsters were made by stopping children buying porn and alcohol. Do you think those age restrictions should be removed too?
Particularly as more of society moves from physical to virtual.
If bouncers copied my ID, my home address, and a bunch of private data every time I went to a bar, I'd never go out.
This whole premise is absurd. There is tons of research and empirical and historical evidence that living in a surveillance state stifles free expression and thus narrows the richness of human creation and experimentation.
How old are you that you think constant surveillance is any kind of way to live? It's a thin gruel of a life.
This seems like such a lost cause to carry on about.
The fact that the post originates from what appears to be a furry-aligned individual is probably not going to help get a majority of people to be sympathetic.
There appears to be no formidable organized resistance against the recent decades of surveillance boom.
With tech and many tech employees actively accelerating surveillance.
Horrible? Yes. And extremely unlikely to be rolled back anytime soon.
(Disagree? I'd love to believe you are right!)
Lost causes are worth fighting for and keeping in the public eye. In the history of ideas, many causes that were written off or considered impossible to overcome ended up swinging back hard the other way, as long as they stayed ripe (the framework preserved and still championed) and there was an inciting incident that swung public sentiment.
Defeatist attitudes and throwing in the towel almost never make sense. Engaging at a lower level of time commitment is sensible for some, but meta-commentary about the cause being hopeless is one of the worst types of self-defeating comments a person can make, especially from someone who isn't in the opposition. Whose time are you trying to optimize with the comment? To what point and purpose are you saying this, except to further deflate sails on an already still day?
The zeitgeist isn't purely rational or stable, and change is often non-linear. I've seen small subcultures with "impossible" headwinds completely own their space within my lifetime. We're just at the heel turn now, and it's not universally popular; many people don't speak up because they are just getting VPNs or moving to other forms of non-violent non-compliance.
I suspect a lot of doomposting online is someone writing down their negative self talk hoping some stranger will finally provide a convincing argument that they can use to fight their own feelings on the matter. It's like... involuntary group therapy?
You keep making this comparison, but it's not appropriate. The closest real-world analogy: in order to buy alcohol, you need to wear a tracking bracelet at all times and be identified at every store you enter, even if you choose to purchase nothing. If our automated systems can't identify you with certainty, you'll be limited to doing only the things a child could do.
And the real world has a huge gap between a child and an adult. If an 8-year-old walked into Home Depot and bought a circular saw, there's no law against it, but the store might have questions. If a 14-year-old did it, you might get a different result. At 17, they'd almost certainly let the sale go through.
The real world has people that are observing things and using judgement. Submitting to automated age checks online is not that.
It's appropriate (to me) as a limit society has decided it wants, and we should consider whether there is a reason similar limits should, or should not, apply to the internet. The whole article we are discussing is about how that could be implemented in a much more privacy-safe way.
But my point is that it won't be. The laws are getting passed, and there is no privacy preservation, there are no ZKPs, there's nothing except "submit your ID". You keep holding out for good faith, but the folks making the rules aren't acting in good faith. I very much appreciate the discussion here, but I think we're coming into the discussion with a different set of priors, so even if our values match, we might not agree.
Just to emphasize the point, the EU's age verification laws are actively preventing Android users from utilizing third party app stores because the implementation is tied to Google Play integrity services.
> Can you explain why you think the online world should be so different to the physical world
When you show a bartender your ID to buy a beer, they generally don’t photocopy it and store it along with your picture next to an itemized list of every beer you’ve ever drunk.
> online world should be so different to the physical world
If you take a step back, they are _very_ different, in myriad ways. But to answer your question very concretely: because we're turning the web into a "Papers, Please" scenario, and the analogy with "I'm 12 but I can't walk into this smoke shop" doesn't hold. I shared a story on HN that didn't take off about how Google is now scanning _all_ YouTube accounts with AI, and if their AI thinks you're underage, your only recourse after they "kid-limit" your account is to submit a government-issued ID to get it back.
This has nothing to do with buying cigarettes and alcohol. This is about identifying everyone online (which advertisers would be thrilled about), and censoring speech. In short, the mechanisms being used online are significantly more intrusive than anything in the real world.
I'm happy to accept that they are very different, and I agree the current systems (in the UK in particular) are awful.
However, I think tech people risk losing this battle by saying (it seems to me, and in the post I originally replied to) "any attempt at any age checking on the internet is basically 1984", rather than "we need some way of checking some things, keeping people's privacy safe, this current system is awful."
Of course, if some people believe the internet should be 100% uncensored, no restrictions, they can have that viewpoint. But honestly, I don't agree.
I'm a huge proponent of legislation that requires sites to send a header indicating that the response contains adult content. I'm also a huge proponent of basic endpoint security that allows a parent to put the device into a mode that checks for those headers and blocks the response.
This doesn't require any of the draconian 1984 measures that folks are insisting upon. The problem is that there is no real incentive to implement true age verification in this manner (this is why nobody has deployed ZKPs), but rather to identify everyone. It would be easy to imagine an onboarding scenario during device setup that asks:
1. Will this device be assigned to a child?
2. Supply the age so we can track when they cross over 18
3. Automatically reject responses with the adult header and lock down this setting
But Google and Apple won't do that, because they don't care, and the politicians won't bake it into their laws, because they don't care either: their goal is to alter culture, and protecting children is just an excuse.
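For what it's worth, the client-side check such a header implies would be tiny. Here's a minimal sketch, assuming a hypothetical "Adult-Content: true" response header (no such header is standardized, and the function name is mine) and the reqwest crate with its blocking feature enabled:

    use reqwest::blocking::Client;

    // Returns true if the (hypothetical) adult-content header is present and
    // set to "true", meaning a child-mode device should block the response.
    fn blocked_in_child_mode(url: &str) -> Result<bool, reqwest::Error> {
        let resp = Client::new().head(url).send()?;
        Ok(resp
            .headers()
            .get("adult-content")          // hypothetical header name
            .and_then(|v| v.to_str().ok())
            .map(|v| v.eq_ignore_ascii_case("true"))
            .unwrap_or(false))
    }

    fn main() {
        match blocked_in_child_mode("https://example.com/") {
            Ok(true) => println!("blocked by child-mode policy"),
            Ok(false) => println!("allowed"),
            Err(e) => eprintln!("request failed: {e}"),
        }
    }

In practice the enforcement would live in the OS or network stack rather than a single program, but the logic really is this simple.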
The issue is, it's not feasible to enforce these sorts of bans because the internet is too vast. Yes, you can stop people from visiting PH or any of the big sites, but for every big porn site there will be thousands of fly-by-night ones looking to make a buck. Age verification laws create a market for such sites, which can be run out of jurisdictions the law can't control.
So next, we better make the devices age gate their users with attestation and destroy people's ability to use open operating systems on the web. Maybe for good measure we tell ISPs to block any traffic to foreign sites unless the OS reports that attestation.
But people are using VPNs to bounce traffic to other countries anyway, so now we need to ban those. But people still send each other porn over encrypted channels so we need to make sure encrypted platforms implement our backdoor so we can read it all, on top of on-device scanning which further edges out any open source players left in the game.
If the author reads this: maybe for that first macro, add an implementation without your package? I don't really know how hard it is to write "straight".
Why do you think Zig won't have the same issue? In my experience, a lack of packages seems to come from there being no standard place to put them (as in C++, where you end up with de facto package stores like Boost), or from the language just not being popular enough.
Most languages I work in (Rust, Python, JavaScript, Haskell) have a huge number of packages.
I think that the "package explosion problem" actually does exist in C/C++ and Go but it's just hidden. Very often a project takes the form of various components, and you only depend on a subset of them. In these older-style languages these are all shipped as a single unit, but in newer languages these are shipped more explicitly so you can see what the shape of your dependency tree really looks like.
Boost is a great example of this, since it does a ton of different things, but the boundaries between components are not quite as obvious as having a "dependencies.lock" to look at. Tokio has a ton of different packages but often you only need a few of them.
Also often dependencies can be hidden by depending on a single system library, but that then internally contains a ton of stuff. Let's be real about dependencies: https://wiki.alopex.li/LetsBeRealAboutDependencies
The other thing, which is also a double-edged sword, is that "system dependencies" on Linux (generally) have only one version installed at a time and that version is expected to be suitable for all dependents. Distro managers/packagers often put in nontrivial work to make this happen. So you often install something new with few apparent dependencies because the rest are already installed. With dependency locking in Rust etc., you'll often "re-"install the same package many times because the version is slightly different than what you already had.
The main problem (and the reason I have no intention of doing any research involving AI programming) is that even if you could get the research started, completed, and published in 4 months (an incredibly ambitious goal), 95% of the comments would just be "Oh, but you didn't consider FishGPT 5.2, and Pineapple AI, which came out 6 weeks ago, so any negative points are entirely out of date".
As someone who has written and graded a lot of university exams, I'm sure a decent number of students would write the wrong answer to that. A bunch of students would write 5 (adding all the numbers). Others would write "3 apples and 2 cats", which is technically not what I'm looking for (though personally I would give it full marks; some wouldn't).
Many students clearly try to answer exams by pattern matching, and I've seen a lot of exams where students "matched" on a pattern based on one word in a question and did something totally wrong.
Many professionals in lower-skilled jobs sometimes lean too heavily on pattern matching, too.
For example, customer service reps often loosely match your request to a templated response that is only vaguely applicable, or not applicable at all.
Technically savvy customers who try to explain problems in detail are probably more likely to get a genuinely non-applicable canned response, as the CS rep gets frustrated with the amount of information and latches onto the first phrase that maps to a template without really considering context.
My reply’s getting a little tangential now, but I feel this is good life advice, I’ve found I’m more likely to get decent customer service if I keep my requests as short as possible.
The first sentence needs to essentially state the issue I need help with. In some cases a bulleted list of things I've tried helps, and then I'm sure to include essential info like an account number. E.g.:
I’m getting error 13508 when I try to log into my account. I’ve already tried the following solutions with no success:
The next step will be to walk you through clearing your browser cache and cookies.
Because the CS rep has no idea who you are, and your protestations of competency fall on deaf ears because they've dealt with 23325424 people in the last year that claimed to know what they're doing but actually didn't at all.
Their goal is to get through the script, because getting through the script is the only way to be sure that it's all been done the way it needs to be done. And if they don't run through the script, and refer you to the next level of support, and it turns out that you hadn't actually cleared your browser cache and cookies, then that's their fault and they get dinged for it.
I always approach these situations with this understanding; that the quickest way to get my problem solved is to help them work through their script. And every now and then, just occasionally, working through the script has shown up something simple and obvious that I'd totally missed despite my decades of experience.
The robots are even worse than the humans. Recently I got one when I called an ISP that insisted on calling back after restarting all the equipment and waiting 10 minutes. Never mind that the issue was entirely unrelated to the equipment. It had asked for a description of the problem but apparently couldn't actually do anything with that information. After refusing it enough times it simply hung up on me.
Obviously I don't do business with that company anymore.
However, I still think any irrelevant facts would upset a number of exam takers, and claiming it "clearly" wouldn't is far too strong a claim to make without evidence.
When you try to wing your way through a question by pattern matching, you are not applying intelligence. Your interests lie elsewhere, and so you are just fumbling your way through the activity at hand just to get through it.
This is something that the rise of LLMs has highlighted for me. Sometimes, we don't care to apply our intelligence to a problem. I've come to think of myself as "acting like an LLM" when I do this.
It reminds me of Kahneman's "system 1" (fast) and "system 2" (slow) thinking. LLMs are system 1 - fast, intuitive, instinctual. Humans often think that way. But we can also break out system 2 when we choose to, and apply logic, reason, etc.
Other "LLM Like" behaviors: telling corny jokes based on puns, using thought-terminating cliches, freely associating irrelevant cultural references in serious discussion ...
I agree that poor test takers are easily distracted, and this is the reason that "word problems" are heavily emphasized in preparation for tests like the SAT or state proficiency exams.
But in general I do not think these models are claiming at being good at replicating the performance of a distracted or otherwise low performing pupil. I think they should be evaluated against humans who are capable of completing word problems containing context that is not inherently necessary to the math question. The reason those tests I mentioned use these word problems is that it's a way to evaluate someone's ability to think in abstract mathematical terms about everyday situations, which obviously involve lots of unimportant information the person must choose to consider or not.
tl;dr: I think a reasonably competent high school student could answer the apple and cat question, which is absolutely a reasonable bar for an LLM to clear. If university students are failing these questions, then they have not been taught test taking skills, which should be considered a mathematical failure just as unacceptable as that of the LLM, not a mitigating similarity for the latter.
These things are moving so quickly, but I teach a 2nd-year combinatorics course, and about 3 months ago I tried the latest ChatGPT and DeepSeek -- they could answer very standard questions, but were wrong on more advanced questions, often in quite subtle ways. I actually set a piece of homework "marking" ChatGPT, which went well and which students seemed to enjoy!
Luc Julia (one of Siri's main creators) describes a very similar exercise in this interview [0] (it's in French, although the auto-translation isn't too bad).
The gist of it is that he describes an exercise he does with his students, where they ask ChatGPT about Victor Hugo's biography and then proceed to spot the errors ChatGPT made.
The setup is simple, but there are very interesting mechanisms in play. The students get to practice challenging facts, fact-checking, cross-referencing, etc., while also reinforcing the teacher as the reference figure, with the knowledge to take down ChatGPT.
Very clever and approachable, and I've been unintentionally giving myself that exercise for a while now. Who knows how long it will remain viable, though.
Wait, so how do I write mutually recursive functions, say for a parser? Do I have to manually do the recursion myself, and stick everything in one big uber-function?
Zig does allow this; that's what the GP is saying. You don't actually need to relocate your stack: you can just declare a portion of your stack (i.e. what would otherwise be the next N frames) to be the stack you'll use for recursion, and thereafter recurse into that.
Before Rust, I'd reached the personal conclusion that large-scale thread-safe software was almost impossible -- certainly it required the highest levels of software engineering. Multi-process code was a much more reasonable option for mere mortals.
Rust, on the other hand, solves that. There is code you can't write easily in Rust, but just yesterday I took a Rust iterator, changed 'iter()' to 'par_iter()', and given that it compiled, I had high confidence it was going to work (which it did).
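As a concrete illustration (not the actual code from that day, just a minimal sketch assuming the rayon crate is a dependency), the swap looks like this:

    use rayon::prelude::*;

    fn sum_of_squares(input: &[i64]) -> i64 {
        input
            .par_iter()       // was: input.iter()
            .map(|&x| x * x)
            .sum()
    }

    fn main() {
        let data: Vec<i64> = (1..=1_000_000).collect();
        println!("{}", sum_of_squares(&data));
    }

Because the borrow checker has already ruled out shared mutable state inside the closure, the parallel version is safe to run as-is.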