So the important bit here is that the guns failed drop testing. And that's bad.
The rest of the article seems to misunderstand the FMEA-style exercise of "write down every conceivable bad scenario in the universe, how bad it is, and then what you have done to stop it", and spins it as "look at all these horrible known issues they knew about". I hope a jury doesn't view it the same way, because it would be epically bad for safety everywhere if engineers writing down a list of bad things to avoid and mitigate were forbidden by company lawyers.
Well, and then they didn’t recall them - instead favoring the ‘voluntary upgrade’. And apparently even the guns ‘upgraded’ under that program still have this other, even bigger issue.
That figure from GPT-5 seems to be slightly off, according to the Irish Times:
“At least 258 Irish-born soldiers have won the Medal of Honor since its inception. Of those, 148 won them during the civil war – 14 in one day when the Union Navy raided the Confederate port of Mobile, Alabama, in 1864.”
https://web.archive.org/web/20250504103715/https://www.irish...
As almost every other commenter here has said, this is just a bad article in practically every way. It's quite possible that the problem isn't smart phones, but this article completely fails to show this.
Even the suicide data that they decide is the proper measure of mental health, and according to them proves that teens don't have a problem, shows a 2x increase in teen girl suicide.
I'm going to do something I almost never do, and flag, since this is just bait. I would love to read a case for this with a better argument, however.
In order to determine the valuation of companies, Bhatnagar typically applies the following formula: [(Twitter followers x Facebook fans) + (# of employees x 1000)] x (total likes + daily page views) + (monthly burn rate x Google’s stock price)-squared and then doubles it if they’re mobile first or if the CEO has run a business into the ground before.
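For fun, that formula transcribes almost directly into Python. Every parameter name below is my own invention, and the "valuation" is of course satire, not finance:

```python
def bhatnagar_valuation(twitter_followers, facebook_fans, employees,
                        total_likes, daily_page_views,
                        monthly_burn, goog_price,
                        mobile_first=False, ceo_ran_one_into_ground=False):
    # [(Twitter followers x Facebook fans) + (# of employees x 1000)]
    base = twitter_followers * facebook_fans + employees * 1000
    # ... x (total likes + daily page views) + (burn rate x GOOG)-squared
    value = (base * (total_likes + daily_page_views)
             + (monthly_burn * goog_price) ** 2)
    # "then doubles it if they're mobile first or if the CEO has run a
    # business into the ground before"
    if mobile_first or ceo_ran_one_into_ground:
        value *= 2
    return value
```

Note that being mobile first and having a failed CEO is worth exactly as much as either alone, which feels true to the spirit of the original.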
I handle reports for a one million dollar bug bounty program.
AI spam is bad. We've also never had a valid report from an LLM (that we could tell).
People using them will take any explanation of why a bug report is not valid, any question, or any request for clarification, and run it back through the same confused LLM. The second pass generates even deeper nonsense.
It's making even responding with anything but "closed as spam" not worth the time.
I believe that one day there will be great code examining security tools. But people believe in their hearts that that day is today, and that they are riding the backs of fire breathing hack dragons. It's the people that concern me. They cannot tell the difference between truth and garbage.
This has been going on for years before AI - they say we live in a "post-truth society". The generation and non-immediate-rejection of AI slop reports could be another manifestation of post-truth rather than a cause of it.
> I believe that one day there will be great code examining security tools.
As for programming, I think that we will simply continue to have incrementally better tools based on sane and appropriate technologies, as we have had forever.
What I'm sure about is that no such tool can come out of anything based on natural language, because it's simply the worst possible interface to interact with a computer.
People have been trying various iterations of "natural language programming" since programming languages were a thing. Even COBOL was supposed to be more natural than other languages of its era.
This sounds more like an influx of scammers than of security researchers leaning too hard on AI tools. The main problem is the bounty structure. And I don’t think this influx of low-quality reports will go away, or even get less aggressive, as long as there is money to attract the scammers. Perhaps these bug bounty programs need to develop an automatic pass/fail tester for all submitted bug code, to ensure the reporter really found a bug, before the report is submitted to the vendor.
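A minimal sketch of that gating idea, assuming a convention (my invention, not any existing program's) that submitted proofs-of-concept are scripts that exit nonzero when the bug reproduces. A real deployment would sandbox this far more aggressively:

```python
import subprocess
import sys
import tempfile

def triage_poc(poc_source: str, timeout: int = 30) -> bool:
    """Hypothetical first-pass filter for a bounty program: run the
    reporter's proof-of-concept in a subprocess, and only forward the
    report to a human if the PoC actually demonstrated a failure.
    Assumed convention: the PoC exits nonzero when the bug reproduces.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(poc_source)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False  # a hung PoC proves nothing: reject
    # Nonzero exit = "bug reproduced", so the report goes to a human.
    return result.returncode != 0
```

The point is not the mechanics but the incentive: slop that can't demonstrate anything never reaches a human triager.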
It's unfortunately widespread. We don't offer bug bounties, but we still get obviously LLM-generated "security reports" which are just nonsense and waste our time. I think the motivation may be trying to get credit for contributing to open source projects.
Simply charge a fee to submit a report. At 1% of the payout it's perfectly reasonable for low bounties. Maybe progressively scale that percentage down as the bounty goes up. But even for a $50k bounty you know is correct, it's only $500.
No need to make it a percentage; charge $1 and the spammers will stop extremely quickly, since none of their reports are valid.
But I do think established individuals and institutions should have free access; leave a choice between going through an identification process and paying the fee. That's if it's such a big problem that you REALLY need to do something; otherwise just keep marking reports as spam.
That's why they offer cash bounties. You don't need to charge a fee if there is no bounty (i.e. an actual good Samaritan situation), because then there's no incentive to flood it with slop.
Another comment in this overall thread indicated that they still receive LLM slop despite not offering bounties. Clout can be as alluring a drug as money.
Why charge a fee? All you need is a reputation system where low reputation bounty hunters need a reputable person to vouch for them. If it turns out to be false, both take a hit. If true, the voucher gets to be a co-author and a share in the bounty.
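That vouching scheme is easy to sketch. The threshold and the reputation deltas below are my own assumptions, not a description of any existing system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hunter:
    name: str
    reputation: int = 0

class VouchRegistry:
    """Sketch of the vouching idea: low-reputation hunters need a
    reputable person to vouch; a false report dings both parties, and a
    valid one makes the voucher a co-author with a share of the bounty."""
    THRESHOLD = 10  # reputation needed to submit unvouched (assumed)

    def may_submit(self, hunter: Hunter,
                   voucher: Optional[Hunter] = None) -> bool:
        if hunter.reputation >= self.THRESHOLD:
            return True
        # Low-reputation hunters need a reputable voucher.
        return voucher is not None and voucher.reputation >= self.THRESHOLD

    def resolve(self, hunter: Hunter, voucher: Optional[Hunter],
                valid: bool, bounty: int = 0):
        """Returns (hunter_payout, voucher_payout)."""
        if valid:
            hunter.reputation += 1
            if voucher is not None:
                voucher.reputation += 1      # voucher becomes a co-author
                return bounty // 2, bounty // 2
            return bounty, 0
        hunter.reputation -= 2               # false report: both take a hit
        if voucher is not None:
            voucher.reputation -= 2
        return 0, 0
```

The interesting design question is the split on a valid find; a 50/50 share (as here) makes vouching genuinely worth the voucher's risk.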
Gentle reminder that the median salary of a programmer in Japan is 60k USD a year.
$500 is a lot of money (I would not be able to afford it personally).
I suspect $1 would do the job perfectly fine without cutting out normal non-American people.
Could also be made refundable when the bug report is found to be valid. Although of course the problem then becomes some kid somewhere who is into computers and hacking finds something but can’t easily report it because the barrier to entry is too high. I don’t think there is a good solution, unfortunately.
That kid could find a security expert - it’s easy to do - and they could both validate it and post the money. I don’t think it would be hard to find someone with $10k with the right skill set.
Pick someone already rich so the reputational damage from stealing your bounty exceeds the temptation. The repeat speakers list at defcon would be a decent place to start.
The world of AI slop needs a human assertion component. Like: "I'm real and stake a permanent reputation on the claim I'm making." An "I'm actually human" gate.
From the improvement history of tools besides LLMs, I suspect. First we had syntax highlighting, and we were amazed. Now we have fuzzers and sandboxed malware analysis; who knows what the future will bring?
These aren't spies first. They are often children of well-to-do, high-loyalty-group North Koreans. It's just a privileged job.
The skill and IQ level varies widely, from super smart to super unskilled. And these roughly get sorted into different groups with different MOs. North Koreans aren't some uniformly skilled group. You could be targeted by a team of world-class bytecode exploit geniuses who rehearse every move, or by the equivalent of Milton from Office Space.
Dissing Kim is something that is not currently widely permitted in NK. It just isn't worth it personally.
Not saying no one from NK ever will, but so far almost everyone will immediately stop the conversation at this point. There are plenty of crypto people who have monthly or weekly encounters with NK job applicants.
I find this answer highly implausible, not least because maintaining cover doesn't count as dissing ("I infiltrated the org by telling them the lies they wanted to hear" is hacking 101). Also, North Koreans aren't dumb.
I find some people's attitude to NK hackers slightly schizophrenic: either they are a credible threat or they are amateurs. Which one is it?
> Dissing Kim is something that is not currently widely permitted in NK
This wouldn't be "widely", this would be a specific interaction with a hostile foreigner for the purpose of infiltrating them. It's not the same as being allowed to say this to fellow North Koreans.
> Not saying no one from NK never will, but so far almost everyone will immediately stop the conversation at this point.
Legitimate candidates would at this point too, so as a tactic this is useless.
> I find some people's attitude to NK hackers slightly schizophrenic: either they are a credible threat or they are amateurs. Which one is it?
I have no clue whether the proposed approach works, but there's a pretty coherent model that explains how it could, no schizophrenia needed: They are competent people in a cult.
Being unable/unwilling to diss Dear Leader even when it's advantageous to do so is very typical cult stuff. In fact, it's sort of why cults are dangerous. They compel people to do maladaptive things in service of the "ideals" of the group/leader.
This applies not just to the spy directly (perhaps they would personally be unwilling to say such a thing) but also to their entire chain of command. Cults by their nature are not good at passing nuanced instructions like "you can say bad things about Dear Leader under these circumstances." Just because you're willing to diss KJU to get in the door doesn't mean you know your entire chain of superiors is cool with it.
So you're saying NK agents are completely different to, say, Soviet era agents, who could and would say anything as long as it furthered their mission?
Ok, fair enough. In common perception of NK, they do seem bizarre, not like the Soviets during the Cold War.
I think it's unwise to dismiss them as lunatics incapable of deceit. If I were a NK agent, I'd work towards this notion: "NK agents are incapable of lying if it would diss their leader, that's how we get them!" In fact, I would spread this notion on Reddit, like the OP mentioned.
By the way, this still leaves the easy way out of "why are you asking about Kim Jong Un in a job interview, is it because I'm Korean? I'd like to speak to your HR department please".
I'm just guessing but comparing the NK hacker to a late Cold War era Soviet professional spy is the wrong comparison. Maybe the closer comparison is asking a Soviet party member belonging to the professional middle class with a bit of spy training during the Great Purges to talk negatively about Stalin out of the blue.
Yeah I never got the impression that Soviets were as successfully isolated from the world as North Koreans are. But I’m not an expert on the matter!
I mean, I totally agree that this should not be relayed as a working method to identify spies haha. Just that it’s not beyond believability it’d work in some circumstances.
I am saying they are both a credible threat and many are amateurs. Those are not mutually exclusive.
You are talking about North Korean attackers from a theoretical point of view. For many people, dealing with them is just a normal part of work. It's not an unknown that needs to be worked out logically from an armchair.
I'm saying this as someone who personally chatted with a North Korean persona that later tried to drop exploits on people, and the persona belonged to a hacking group with at least one 50-million-dollar heist. I've also seen the screenshots of many chats with North Koreans.
I don't consider screenshots evidence of anything, so I'll completely disregard that bit.
I'm curious about your personal experience though. Did you try this tactic, and did it work? And how sure are you these weren't random hackers or trolls, but actual NK agents?
> many are amateurs
So basically this would only get rid of the amateurs, low hanging fruit that would have been caught soon enough anyway, and do a "natural selection" of only the non-stupid NK hackers to infiltrate your org?
> And how sure are you these weren't random hackers or trolls, but actual NK agents?
"Agents" is way too big of a word. Just cogs in a corporate theft machine.
There's a lot of reasons I'm sure, but the biggest is that, before a hack, they asked for help doing something simple with a crypto address that was later used to test-run the 50-million-dollar theft attributed to North Korea. Trying to drop North Korean-linked malware is another data point.
This also hits my point about being both dangerous and amateurs. They pulled off a pretty sophisticated heist, but they had to ask for help, asked for that help using a crypto address tied to the theft, and blew the cover on an identity they had been building up for a year.
Here's a Twitter thread I put together of both my conversation and others' with this particular account: