
The "critique" is nuts. Surely AI generated. If I didn't trust the domain, I'd assume the author lacked all credibility for seriously referencing something like this.

Look at the critique [0] and then look at the code [1].

[0] https://web.archive.org/web/20250423135719/https://github.co...

[1] https://github.com/ricci/async-ip-rotator/blob/master/src/as...


Yea, clearly AI, with the keyword bolding, numbered arguments, and so on. It feels like a lot of AI-produced content follows this structured response pattern.


It uses a simple, purpose-focused template of the kind commonly recommended for clear communication, uses outline numbering, and highlights keywords with monospaced text, as is common practice in technical writing. None of that is unusual for a human to do, especially when writing something they know will be highly visible.

Modestly competent presentation is now getting portrayed as an "AI tell".


The format doesn’t itself indicate AI, but when combined with the fact that the critique is mostly nonsense it does appear to strongly suggest it.


It has excellent presentation and excess verbosity, and it is wholly nonsensical. Read the code: it uses generous whitespace (function calls/declarations with one parameter per line, and so on), so it's probably only about 100 lines of "real" code, mostly tight functions. The presentation/objections make no sense whatsoever.

I was able to generate extremely comparable output from ChatGPT by telling it to create a hyper-negative review, engage in endless hyperbole, and focus on danger, threats, and the obvious inexperience of the person who wrote it. Such is the nature of LLMs that it'd happily produce the same sort of nonsense for even the cleanest and tightest code ever written. I'll just quote its conclusion because LLM verbosity is... verbose.

---

Conclusion: This code is a ticking time bomb of security vulnerabilities, AWS billing horrors, concurrency demons, and maintenance black holes. It would fail any professional code review:

Security: Fails OWASP Top 10, opens SSRF, IP spoofing, credential leakage

Reliability: Race conditions, silent failures, unbounded threading

Maintainability: Spaghetti architecture, no documentation, magic literals

Recommendation: Reject outright. Demolish and rewrite from scratch with proper layering, input validation, secure defaults, IAM roles, structured logging, and robust error handling.

---

Oooo sick burn. /eyeroll


> I was able to generate extremely comparable output from ChatGPT by telling it

Just to check, you know that ChatGPT is fully built on human writing right?

Wouldn't it be ironic if I claimed "what you write looks like what the tool can output, so you used the tool" when the tool was built to output stuff that looks like what you write?

Fun fact: anything you or I write looks like ChatGPT too. It would only be surprising if people hadn't spent billions and stolen truckloads of scraped, unlicensed content, including content created by you and me, to get the tool to do literally just this.


I’m not arguing that it’s unusual for humans to write in this manner, but when you use something like ChatGPT with some frequency and see that as a common response template, it’s an obvious pattern.


People say emdashes are a signal that something's from chatgpt also — yet people forget that the cliches or patterns of LLMs are learned from real-world patterns. What is common in something like ChatGPT has a good chance to also be common outside of it, and _lots_ of false positives (and false negatives) are bound to creep up frequently when trying to do any sort of pattern-based "detection" here.


I’ve never encountered em dashes in emails from my colleagues before ChatGPT was available, and now when there are em dashes, it’s obvious the content is at least in part AI generated. Same with semicolons. Yes, proper grammar and syntax use semicolons, but in most casual business communication those rules are relaxed for simplicity.


Yes, em dashes are inserted automatically by iOS when a user inputs a double dash: —


>Modestly competent presentation is now getting portrayed as an "AI tell".

This. Someone on a reddit gamedev sub the other day was showing how his game got review bombed because his own store description used polished prose and bulleted lists. It seems like any time a bulleted list is used now, people assume it's because of AI.


I'm relatively confident this critique is AI-powered. The dead giveaways:

1. Verbosity. Developers are busy people, and security-researcher devs are busier still. Someone that skilled wouldn't spend more than 2-3 sentences' worth of time critiquing this repo.

2. Hostility. Writing bug free code is hard, even impossible for most. Unless your name is Linus Torvalds, Richard Hipp, or maybe Dan Abramov, most devs are not comfortable throwing stones while knowing they live in glass houses.

3. Ownership. "Killshot" comments like this are only ever written by frustrated gatekeepers against weak PRs that would hurt "their baby". Nobody would get emotionally invested in other people's random utility projects. This is just a single python file here without much other context.

4. Author. The author is still an aspiring developer. See their starred repo highlighting adherence to SOLID/DRY principles as a primary feature of their project. Not something you'd expect to see from a seasoned security researcher. https://github.com/SSD1805/EchoFlow

5. Content. The critique is... wrong. It says the single-file utility repo is "awful" for being a "less maintainable" monolith. Hilariously, it faults the code for lacking dependency injection, which it doesn't need. This was a top critique in the comment!

--

Regardless of political persuasion, I hope this trend of using AI to cyberbully people you don't like goes away.


I hope this trend of DOGE using the US Government to cyberbully people they don't like goes away.


Once you've read enough ChatGPT slop, you know it when you see it:

- Massive verbosity.

- Flawless spelling and grammar.

- Grandiose tone.

- Robotic cadence where every paragraph and sentence has similar length (particularly obvious in longer text.)

- Em dashes everywhere.

- The same few stock phrases or sentence structures used over and over - e.g. "This isn't X—it's Y", which that issue uses twice in two paragraphs:

    There is nothing "hardcore" about writing fragile, insecure, and unscalable code. This isn’t pushing boundaries—it’s demonstrating a lack of engineering fundamentals.

    If this is what was learned at previous jobs, then it’s time to unlearn it and start following best practices. Because right now, this is not just bad engineering—it’s reckless.

If AI didn't write that snippet then I'll permanently retire from internet commenting.

(None of what I just wrote is intended as a defence of DOGE.)


These are all good points, and I agree. The em dash I've noticed a lot. One addition: overuse of adjectives like "robust," or whatever came out as the third option in a thesaurus.

As someone who has been using regular dashes and words like "robust" for years, I've had to purposefully dumb down things like my resume/CV and internet comments. Like many of us here, I'm coming from a generation that actually had to write 100% of the research paper instead of an AI generating it for me. So I always took great care to aim for something close to perfection in writing.


Point 2 makes me think you haven't read what developers write on the internet, particularly in flame wars, and particularly when they have beef with whoever they're arguing with.

Verbose hostility of that kind, stone-throwing, even nitpicking with exaggerated outrage, are not exceptions. And lack of experience has never stopped people from feeling and behaving like God's gift to the programming profession.


Apropos of number 2, I think this restraint is only a feature of seasoned developers who have managed to outgrow their own high opinions of themselves. I've met plenty of younger devs who would totally write something like this, taking down the work of someone whose style did not align exactly with what they considered "good".


I agree on all counts. The readme of the repo you link also smacks of an AI generated summary of the codebase. (Frankly, I don’t think the AI was able to understand what the code in that repo does, which is my guess as to why it talked much about form rather than function.)


> Developers are busy people and security researcher devs are busy even moreso.

Neither the critique, the critiquer's profile, nor even the Krebs article says that the critiquer is a security researcher, and it definitely isn't the case that all devs are particularly "busy people". In fact, you yourself argue later that the signs point to the author not being an experienced dev or security researcher, so it is nonsense (even beyond assuming an average rules out exceptions within the group) to argue that the critique is AI-written on the assumption that a security researcher would normally be too busy to write it.

> Hostility. Writing bug free code is hard, even impossible for most. Unless your name is Linus Torvalds, Richard Hipp, or maybe Dan Abramov, most devs are not comfortable throwing stones while knowing they live in glass houses.

If you've been online more than about 5 minutes, you know that there is no shortage of hostility, and that even if the hostile aren't the majority of any given community, they're a highly visible subset of it.

> "Killshot" comments like this are only ever written by frustrated gatekeepers against weak PRs that would hurt "their baby". Nobody would get emotionally invested in other people's random utility projects.

The only reason we are talking about this on HN is that this isn't some random "other people's random utility project". The critique was posted while the author of the code being critiqued was a high profile figure in current news stories, and the critiquer posted a more explicitly political followup the day after the original critique addressing the author's highly-publicized resignation due to the news coverage.

> The author is still an aspiring developer. See their starred repo highlighting adherence to SOLID/DRY principles as a primary feature of their project.

That...doesn't support the critique being AI. In fact, it undercuts the claim, because it provides a simpler explanation than AI for your next bullet point, that the critique is wrong (the SOLID/DRY focus, combined with the "aspiring dev" status you describe, is particularly consistent with the specific things you say the critique gets wrong). It also undercuts your first bullet point, as already discussed, which hinges on the assumption that the critique was written by a very busy, experienced security researcher and not an aspiring dev.

I mean, if excess verbosity, a more regularized format than is typical for the venue, and being wrong together are hallmarks of an AI written critique, then I'd say your post is at least as much AI-suspicious as the critique under discussion.


Lol that's so funny. Can't imagine writing that. (the critique, not the code).


"Where are the examples" is a straw man. Imagine the ways a political enemy might exploit limitless access to the attention of 140M Americans. The calculus seems to be that a false negative will be much more catastrophic than a false positive.


I understand what you're saying, but I don't think that argument should apply here. Having some kind of evidence to back up a drastic action like this shouldn't need to be argued for; it should be a given. I've asked at least 5 different times for people to point to anything material, and no one has come up with anything. I'm not saying there is no threat; I could be wrong and there could be a massive one. But if there is, shouldn't we be able to point to something more than "it could happen" and paranoia about it? I'm being asked to have faith in institutions/politicians that have a long, long, long proven track record of not having my best interests at heart, and I can't accept that when they have clear conflicting interests/motives.


> Just trying to shoehorn alexa into as many domains as possible

It happened outside of Alexa too. Every team with a public-facing product was directed (it seemed) to come up with some sort of Alexa integration. It was usually dreamed up and either a) never prioritized or b) half-assed, because nobody (devs, PMs, etc.) actually thought it made any sense.


Someone should honestly script this. Assuming this is not already that.


Not a script but if you're reading on phone with the Harmonic app, there's a "View on archive.org" button for every post. It works pretty well for me.


just treat archive.ph as a 2nd level browser.

if url doesn't work in the regular browser -- copy url into that.

maybe add that as a feature request for Brave.


I just checked the feature requests for the iOS client I’m using and this has been requested [1] …three years ago.

[1] https://github.com/dangwu/Octal/issues/228


Brave search has been so terrible for me. I’ve very quickly been conditioned to append “!g” to all omnibar searches, even in non-brave browsers! (This tells brave to use Google)


Those photos are quite supportive of OP. There's much more realistic diversity in the units: presence of light, color of light, objects blocking the light.

The uniformity in the NK photo is bizarre.


1.1k open issues. OOF


1.1k isn't bad for a project with ~33 million weekly downloads[1], imo. Yes, I know that's not necessarily a good metric, but it's ~10 million more than React[2] which also has a similar number of open issues[3].

[1]: https://www.npmjs.com/package/prettier

[2]: https://www.npmjs.com/package/react

[3]: https://github.com/facebook/react


A code formatter has no business having 1100 open issues (5k closed). It is not rocket science.

In my experience, the number of open issues correlates not only with popularity but with how crappy the language is. JavaScript projects, with their myriad dependencies and their tendency to attract junior, inexperienced devs, tend to accumulate a great number of bugs.

For reference, curl has 24 open issues (4k closed), it is a couple orders of magnitude more complex AND more used than prettier.


I don't know enough about prettier. But in general linters (which have overlap with formatters but aren't the same) have a lot of issues that fall in the "this is not my preference. It must therefore be changed" category.

"Gofmt's style is no one's favorite, yet gofmt is everyone's favorite." I guess.


Is that because they don’t use a bot to auto-close issues like a lot of other projects?


That annoys the crap out of me. Closing stale issues doesn’t make the issues go away, it just means that edge cases aren’t addressed. If I have an issue and find myself in a stale-closed issue, I’m not even going to bother reporting it. I’m either going to look for a different library altogether, one that actually tries to solve edge cases; or I’m going to create my own library as a big middle-finger to the project. At work, I’ll just open a new issue, which will probably just be ignored.


Yeah, I stopped reporting issues to projects once I'd seen tons of stale-bot-closed issues. Those were real bugs, still in the codebase; no one had fixed them within some arbitrarily short window of time.


And arguably we’re where we are because people have this idea that issue counts are comparable between projects.

I see this way of thinking around CVEs too. I think it’s a mistake of making data-driven decisions based on noise rather than signals. It sounds good when you have a comparative number to go on.


I see open issues as a pretty good signal that people are using the software and care about its development, but maybe the number of contributors is too small. In these cases, I might even open the PR myself. Issues without replies, though, are probably the worst signal you can send as a maintainer. Closing issues without replying, or letting a stale bot passive-aggressively close them, is probably tied for worst, but it's usually hidden away.


Off topic, but every once in a while I'm made aware of the impact prettier has had on my typing. It's so hard to write code without a tool like prettier, because formatting-related keystrokes have largely been removed from my muscle memory. You basically end up writing a sort of shorthand.


Yeah, it’s true. It's not simply that you don’t have to manually format things a certain way; you simply don’t format things at all. You can avoid writing lots of spaces, newlines, semicolons, etc. It’s absolute garbage, then you save it, then it’s fine.


And autoformatters have cemented my preference for non-whitespace-sensitive languages. As you say, just write whatever and let the tool format it. When I then switch to our Python backend, this strategy no longer works. It can fix some things, but it needs a much cleaner starting state to do so. For instance, if my loops are indented wrong, black can't fix that, since the indentation itself carries semantic meaning.


Something of a tangent: with Automatic Semicolon Insertion (ASI), JS is a white-space influenced language. Some of prettier's defaults are directly related to ASI, such as the way it often wraps things in extra ("unnecessary") parentheses, especially JSX but also anything complicated and multi-line especially after keywords like `return`. (`return` is the trickiest under ASI, so prettier's defaults seem that conservative in large part because of `return`.)

As someone who appreciates `{ "semi": false }` in my prettierrc, I take a lot of advantage of JS' ASI and find prettier's behavior interesting and conservative, but useful.
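To make the `return` hazard concrete, here's a minimal sketch (plain JavaScript, runnable in Node; the function names are just for illustration):

```javascript
// With a newline right after `return`, ASI inserts a semicolon,
// so the object literal below is parsed as a block statement and
// never returned.
function broken() {
  return
  { ok: true }; // unreachable as a return value
}

// Wrapping the value in parentheses, as prettier does for multi-line
// returns, keeps it attached to the `return`.
function fixed() {
  return (
    { ok: true }
  );
}

console.log(broken()); // undefined
console.log(fixed()); // { ok: true }
```

This is the kind of silent semantic change that makes conservative parenthesization after `return` worthwhile.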


A team I'm doing some work with uses eslint and has not configured prettier, so instead of simply having everything get formatted correctly, I get red squiggly lines under blocks of code because of an omitted meaningless whitespace character.

There are a few linting rules that can help identify semantic errors or dead code, but only a small number of rules are needed to get all of the benefits.

Autoformatting (prettier, gofmt, etc.) is the way to go.


You can mellow the squiggles for stylistic errors in VS Code.

Take a look at the “eslint.rules.customizations” key in this file:

https://github.com/antfu/eslint-config/blob/main/.vscode/set...
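For example (a sketch based on that linked config; the exact rule patterns are illustrative and depend on which plugins you use), a `.vscode/settings.json` along these lines silences stylistic rules in the editor while leaving them auto-fixable on save:

```json
{
  "eslint.rules.customizations": [
    { "rule": "*-indent", "severity": "off" },
    { "rule": "*-spacing", "severity": "off" },
    { "rule": "semi", "severity": "off" }
  ]
}
```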


Thank you! I had been wondering if that was possible. Do you know if it can be done only for a single project / directory?


You can set workspace settings in the top-level .vscode/settings.json file of your project: https://code.visualstudio.com/docs/getstarted/settings#_work...


Many (most?) red squiggles are autofixable with eslint. I use eslint exactly like prettier, in that I never think about formatting and everything gets fixed/formatted on save.


Many are, but some of the "errors" are not helpful while code is in progress. Why do I need to see them while typing if they can be easily auto-fixed later? It's just more useless visual clutter to worry about.


I'm amazed people are using editors that require manually tweaking indentation. I mean, apart from making sure the code is in the right block.


In languages which don't use indentation semantically, I always end up catching a bug, sooner or later, by running code through a formatter. It's usually in a complex tree of nested if statements, some code I thought was part of one group gets moved to be in another group, making it obvious where the mistake was.

Python can't do that. It's one of a few things about the language which is "nice" when using it for simple tasks, but which makes more complex programming pointlessly error-prone.
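A small JavaScript sketch of that failure mode: the misleadingly indented line looks conditional, and a formatter re-indenting it is what exposes the bug (the names here are illustrative):

```javascript
const log = [];
const conn = {
  flush: () => log.push("flush"),
  close: () => log.push("close"),
};

// `conn.close()` is indented as if it were inside the `if`, but
// without braces only `conn.flush()` is conditional.
function handle(isDone) {
  if (isDone)
    conn.flush();
    conn.close(); // always runs; a formatter would pull it left
}

handle(false);
console.log(log); // [ 'close' ]
```

Run through a formatter, `conn.close()` snaps to the outer indentation level, making the mistake visually obvious. In Python, that same indentation *is* the grouping, so there's nothing for a formatter to contradict.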


Conversely, my strategy is "press shift+tab to close the block and not have to screw around with matching {} everywhere".


But that leaves you having to manually adjust entire blocks, when indentation levels change. With {}, prettier can readjust everything properly for you, even after mutilating changes such as copying an entire block from a different file into a new one, and at a different indentation level.


Python formatters do that too, they don't care about the indentation, as long as it's consistent. All I need to do is indent the block far enough to the right that it's not more left than the previous one, and that's it.

This also happens rarely enough that it's never been an issue for me.


I find selecting a block of code and hitting tab/shift+tab much simpler and faster than hunting down matching parenthesis, especially when adjusting longer sections or complex situations.

Of course this is mostly a tooling issue, since there is a shortcut for adjusting indentation but not for adjusting parenthesis. Or maybe there is and I just haven't discovered it yet.


Perhaps it's because JetBrains makes better structural editing than VS Code, but I practically never have to type closing brackets.

Copying, moving, reordering, newlines, pretty much anything will keep correct pairing of brackets.


Sure, and I never have to type any opening indentation either, only closing.

Regardless, to me, this is such a minor issue that I don't consider it at all when choosing a language.


This + Copilot.

I code in an entirely different way now, I would of thought it impossible to change my ways (about 20 years of coding)


Yes I just started dabbling with copilot and I feel much the same as when I began really leveraging prettier.

I’m similarly surprised to see such an ingrained skill undergo such rapid change. And it’s funny because for me prettier was originally just meant to fix the chaos that is every dev having their own style/editor preferences.


have*


But I am quite happy with it: I don’t have to waste my time typing, and I don’t have to waste time improving my raw typing speed.

I do solid touch typing at 60 WPM writing plain text. With code completion and linting, writing code easily goes twice as fast.


That's cool. I'm unironically interested, how do you manage to think at such speed?


I'm ~120WPM and a vim user, and yeah the main thing is how little time I actually spend typing. It helps me keep my focus on what I'm doing.

I think other tools (like the ones people are mentioning) do stuff like this for people: autocomplete, copilot, formatters, etc. I also use formatters and linters, and my autocomplete is "be strict about conventions, look up the actual meanings of words", which makes me touchy about conventions haha.


I type at 100WPM and that's usually fast enough for programming but not always. Sometimes you just know exactly what needs to be written, maybe because you've written similar things before.


I don’t do it simultaneously; when I do know what I want to code, it just takes me less time to write it out, so I get back sooner to thinking about what’s next.


Programming is mostly thinking interspersed with a bit of typing. Nobody thinks as they type. The typing bit is just getting your abstract idea of what the program should do into the computer.


I think I probably suffer from the same thing. On top of that, I added a VS macro that adds missing imports, cleans up unused ones, and re-orders them in a single keystroke. I've never been lazier.


Anyone have experience with/opinions on Apache Cordova? [1]

It seems like it would solve most of the PWA issues. Although I vaguely recall reading that Apple is not too fond of apps that are basically just wrapped web views.

[1] https://cordova.apache.org/


I'm stoked to be learning about the real Eliza, after first being exposed to it via the Zachtronics game [0]. The game has a fascinating dystopian/sci-fi take where a company provides AI therapy through human "proxies" who simply vocalize the AI's responses.

[0] https://www.zachtronics.com/eliza/

