
OK, I will be the useful idiot. I don't fully understand your anecdote. Could you explain what exactly it is that you perceived and that the other engineer failed to see?


Periodic, regular cycles (a sine wave) suggest that overcorrection is happening in the system, preventing it from reaching a stable point. More frequent measurements or tempered corrections may be called for.

Concrete example: the system sees its queue utilization is high, so it throttles incoming requests, but for too long. The queue looks super healthy on the next check, so the throttle is removed, but the next check is again too far away, and by then utilization is too high again.
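To make the overcorrection concrete, here's a minimal toy simulation (all names and constants are invented for illustration, not taken from any real system): an all-or-nothing throttle checked at coarse intervals oscillates around the target forever, while a tempered, proportional correction settles.

  TARGET = 0.5  # desired queue utilization

  def step(util, throttle):
      # toy plant: load pushes utilization up, throttling pulls it down
      return max(0.0, min(1.0, util + 0.2 - 0.4 * throttle))

  def bang_bang(util):
      # overcorrection: throttle is either fully on or fully off
      return 1.0 if util > TARGET else 0.0

  def proportional(util):
      # tempered correction around the (assumed known) steady state of 0.5
      return max(0.0, min(1.0, 0.5 + (util - TARGET)))

  for name, policy in [("bang-bang", bang_bang), ("proportional", proportional)]:
      util, trace = 0.9, []
      for _ in range(12):
          util = step(util, policy(util))
          trace.append(round(util, 2))
      print(f"{name:>12}: {trace}")

The bang-bang trace ping-pongs between 0.7 and 0.5 indefinitely; the proportional one decays smoothly toward 0.5.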


For someone new to time series analysis, how did you choose these particular algorithms? Are they standard in the field, or more of a personal selection?


Most of the algorithms in augurs were chosen to solve problems we've had at Grafana, which tend to call for solutions that don't require tweaking too many parameters and that handle higher-frequency series than many other time series algorithms are designed for. For example, the DBSCAN clustering algorithm works without having to choose the number of clusters, and MSTL/Prophet work with multiple seasonalities and sub-daily data.

The other criterion is that they needed to be fast and cheap, which ruled out many of the deep learning/neural net based models, although I'd still like to try some foundation models using Burn or another Rust deep learning framework!
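This isn't the augurs API, but a quick scikit-learn sketch of the DBSCAN point above: you supply a neighborhood radius (eps) and a minimum neighborhood size, never a cluster count.

  import numpy as np
  from sklearn.cluster import DBSCAN

  rng = np.random.default_rng(0)
  # two dense blobs plus a few outliers; "2" appears nowhere in the call
  points = np.vstack([
      rng.normal(0.0, 0.1, size=(50, 2)),
      rng.normal(3.0, 0.1, size=(50, 2)),
      rng.uniform(-5, 5, size=(5, 2)),  # noise
  ])

  labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
  print(sorted(set(labels)))  # e.g. [-1, 0, 1]: noise plus two discovered clusters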


Did you consider matrix profile as well?


Entirely depends on the use case. If you want to do prediction, decomposition, or classification, you have many different choices available.


What is it about Zed that you find superior to VS Code?


I downloaded Zed when VS Code took way too long to load the monorepo at work. Downloading and installing Zed and then opening the monorepo took almost as long as VS Code took just to load it. I think that was a fluke, as VS Code didn't normally take that long, but it happened often enough to be annoying.

I also find Zed to be snappier than VS Code. It's hard to quantify but it just feels better to use Zed.

For reference, I mainly work on a Node/TypeScript monorepo that is made up of a bunch of serverless services and is deployed with SST v2.


I am starting to understand why these luminaries warn us about a future that is dominated by a solitary superintelligence, self-obsessed and uninterested in humanity.


Self-projection?


POCSAG is ideal for broadcasting low-sensitivity push notifications, like "report to your local commanding officer by 15:00 today". These messages don't need encryption, just a reliable way to reach militants without revealing their location.


Armin advocates for 'uv' to dominate the space, but acknowledges it could be rug-pulled due to its VC backing. His solution to this potential issue is that it's "very forkable." But doesn't forking inherently lead to further fragmentation, the very problem he wants to solve?

Any tool hoping to dominate the Python packaging landscape must be community-driven and community-controlled, IMO.


Forking doesn't inherently lead to further fragmentation: the level of fragmentation after a fork can still be much lower than it was before consolidating on the rug-pulled tool.

(also, how many more decades does this imaginary community need to create a great dominant tool?)


It may be easily forkable due to the licence choice (MIT or Apache), but the choice of Rust limits the number of people who can actually contribute.


Isn’t npm VC-backed?


It was until it got acquired by Microsoft/GitHub.


This brings to mind Chernoff faces[1], a type of visualization where facial features are mapped to data points.

  [1]: https://en.wikipedia.org/wiki/Chernoff_face
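For a feel of the idea, here's a hypothetical matplotlib sketch (the feature mappings are invented for illustration): three values from a data record drive face width, eye size, and mouth curvature.

  import matplotlib.pyplot as plt
  from matplotlib.patches import Arc, Circle, Ellipse

  def chernoff_face(ax, face_width, eye_size, smile):
      # each argument is a value in [0, 1] mapped to one facial feature
      ax.add_patch(Ellipse((0.5, 0.5), 0.3 + 0.4 * face_width, 0.8, fill=False))
      for x in (0.4, 0.6):
          ax.add_patch(Circle((x, 0.6), 0.02 + 0.05 * eye_size, fill=False))
      if smile >= 0.5:  # smile: bottom arc of a circle; frown: top arc
          ax.add_patch(Arc((0.5, 0.35), 0.2, 0.15, theta1=200, theta2=340))
      else:
          ax.add_patch(Arc((0.5, 0.3), 0.2, 0.15, theta1=20, theta2=160))
      ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_aspect("equal"); ax.axis("off")

  records = [(0.2, 0.9, 0.8), (0.9, 0.3, 0.1)]  # two made-up data rows
  fig, axes = plt.subplots(1, len(records))
  for ax, record in zip(axes, records):
      chernoff_face(ax, *record)
  plt.show()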


When using Jpegli as a drop-in replacement for libjpeg-turbo (i.e., with the same input bitmap and quality setting), will the output produced by Jpegli be smaller, more beautiful, or both? Are the space savings the result of the Jpegli encoder being able to generate comparable or better-looking images at lower quality settings? I'd like to understand whether capitalizing on the space efficiency requires any modification to the caller code.


The output will be smaller after replacing libjpeg-turbo or mozjpeg with jpegli. You don't need to make any code changes.


I think the main benefit is a better decorrelation transform, so the compression is higher at the same quality parameter. So you could choose between better accuracy at the same quality parameter, or lowering the quality parameter and still getting better fidelity than you would have otherwise. To get both most of the time, probably just use JPEG XL.
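A hedged way to check the size claim yourself (this assumes Pillow is installed and a cjpegli binary with a -q quality flag is on PATH, as in recent libjxl builds; the file names are placeholders): encode the same image at the same quality setting with both encoders and compare output sizes.

  import os
  import subprocess
  from PIL import Image

  src, quality = "input.png", 85  # hypothetical input file

  # baseline: Pillow's bundled libjpeg encoder
  Image.open(src).convert("RGB").save("baseline.jpg", quality=quality)

  # jpegli at the same quality setting
  subprocess.run(["cjpegli", src, "jpegli.jpg", "-q", str(quality)], check=True)

  for path in ("baseline.jpg", "jpegli.jpg"):
      print(path, os.path.getsize(path), "bytes")

Per the comments above, the jpegli output should come out smaller at comparable or better fidelity.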


Wonderful piece. Dennett knows how to write. And he captures the pleasure and privilege of Hacker News with this felicitous phrase:

> Distributed understanding is a real phenomenon, but you have to get yourself into a community of communicators that can effectively summon the relevant expertise.


I agree. We have an eclectic community of thoughtful laypeople and experts. That's why I remain here. It's lazy to point out warts in any community (even peer-reviewed ones!), let the perfect be the enemy of the good, and dismiss said community on that basis. But I think the SNR here is wonderful at present, and I appreciate you all.


Are you sure that's us? :)


Yes, you have a combination of people who learn and people who know here. That is what he described: an environment where you are allowed to be wrong and where people will correct you when you are. You don't get banned from HN for being wrong, so you are allowed to be wrong here, unlike on many other forums, like most of Reddit.


What is being described is a community where people know what they're talking about, which is debatable. You might be corrected on here, but I've seen more cases of people being loudly wrong while having enough general knowledge of a subject to sound correct, and being lucky enough that no one with specific knowledge stumbled onto their posts to correct them.

Frankly, I don't think Hacker News is all that different from Reddit in terms of community. People are just better at hiding it. Many are regurgitating rhetoric they've heard in other posts without having any experience with the subject. Even in the realm of programming, it's not hard to see how little experience people have with the things they demonize or evangelize.


Maybe he's talking about the string theorist community


Yeah I'm having serious doubts


Could someone explain how redirecting from a look-alike domain (chess.com.foo.bar) somehow got past some same-origin check?


Clearly chess.com was using something like "starts with" to decide whether to re-upload. Basically: don't re-upload if the URL starts with https://chess.com, but filter it out if it starts with https://chess.com/registration-invite.

Same-origin policies are typically relaxed for things like images by default [0]. So they came up with a trampoline: they created chess.com.theirDomain.tld to get past the re-upload filter, and it returned a redirect, which the browser followed.

[0] https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...
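A hypothetical reconstruction of the flawed check (the real server code isn't public, and the attacker domain is invented): a bare prefix comparison happily accepts a look-alike hostname.

  allowed_prefix = "https://chess.com"

  def is_own_image(url: str) -> bool:
      return url.startswith(allowed_prefix)

  print(is_own_image("https://chess.com/images/a.png"))        # True, as intended
  print(is_own_image("https://chess.com.attacker.tld/a.png"))  # True - the bypass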


OP here - like the others have said, it wasn't a proper same-origin check. We'll never know for sure how it was handled because it was all done server-side, but I'm guessing it was something like an `if ... in` check on the FQDN, hence why I was able to get away with pointing it at my own domain.


It wasn't a proper same-origin check - the server code checked whether the image was hosted elsewhere, and if so, it would download and self-host it. The check for whether it was on `chess.com` probably just tested whether the domain included that string, because laziness.


Not a CORS origin check (that doesn't apply to links), but a hand-made origin check by the chess.com developers.


It sounds like server-side code allow-lists the source, so it was probably just doing a string-prefix check. The code that creates the friend relation doesn't run in the browser.


If it's happening server-side, they might have had a bug where they were doing a naive substring comparison instead of actual domain evaluation.
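A sketch of the "actual domain evaluation" being suggested (attacker domain invented for illustration): parse the URL and compare the hostname exactly instead of matching substrings.

  from urllib.parse import urlparse

  def is_own_image(url: str) -> bool:
      host = urlparse(url).hostname or ""
      return host == "chess.com" or host.endswith(".chess.com")

  print(is_own_image("https://chess.com/images/a.png"))        # True
  print(is_own_image("https://chess.com.attacker.tld/a.png"))  # False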

