SyrupThinker's comments | Hacker News

Your CV and cover letter probably do a lot of the work, at least in DACH.

The stories I hear from friends in HR, at companies of varying sizes, are the stuff of fiction. Apparently most people apply with utter trash; it's no surprise they get filtered out if they can't even be bothered to present themselves properly.

At least at smaller companies, if you submit something that actually looks like you tried, you immediately stand out (after HR has waded through all the bad ones).

We are also not talking about typos or gaps in the CV here, but about basics like: including everything expected in a CV, writing something even vaguely resembling a formal letter, or even addressing the right company in it (bonus points if it's a direct competitor).


Creating a copyright on one's likeness seems pretty messy in that regard, but there is a somewhat similar idea in German law (and surely elsewhere) that raises similar concerns about using an image or work.

We have a "right to one's own image": you generally can't distribute or publish photos of recognizable people without their consent, unless they are truly just "part of the landscape" (background randos), part of a crowd at a public event, or of legitimate news or artistic interest.

I'd expect a similar threshold to apply to this Danish solution.


"The Dress" was also what came to mind for the claim being obviously wrong. There are people arguing to this day that it is gold, even when confronted with other images revealing the truth.


Some of Microsoft's extensions are licensed such that they may only be used with their own products (i.e. the official VS Code they offer for download, etc.). This already affects Cursor for example:

https://github.com/microsoft/vscode-dotnettools/issues/1909


That sounds too good to be true, and it seems they are indeed introducing a usage system similar to their competitors' next month?

> Starting August 1, 2025, we’re introducing a new pricing plan for Amazon Q Developer designed to make things simpler and more valuable for developers.

> Pro Tier: Expanded limits $19/mo. per user

> 1,000 agentic requests per month included (starting 8/1/2025)

- https://aws.amazon.com/q/developer/pricing/

Previously, agentic use was apparently "free" but with a set deadline in June, so it seems this was just a testing phase?


50 requests per month? Am I reading that correctly? If so, that's pitiful.


That's on the free plan. It's 1,000 on the $19 subscription, or 3,000 on the $39 one.

If you go over the limit, it's $0.04 per request.
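Under those numbers, a back-of-the-envelope bill (assuming a flat per-request overage; the pricing page may qualify this further) looks like:

```python
def monthly_cost(requests: int, fee: float = 19.00,
                 included: int = 1000, overage: float = 0.04) -> float:
    """Estimate a monthly Pro tier bill from the figures quoted above."""
    extra = max(0, requests - included)
    return fee + extra * overage

print(monthly_cost(800))   # within the included 1,000: 19.0
print(monthly_cost(1500))  # 500 over the limit: 19 + 500 * 0.04 = 39.0
```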


Ah, that reminds me of my own university days.

Early on, I made the mistake of sharing my solutions with people I knew. Unfortunately they kept passing them along, and so on.

Twice I was pulled into a prof's office after the relevant lecture to be questioned about it.

After it became clear that I was the author, and what happened, nothing ever came of it (for me).

Ironically, both times the copiers had supposedly failed to remove the git repo that was part of the handoff, so it was primarily about verifying that I was the original author.

Lesson learned: "invisible" watermarks work, because people are generally lazy. (Also, don't share graded work; just offer to help.)
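A cheap way to check for that failure mode before passing files around; the set of names here is illustrative, not exhaustive:

```python
from pathlib import Path

# Metadata that copiers routinely forget to strip; a .git directory
# carries the full commit history, author names included.
REVEALING = {".git", ".hg", ".svn", ".idea", ".DS_Store"}

def leftover_metadata(root: str) -> list[Path]:
    """Return any revealing metadata entries hiding under root."""
    return [p for p in Path(root).rglob("*") if p.name in REVEALING]
```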


What a boring website...

It fails to differentiate between JavaScript engines (a core language implementation) and runtimes (an engine plus the parts needed for actually writing software: OS access, an event loop, etc.).

It lists engines like Boa, Duktape or Hermes as if they were the same thing as Node, Deno or Bun, yet doesn't even mention SpiderMonkey, V8 or JavaScriptCore, as if realizing they are not actually in the same class.

I guess the snark wouldn't work as well if a chunk of the list gets eliminated by thinking about it.


From my own experience SQLite works just fine as the container for an archive format.

It ends up having some overhead compared to established formats, but the ability to query over the attributes of tens of thousands of files is pretty nice, and definitely faster than the worst case of tar.

My archiver could even keep up with 7z in some cases (for size and access speed).

Implementing it is also not particularly tricky, and SQLite even allows streaming the blobs.

Making readers for such a format seems more accessible to me.
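A minimal sketch of the idea using Python's built-in sqlite3; the schema and column names are mine, not from any particular archiver, and a real format would add chunking, compression, and deduplication on top:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # a real archive would be a file on disk
con.execute("""
    CREATE TABLE files (
        path  TEXT PRIMARY KEY,
        mode  INTEGER,
        mtime INTEGER,
        size  INTEGER,
        data  BLOB
    )
""")

def add_file(path: str, data: bytes, mode: int = 0o644, mtime: int = 0):
    con.execute("INSERT INTO files VALUES (?, ?, ?, ?, ?)",
                (path, mode, mtime, len(data), data))

add_file("docs/a.txt", b"hello")
add_file("docs/b.txt", b"world" * 1000)

# The payoff: attribute queries go through an index instead of
# scanning the whole archive, which is tar's worst case.
big = con.execute(
    "SELECT path FROM files WHERE size > 100 ORDER BY path"
).fetchall()
```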


The SQLite format itself is not very simple, because it is at heart a database file format. By using SQLite you are unknowingly constraining your use case: for example, you can indeed stream BLOBs, but you can't randomly access them, because the SQLite format puts a large BLOB into pages in a linked list, at least when I last checked. And BLOBs are limited in size anyway (4 GB AFAIK), so streaming itself might not be that useful. Using SQLite also means bringing SQLite into your code base, and SQLite is not very small if you are just using it as a container.

> My archiver could even keep up with 7z in some cases (for size and access speed).

7z might feel slow because it enables solid compression by default, which trades decompression speed for compression ratio. I can't imagine 7z having a similar compression ratio with the correct options though; was your input incompressible?


Yes, the limits are important to keep in mind; I should have given that context up front.

For my case it happened to work out, because the format was a CDC-based deduplicating one that compressed batches of chunks, which left a lot of flexibility for working within those limits.
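For context, content-defined chunking (CDC) cuts on content-dependent boundaries, so an insertion only perturbs nearby chunks and the rest deduplicate. A toy gear-hash version, with invented parameters (real schemes like FastCDC are considerably more careful):

```python
import random

# 256-entry table of random 64-bit values (the "gear"); seeded so
# chunk boundaries are reproducible across runs.
_rng = random.Random(42)
GEAR = [_rng.getrandbits(64) for _ in range(256)]
MASK = (1 << 13) - 1  # boundary when low 13 bits are zero: ~8 KiB chunks

def cdc_chunks(data: bytes, min_size: int = 2048, max_size: int = 65536):
    """Split data at content-defined boundaries using a toy gear hash."""
    out, start, h = [], 0, 0
    for i in range(len(data)):
        h = ((h << 1) + GEAR[data[i]]) & 0xFFFFFFFFFFFFFFFF
        length = i + 1 - start
        if (length >= min_size and (h & MASK) == 0) or length >= max_size:
            out.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out
```

Because the masked low bits of the hash depend only on the last few bytes, chunking resynchronizes quickly after an edit, which is what makes the deduplication work.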

The primary goal here was also making the reader as simple as possible whilst still having decent performance.

I think my workload is very unfair towards (typical) compressing archivers: small incremental additions, a need for random access, and indeed frequently incompressible files, at least when seen in isolation.

I really brought up 7z because it is good at what it does; it is just (ironically) too flexible for what was needed. There is probably some way of getting it to perform much better here.

zpack is probably a better comparison in terms of functionality, but I didn't want to assume familiarity with it. (Also, I can't really keep up with it; my solution is not tweaked to that level, even ignoring the SQLite overhead.)


BLOBs support random access; the handles aren't stateful. https://www.sqlite.org/c3ref/blob_read.html

You're right that their size is limited, though, and it's actually even worse than you thought (1 GB).


My statement wasn't precise enough; you are correct that a random-access API is provided. But it is ultimately backed by the accessPayload function in btree.c, whose comment mentions:

    ** The content being read or written might appear on the main page
    ** or be scattered out on multiple overflow pages.
In other words, the API can read from multiple scattered pages without the caller knowing. That said, I see that this can be considered enough to count as random access, as the underlying file system would use similarly structured indices behind the scenes anyway... (But modern file systems do have consecutively allocated pages for performance.)


One gotcha to be aware of is that SQLite blobs can't exceed 1* GB. Don't use SQLite archives for large monolithic data.

*: A few bytes less, actually; the 1 GB limit is on the total size of a row, including its ID and any other columns you've included.


I think this is good insight, and I would extend this further to “coming from a less strict language to a very strict one”.

As someone who self-taught Rust around 1.0, after half a year of high-school-level Java 6, I’ve never had the problems people (even now) report with concepts like the ownership system. And that despite Rust 1.0 being far more restrictive than modern Rust, and my learning from a supposedly harder-to-understand version of “The Book”.

I think it’s because I, and other early Rust learners I’ve talked to about this, had few preconceived notions of how a programming language should work. Thus the restrictions imposed by Rust were just as “arbitrary” as any other PL’s, and there was no perceived “better” way of accomplishing something.

Generally, the more popular languages like JS or Python let you mold the patterns you want to use enough that they fit. At least in my experience with languages like Rust or Haskell, if you try to do this with concepts that are too foreign, the code gets pretty ugly. This can give the impression that the PL “does not do what you need” and “imposes restrictions”.

I also think that this goes the other way, and might just be a sort of developed taste.


For whatever it's worth, I used to do a lot of teaching, and I have a similar hunch to this:

> I think it’s because I, and other early Rust learners I’ve talked to about this, had few preconceived notions of how a programming language should work. Thus the restrictions imposed by Rust were just as “arbitrary” as any other PL’s, and there was no perceived “better” way of accomplishing something.


Possibly “(ver)änderbar” (“changeable”), to have a distinct keyword. “Mutierbar” would also work fine in German, but it was probably changed for the same reason that fn was changed to fk.


Because they desperately needed some Umläute :)

