
A long time ago I was obsessed with Prolog. I found that if you bend it enough you can implement imperative code evaluation (à la JavaScript) while still being valid Prolog:

https://github.com/xonixx/prolog-experiments/blob/main/Prolo...

This was super funny, but obviously absolutely useless.


Why do you say that? Imagine you wrote a language that looked procedural but was actually relational underneath. We could get rid of so much glue, using unification when it makes sense and ignoring it otherwise. That’s a great idea.


Sounds similar to Haskell's "do" notation, where code can look procedural but is backed by FP principles and tools.

(Something something monads are sequentialness)


No AWK?



My take on this is that it's not always the best idea to abstract SQL away. You see, SQL itself is too valuable an abstraction, and also a very "wide" one. Any attempt to hide it behind another abstraction layer will face these problems:

- the need to learn a secondary API which still doesn't cover the whole scope of SQL

- an abstraction which is guaranteed to leak, because any time you need to optimize you'll have to start reasoning in terms of SQL and try to force the ORM to produce the SQL you need.

- performance

- deceptive simplicity: it's super easy to start with simple examples, but it gets increasingly hard as you go. And by the point you realize it doesn't work (well), you've already produced tons of code which the business won't let you simply rewrite

(knowledge based on my own hard experiences)


I’ve taken more and more to thinking of them as a zero-sum tool.

A super fast, easy-to-use force multiplier in the beginning, but eventually you break free of the siren song and run into some negative that eats away at your time until you reach that “if you had just sucked it up and written the damn SQL you’d be done yesterday” stage.


This just seems like a normal part of the growth curve. You cannot simultaneously build an infinitely scalable solution and complete something in a reasonable timeframe with the features that users will pay for. If you get to the point where you have enough users to justify working on efficiency or scaling out your infrastructure, that’s a sign that you are winning. Unsuccessful companies never have to clean up their tech debt. For successful companies, it is a constant balance. You’re lucky to ever be in a position to have to clean up your short-sightedness from previous work. By the time Facebook needed to mature beyond their PHP codebase, they were already wildly successful by every metric and had the resources to tackle such a problem. Early-stage CRUD APIs should absolutely be generated and use the shitty ORM-generated queries. By the time you run into serious performance issues with the ORM-generated queries, you should be successful enough and have enough runway to plan a better future.

The vast majority of companies like this don’t fail because their UI is too slow. It’s because they don’t have “essential” features that other platforms do. If you have good monitoring and metrics, you should be able to find the bottleneck in your ORM and resolve it before any users even notice. And that means you’re hand rolling a few queries instead of the entire data storage layer.


+1

"...premature optimization is the root of all evil."

Sometimes you just wanna get stuff out there; other times you're winning and you wanna give users the best experience. Many people have had to do both. You start with an ORM, eventually your queries get slow and all, and you gradually rip them out. Almost every engineer I know has had to do that at some point. Nonetheless, I am not about to write SQL for a simple barbershop booking app that I am not sure anybody will eventually use.


Yes, this applies to a lot of abstractions over SQL. This one (inspired by Entity Framework/LINQ), however, works _with_ the grain by more or less finding a sweet spot between the SQL and the source language, and most importantly it doesn't try to hide the SQL.

My experience with LINQ over the years has been great; the only time I've needed to go to raw SQL was to supply index hints (you can add that to LINQ but we opted not to) and to do special things with merge statements. But EF allows you to submit raw SQL queries where needed.

The important part is: when you have a good system that actually provides benefits (LINQ is properly typed) and doesn't get in the way or produce weird SQL, then it'll work out.

I've only needed around 10 raw SQL queries where LINQ failed, compared to hundreds or maybe thousands of LINQ queries where it worked perfectly well, and this includes some fairly heavy queries.


Yes, yes and yes. ORMs are marvelous when you don't know SQL well. With experience, you always end up needing to learn more about SQL. In the end, an ORM is as much a hindrance as a help. So instead of spending energy learning the ORM of the day, it's better to invest in longer-lasting technologies like SQL.


I know SQL and I like ORMs. For most simple CRUD, an ORM is fine. I don’t understand how they are “as much a hindrance as a help”; using an ORM only adds functionality - it cannot prevent you from using SQL against the data source in the same manner you would if you weren’t using an ORM.

It’s really just syntactic sugar for the subset of very basic queries that are easily expressed in the ORM. If other parts of your codebase are expecting ORM objects, it’s maybe two lines of code to re-wrap your SQL-fetched PK values back into ORM ducks.
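
For instance, a rough sketch assuming a Django-style ORM and a hypothetical User model (names made up; most ORMs have an equivalent pattern) - run the hand-written SQL to get the PKs, then hydrate them back into ORM objects for the rest of the codebase:

  # Sketch only: assumes a Django-style ORM and a hypothetical User model.
  from django.db import connection
  from myapp.models import User  # hypothetical
  def report_user_ids():
      # Hand-written SQL for the query the ORM can't express efficiently.
      with connection.cursor() as cur:
          cur.execute("SELECT id FROM myapp_user WHERE last_login > now() - interval '30 days'")
          return [row[0] for row in cur.fetchall()]
  # The "re-wrap" step: raw PKs back into normal ORM objects.
  users = User.objects.filter(pk__in=report_user_ids())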


The author of slonik, a great (IMO) tool for composing queries in raw SQL in Node for Postgres, has a good blog post explaining this same general idea: https://gajus.medium.com/stop-using-knex-js-and-earn-30-bf41...

The way I've always put it is "ORMs make the easy stuff a bit easier, and the harder stuff way harder." Just learn SQL, it's not that hard and it's a much better, transferable skill.


This is one of 4 reasons why I'm building pg-nano [1] and honestly the main catalyst. The other 3 reasons are: I still want to call my Postgres functions from TypeScript in a safe manner; I want declarative schemas with generated migrations; and I want the ability to write compile-time plugins that can generate SQL or TypeScript through introspection.

It's not released yet, but give it a look :) (v0.1 is almost done)

[1]: https://github.com/pg-nano/pg-nano


ORMs are usually used for speed of development, until it's time to optimize by writing the SQL yourself.

Some ORMs have definitely had more optimization work put into them, which delays the need to optimize the query yourself, whether indirectly or directly by rewriting it.

An ORM with a bit of SQL might still be less work than using a NoSQL DB and trying to make it relational when it isn't.


I love ORMs for setting up entities and relationships, but I mostly use raw SQL or a query builder for all queries that are not trivial.


Have you used BI tools, such as Looker, Tableau, and the like?

LookerML is their abstracted version - but they always have an expander panel for seeing the SQL.

---

What I would like is to use this in reverse - such that I can feed it the JSON output from my GPT bot Tribute, and use it to craft a SQL schema dynamically in a more structured way, where my table might be a Markdown version of the {Q} query - and it does the SQL to create the table if it doesn't exist, inserts these objects from this JSON into this DB, then those JSON objects from that output into this other DB. Now I am pulling data into the DB that I can then RAG off of as I fill it with Cauldrons of Knowledge I am scraping for my rabbit-hole project thingamajiggers.


It's funny to watch how often new programming languages visually (and conceptually) resemble the language they are written in. You can put it the other way: just look at a syntax sample and try to guess the implementation language.

I can see a problem here, though. Probably it means the author is under the heavy influence of the implementation language, which limits their thinking and creativity to the concepts of that language.


I'm not sure if that is really a problem.

A single individual is likely to miss a lot of edge cases that a larger organisation (like the Rust foundation) has thought of while creating the language.

In that sense, I believe following the conventions created by a large and well-known entity is likely to produce better results.


I was going to say: I bet this is written in Rust, they use https://docs.rs/syn/latest/syn/ to not write their own parser/lexer, and this is how you get Rust-looking languages... I'm surprised.


It's funny, yeah, but it makes sense and is not really a problem.

You would write a compiler in the language that is closest to your perfect one, trying to fix the original language by making a "clone" of it and adding/removing features to your liking.

That is what I want to change here by making a very simple language so that, if someone would like, they can just fork it and add/remove the needed features instead of building a language from scratch.

If this idea doesn't work out, then at least a very simple compiler would be a good learning resource, so it's a win-win anyway.


Right, this is not the problem per se. To me this is a rather philosophical question, related to the motivation behind the project.

If someone has some particular domain problem for which they need to introduce a new programming language, then it should probably look somewhat different from all existing languages; otherwise those would just be used instead.

Alternatively, the motivation could be: OK, I need ExistingLanguageX (or something alike) but in a domain/environment where it's not present yet, for example WASM.

The motivation question is always of interest to me, since it allows one to judge the long-term prospects of the project.

However, don't take this as a discouraging rant; the learning case is a good motivation/raison d'être too.


  email:string( regex(".*@.*\\..*") )
This is an incorrect regex for email. The correct one is https://pdw.ex-parrot.com/Mail-RFC822-Address.html


It's a pragmatic one. It has no false negatives, and there's rarely a reason to care about false positives. (Especially not where a stricter regex would (be the only mechanism to) catch it, but a fake-but-valid address wouldn't trivially bypass it.)


At the very least, I assume .+ at the start is a better choice.

The RFC says:

     addr-spec   =  local-part "@" domain        ; global address
     atom        =  1*<any CHAR except specials, SPACE and CTLs>
     word        =  atom / quoted-string
so I think the bit in front of the @ has to be non-empty.
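
For illustration, a quick Python check (re.fullmatch here is just a stand-in for however the schema library applies the pattern) shows the difference - the .* version happily accepts an empty local part, the .+ version doesn't:

  import re
  loose  = r".*@.*\..*"   # the pattern from the schema example
  strict = r".+@.+\..+"   # same idea, but every part must be non-empty
  for addr in ["a@b.c", "@example.com", "no-at-sign"]:
      print(addr, bool(re.fullmatch(loose, addr)), bool(re.fullmatch(strict, addr)))
  # a@b.c        True  True
  # @example.com True  False  <- empty local part slips through the loose pattern
  # no-at-sign   False False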


Yeah, I agree - I think I'd have written that myself, but I don't think I'd reject it in review on that basis. I don't agree with the sibling comment that it's 'shitty': it's trivially improved, sure, but it basically doesn't matter - if you want it to be actually correct you need to send an email to it for verification anyway.

Otherwise we can start saying not just non-empty, but also that it needs no other @s, etc., and before you know it we're rivalling the actually correct attempt up-thread. If you just want to catch typos and encourage entering a real address (not just a space or full stop to fill a mandatory field), then any basic check is something: a compromise between legibility and how much it'll catch. We know there's a tonne of (mainly adversarial) bad input it'll miss; that's fine.


Agreed. I should have said so, too.


I agree with the sentiment, but this one is just shitty. Switch out the stars for pluses and it’d be a lot better


At a certain point I think you're allowed to say you don't care if the person with this email address is capable of signing up or using your service:

very."(),:;<>[]".VERY."very@\\ "very".unusual@strange.example.com


Too bad - if that's my email address and you're rejecting it, I'm going to sue you for denying service arbitrarily and win. Not every country is the USA.


> I'm going to sue you for denying service arbitrarily and win

Which country are you claiming this entirely implausible legal scenario would work in?


The denial of service was not intentional so I doubt you would win anything.


You'd have to fix your code, at least.


Is there a country where this has happened?


There is a typo on line 17.


That is an absurdly complex regex


Email address specs turn out to be rather complicated.

I really wish people would use already existing standards rather than coding their own checks. Some languages like Java even include proper checks in their standard library: https://java.net/projects/javamail

Several jobs back I had no end of arguments with some Java devs about not writing their own checks, which kept routinely failing on legitimate addresses.


But why do you even need to validate that email? Send the subscriber a confirmation link - if they get it then it was valid; if not, it's on them to fix the situation in whatever way they see fit.


A single massive and unreadable regex isn't an appropriate way to validate a spec that complex. With its complexity it can't be logically evaluated, only tested, whereas a function that breaks the spec out into steps/parts is going to be a lot more maintainable, readable, and auditable.
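
Something in this direction, say - a deliberately simplified Python sketch, nowhere near full RFC coverage; the point is just that each rule is a named, testable step rather than one opaque regex:

  def check_email(addr):
      """Return a list of problems; an empty list means 'looks plausible'."""
      problems = []
      local, sep, domain = addr.rpartition("@")
      if not sep:
          return ["missing @"]
      if not local:
          problems.append("empty local part")
      elif len(local) > 64:  # RFC 5321 limit on the local part
          problems.append("local part too long")
      if not domain or "." not in domain:
          problems.append("domain should contain a dot")  # pragmatic, not strictly required by the RFC
      if any(c.isspace() for c in addr):
          problems.append("contains whitespace")
      return problems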


It's all devs in general. As a Java dev I have lost count of the number of times I've had to tell a dev not to use a damn regex to validate emails.


It really is. I wonder if anyone has tested it extensively - it's so complex that, just by reading it, I don't think anyone would be able to confirm it matches the RFC correctly.



You can also implement includes in a couple (tens) of lines of AWK: https://maximullaris.com/revamp_define.html#mglwnafh


Nice.

Once I did some experiments with programming in Java using only sun.misc.Unsafe for memory access: https://github.com/xonixx/gc_less. I was able to implement basic data structures this way (array, array list, hash table). I even explicitly set a very small heap and used Epsilon GC to make sure I didn't allocate on the heap.

Just recently I decided to check if it still works in the latest Java (23), and to my surprise it does. Now, apparently, this is going to change.


It won’t change for many many years to come. It becomes deprecated, which just gives you a warning.


Since Java 9, deprecated APIs are actually removed, after a couple of LTS releases.


A couple of LTS releases is 2*n years, though.


The planned releases for removal are listed on the JEP.

If they fail to provide parity, people will do what they did with modules: keep holding onto older Java versions longer than they should, use the escape-hatch command-line options if available, turn back to JNI (the opposite of what this is trying to achieve), or move to another stack if they aren't that dependent on Java, e.g. the ongoing Kafka ecosystem evolution.


You can also program in Python using just list comprehensions: https://maximullaris.com/abusing_python.html


Considering that list comprehensions can be replaced by maps and filters, and those can be replaced by reduces, maybe we can go even further in that craziness and do it using only reduces!
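
For the record, the reduce-only version is entirely doable; a toy sketch:

  from functools import reduce
  xs = range(10)
  # map(f, xs) expressed as a reduce
  squares = reduce(lambda acc, x: acc + [x * x], xs, [])
  # filter(pred, xs) expressed as a reduce
  evens = reduce(lambda acc, x: acc + [x] if x % 2 == 0 else acc, xs, [])
  print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
  print(evens)    # [0, 2, 4, 6, 8]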

