A few years ago I started on a dashboard project that was mostly raw SQL.
I then saw the team wanting to convert it to ActiveRecord, which they started. But lots of queries had to use AREL (Rails' low-level SQL AST library), since they weren't really possible, or just too difficult, to do in ActiveRecord.
But AREL is so incredibly unreadable that nearly every AREL query ended up with its plain-SQL equivalent above it, as documentation, so new people could understand what the hell it was doing.
In the end some junior, unhappy with the inconsistent documentation, petitioned that every query, simple or complex, AREL or ActiveRecord, be documented with its SQL above the AREL/AR code.
Then they discovered that documenting with heredocs rather than language comments enabled SQL syntax highlighting in their editors.
After that we had both: heredocs with the cute SQL and some unreadable AREL+AR monstrosity right below it.
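For anyone who hasn't seen the pattern, here's a rough sketch of what those files ended up looking like. The table and column names are invented for illustration; the point is that editors highlight the heredoc body as SQL, while a plain comment stays grey.

```ruby
# The pattern, roughly: the "real" SQL lives in a heredoc (which editors
# syntax-highlight), with the AREL equivalent right below it.
# All table/column names here are made up for illustration.
QUERY_DOC = <<~SQL
  SELECT users.name, COUNT(orders.id) AS order_count
  FROM users
  LEFT OUTER JOIN orders ON orders.user_id = users.id
  GROUP BY users.name
SQL

users  = Arel::Table.new(:users)
orders = Arel::Table.new(:orders)

query = users
  .project(users[:name], orders[:id].count.as("order_count"))
  .join(orders, Arel::Nodes::OuterJoin)
  .on(orders[:user_id].eq(users[:id]))
  .group(users[:name])

# query.to_sql should reproduce, roughly, the SQL in the heredoc above.
```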
I still laugh about this situation when I remember it.
You either start with or without an ORM, depending on your assessment of whether the project is gonna need one.
If you start without one, you still have to partition your code well enough that retrofitting one doesn't cause a huge mess. Basically, keep your raw SQL queries in a centralised place (a file or folder) rather than strewn across controllers/views/services. And you should do exactly the same if you use an ORM. Isolate the fuck out of it and make it "easily removable" or "easily replaceable".
Also keep the "effects" of your ORM or your non-ORM away from those other parts: your controllers, views and services should be totally agnostic to whatever you're using in the data layer. Subtle coupling not only costs you the ability to change the data layer later, it also makes your project less maintainable.
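To make that concrete, here's a minimal sketch of the kind of isolation I mean, in Rails terms. The `UserQueries` module and its method are hypothetical names; the point is that callers get plain hashes back and never learn whether raw SQL, AREL or ActiveRecord sits underneath.

```ruby
# Hypothetical query module: the one place where the data layer lives.
# Controllers/views/services call this and get plain hashes back, so the
# ORM (or lack of one) never leaks out of this file.
module UserQueries
  # Returns an array of hashes, not ActiveRecord model objects.
  def self.top_customers(limit: 10)
    ActiveRecord::Base.connection.exec_query(<<~SQL).to_a
      SELECT users.name, COUNT(orders.id) AS order_count
      FROM users
      LEFT OUTER JOIN orders ON orders.user_id = users.id
      GROUP BY users.name
      ORDER BY order_count DESC
      LIMIT #{Integer(limit)}
    SQL
  end
end
```

If the team later converts this to ActiveRecord, or rips the ORM out entirely, only this module changes; the callers keep receiving the same hashes.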
This is easier said than done. In dynamic languages, or with structural typing like TypeScript's, it's easy: it's all objects anyway, so ORM or no ORM it's the same. In stricter languages like Java it can lead to lots of intermediate data structures, which are verbose and cause problems of their own. Or take the middle ground: use primitives (lists and maps) rather than classes and objects, although ORMs like Hibernate will make that difficult for you, since they're not too flexible about how they're used and their types tend to "creep" all over your project.
-
Most unmaintainable projects don't become unmaintainable because people "forgot to prepare". They become unmaintainable because people assumed everything was permanent, so there was no penalty for using everything everywhere. So there are "traces" of the ORM in the controllers and views, the serialisation library is called in models and services as a "quick hack", the authorisation library is called from everywhere because why not. You quickly lose the ability to easily reason about the code.
The same applies to other areas. I could write a treatise on how game developers love sticking keyboard-reading code absolutely everywhere in the codebase.
> Your good developers are often the ones who like to tinker with frameworks, patterns and complexity. Note: good developers don't force this down people's throats, but they're always thinking about what they can apply in the future. That's not to say they can't be perfectly fine working on boring code. But they often get bored with it. They can be 5x as productive as your average developer when working on the boring code, but you're just ticking down a clock in a lot of cases.
In my experience that depends.
But the tinkering kind is often satisfied when they're able to tinker on their own code, even (or especially!) if they're allowed to do it during working hours. Unfortunately, allowing engineers to literally hone their craft on the clock is becoming rarer and rarer.
But I agree, of course, that a developer who refuses to admit the failure of their experiments and wants to force those experiments on others is a problem.
On the other hand, there's more to this job than coding, and a lot of people interested in "learning" will leave as soon as they find out there's nothing more about the problem domain to learn.
Nowhere does the grandparent's post say it was a "walled garden", or even that it was closed source. The fact that only one person was needed doesn't mean only one person was available. OP even said in a reply that he worked for a company. The rationalisation automatically assumes that the grandparent is either incompetent or lying by omission, which is very uncharitable.
Even if all those problems were real, and the system was genuinely analysed as risky, the proper thing to do is to bring in one or two more engineers, perform audits, and ask for the full source if it's not available. Ask for documentation. Heck, OP said it's not minified: reverse engineer it, if need be. Perhaps that wouldn't even be necessary!
There's absolutely no need to bring in a nine-figure team to replace a working system made by one person, even if that's common practice in our industry. At least not before all the other rational avenues have been pursued, if there really are problems.
What also pisses me off is that what happened on the other side might have been caused by companies like the ones I worked for. For a long time I worked for consultancies, and it was routine for sales to "translate" our feature lists into procurement requirements (sorry, I don't know the term in English) and hand those to companies and governments so we would be the only ones able to answer.
And the worst part is that software engineers go along with this tune because they enjoy overengineering everything from scratch so much.
I didn't say it was a walled garden. But management has its own ways and quirks; I said it was possible that the situation was seen by management as a walled garden.
And I already answered that in my second paragraph.
Taking the nuclear option after merely "seeing [something] as" risky, without exhausting the much cheaper remaining options, is not "somewhat understandable, if not plain reasonable". And it's not "ways and quirks": it's incompetence at best, or corruption at worst.
This kind of situation might be common, but it is not understandable nor reasonable.
For better or worse, there are tons of both reasonable and unreasonable factors as to why a large company would replace a part-time developer's side project with something that costs nine figures.
You don't know those reasons, the person you replied to doesn't know those reasons, and in fact the OP probably doesn't even know those reasons (they "used to turn up to that customer annually for maintenance").
Not understanding that this could be fixed simply and cheaply by training a second person is gross incompetence. Those single-cell morons should've been fired instead.
> Apple disagrees. They don't run promotions of that sort, or promotions, in fact. The price is the price, it's core to the brand.
That's not entirely true. I got my M1 in a promotion, with a discount. I also got my Watch "bundled" with the band (the packages were literally bundled together), also with a nice discount. That was at a flagship Apple Store.
Also "the price" is not "core to the brand" in Brazil. It changes very frequently and is not a direct conversion of the American price.
I'm old enough to remember the first wave of outrage when Apple dropped the serial port (and floppy disk!) and started using this weird USB thing that only their devices supported.
What goes around comes around. Don't worry, in a few years you literally won't remember that you thought of USB-C as an "Apple thing". Or at least you won't mention it.
Funnily enough, Apple's Brazilian PR team completely fumbled their response to the Ministry of Justice:
> Existem bilhões de adaptadores de energia USB-A já em uso em todo o mundo que nossos clientes podem usar para carregar e conectar seus dispositivos. [1]
> Translation: There are billions of USB-A power adapters already in use around the world that our customers can use to charge and connect their devices.
Those "billions of USB-A adapters" however won't work with the cable provided in newer iPhones, which is lightning and USB-C.
One of the main criticisms I see is that low-Elo people are also shown mainly high-Elo people. So there's much less chance of two low-Elo people matching with each other; the algorithm works against it.
Since you have a limited number of "likes" per day, you have to actively "say no" to people who are attractive to you.
Well, if the government wants to help people meet offline, it's gonna cost a helluva lot more than a website... to the point that a website doesn't sound like a waste anymore. I don't see society going back to offline dating by itself.