Hacker News

I often watch a pair of YouTubers who cover China, and they used to reside there. The one thing they lamented often was a lack of maintenance, from historic buildings to lightbulbs in elevators. They tried to posit it as something cultural, and I don't know if that is true or not for China.

I do feel like the culture of tech has become one where maintenance isn't part of what we do. When was the last time you saw someone get promoted for "cutting costs" or "making the code less complicated"? When was the last time you sat down and read someone else's code and were impressed at how CLEAR and SIMPLE it was?

20 years ago we were all running on bare hardware. Now it's a hypervisor, with a VM, that has a Docker container, that imports everything and a cat. We get tools like Artifactory to make up for the fact that all of that might disappear. Top it off with zero ownership (infrastructure as a cost center and not a depreciable asset).

It feels like a fuck shit stack of wet turds and we're trying to play Jenga with them, shuffling them back to the top and trying to get our next gig before the whole thing falls over and hopefully lands in someone else's lap.

To make a point: do we need Docker? No, but writing installable software is hard (depending on language and what you're doing). Docker doesn't fix the problem; it just adds another layer in.

The original service is the database. Yet we don't normally expose it because its security model remains rooted in the 19xx's. So we have layers of shims over it: ORM, JSON... pick your stack and "scalable" abstraction.

The author mentions LLMs. The more I play with them, the more I feel like this is going to be cool after engineers beat it into submission over the course of the next few years. So much is opaque, and there's so little clarity on what is going on under the hood. It's no wonder people are having trouble coming to grips with it; it's a mess! If it were a battery breakthrough it would take 10 years to turn into a mass-producible product, but because it's software we throw money at it and let it dangle out on the web!!! (And I don't fear AGI.)

FTA: >> I don’t have a prescriptive solution for this. I wrote this text to start a discussion around a feeling I previously struggled with but didn’t know how to label.

I do. We need to do better. We need to stop piling on more and strip back to the clear and well understood. We need to value readable code over DRY, or design patterns, or whatever the next FOTM is. We need to laud people who make systems simpler, who reduce costs, who reshape and extend existing things rather than build new ones and pile more on top because it's "easy" or looks good on the resume.

I am part of the problem too, and I need to stop. I need to stop reaching for that next shiny bit of tech, next framework, next library. Because acting like a schizophrenic ADHD child finding candy hasn't really served anyone.




Before looking at technical problems, we need to look at organizational problems. Is the tech there to solve a business problem, or is it there to solicit the next VC round, an invite to a cloud provider conference, an expensive dinner paid for by some vendor, or to perpetuate a career based on flawed technology?

A lot of the problems, associated tooling and "best practices" you mention arose as a result of the VC bubble from the last decade, where the primary objective was not to solve the business problem but to manufacture complexity so the next VC round and large headcount could be justified.

Sadly, this is not limited to VC - either collusion or technical incompetence is rampant at the executive level, which means crap vendors can nevertheless get their "solutions" into companies and lock them in. Do this long enough, and entire careers start relying on these "solutions" so you get a guaranteed supply of people who can collude with you to bleed their company dry.

See also:

* cloud computing

* blockchain

* microservices

* resume-driven-development


Would you mind listing those YouTubers who cover China? Sounds interesting.

Also I'd like to kindly ask you not to use "ADHD child" in that manner because I think it stigmatizes it although I do understand the point you were trying to make there.


Here they are covering the topic directly: https://www.youtube.com/watch?v=o9eXi3RL8q4

As for the ADHD thing, I get it; it's also a pretty accurate description of how I feel some days working in this industry. It's hard not to be a technological magpie collecting shiny rocks!


For context: both of these YouTubers were eventually denied continued stay in China and turned their channel into bashing China full time for a living.

I really valued their insight and perspective (rural China by motorcycle, for example, is not a common perspective in the west), but eventually had to unsubscribe from their toxic bitterness.


Yeah, it's a shame they've been audience-captured. At the beginning they leaned a bit to the rosy side, clearly glossing over visible negatives. Somewhere around when they left, and were able to speak about good and bad but hadn't yet committed to a narrative, was probably the point of peak value. Now they are almost comically anti-China. Ah well.


Plenty of toxic bitterness for sure, but there is signal in the noise, and unless you personally know people in China who are free-thinking and also consider you close enough that they would tell you things that could incriminate them, this may be the only source of that signal...


Cutting costs by adopting a better architecture was a big thing at one of my previous jobs. People were praised and promoted for cutting thousands of dollars off monthly AWS bills.


> People were praised and promoted for cutting thousands of dollars off monthly AWS bills.

You're in a place that is rarer than you think; a lot of us have experiences more like this:

https://news.ycombinator.com/item?id=38069710


> Cutting costs by adopting a better architecture was a big thing at one of my previous jobs. People were praised and promoted for cutting thousands of dollars off monthly AWS bills.

I think this is a slippery slope. Praising is fine, but AWS bills are essentially a non-functional quality attribute of the software. The job is to never let them become a problem in the first place.

What about teams that built their software on time and with quality? Are they essentially losing one of their opportunities to get promoted because they built better software in the first place?


Same at my current job


Why fix anything if it's easier to add some other layer and kick the problems down the line?

It's the economically rational thing to do... you wouldn't want your kids to have a boring future.

Gotta leave some problems for them; heck, cause some, because we didn't know any better anyway.


>To make a point: do we need docker? No, but writing installable software is hard

Writing installable software doesn't help with isolation, self-healing, and scalability. In a microservice world you kind of need Docker/Kubernetes.


Hum... Docker isn't a great solution for isolation (it can do some of it, but it promises way more than it can deliver).

Your system's scalability is completely defined by how you architect it, and Docker changes nothing.

And WTF is self healing? Docker can't fix any ops problem that it didn't create.


>And WTF is self healing? Docker can't fix any ops problem that it didn't create.

The gp you replied to mentioned both "Docker/Kubernetes"

It's the Kubernetes management layer of Docker-style containers (in pods) that helps with monitoring and restarts: https://www.google.com/search?q=kubernetes+self-healing
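The "self-healing" behavior described above boils down to a control loop: probe the workload, restart it when the probe fails, and give up after too many failures. A minimal sketch of that idea in Python follows; the names (`Supervisor`, `probe`, `max_restarts`) are illustrative, not a real Kubernetes API.

```python
# Toy version of a Kubernetes-style liveness loop: probe a workload,
# restart it on failure, back off into an error after repeated crashes.

class Supervisor:
    def __init__(self, start, probe, max_restarts=3):
        self.start = start            # callable that (re)starts the workload
        self.probe = probe            # liveness check: True means healthy
        self.max_restarts = max_restarts
        self.restarts = 0

    def reconcile(self):
        """One control-loop tick: restart the workload if the probe fails."""
        if not self.probe():
            if self.restarts >= self.max_restarts:
                raise RuntimeError("CrashLoopBackOff: giving up")
            self.start()
            self.restarts += 1
            return "restarted"
        return "healthy"


# Toy workload: the first boot fails, the second one sticks.
state = {"alive": False, "boots": 0}

def start():
    state["boots"] += 1
    state["alive"] = state["boots"] >= 2

def probe():
    return state["alive"]

sup = Supervisor(start, probe)
print(sup.reconcile())  # workload down -> "restarted"
print(sup.reconcile())  # still down after first boot -> "restarted"
print(sup.reconcile())  # now alive -> "healthy"
```

The point of the comparison: none of this is in Docker itself. Docker gives you the container; the restart/monitoring loop is the orchestration layer's job.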


Agreed. Could you please share your thoughts on the best current solution for isolation? Thanks in advance!


VMs are designed to provide isolation. Docker depends on what Linux provides, and Linux puts less and less importance on it as time passes.


Um. VMs were doing the thing you describe before there was Docker.


> If it were a battery break through it would take 10 years to turn it into a mass producible product

I think it really did take approximately that much time for LLMs as well. The first transformers paper came in 2017, almost six years ago.

Text-to-code came 2-3 years before ChatGPT: https://www.microsoft.com/en-us/research/video/intellicode-c...

So even for software with the same underlying tech, i.e. transformers, it took almost five years to get to a breakthrough that could be scaled.

I really like paulg's observation that "knowledge grows fractally". If you take ChatGPT's scope to be all human jobs, it would still seem we have only scratched the surface. The same goes for throwing money at it: we are only throwing a tiny fraction of the money we could.

> We need to value readable code over DRY, or Design patterns or what ever the next FOTM is. We need to laud people who make systems simpler, who reduces costs, who reshape and extend existing things rather than build new ones and pile more on top because it's "easy" or looks good on the resume.

Not just in tech: it has always been hard to quantify and reward people based on non-functional attributes of a system's output.

> I am part of the problem too, and I need to stop. I need to stop reaching for that next shiny bit of tech, next framework, next library. Because acting like a schizophrenic ADHD child finding candy hasn't really served anyone.

Referencing paulg again, I think this reach for the next shiny bit of tech should still happen, but fractally: in the context of everything you do, reaching for new tech should be a small part, but still an essential component of growth.


>I do. We need to do better. We need to stop piling on more and strip back to clear and well understood. We need to value readable code over DRY, or Design patterns or what ever the next FOTM

This sentiment is common in people who lack understanding of why each of the elements currently in the stack was put there. I'm not picking on the parent specifically, but having worked in "big tech" for quite some time, I keep meeting younger people starting their first gig at a "big tech stack" company (at a senior position, thanks to their entire 5 years of experience) whose first instinct is exactly the above: "Why are you using all this crap? Just rip it all out and start from scratch!"

No

I was like that a couple of times in my career, and being more convincing than most, I was allowed to "rip it all out" on more than one occasion. A year later my system was better than the original, but by the time I finished it was already out of date with "modern practices", and during that year I had rediscovered every single seemingly stupid decision I saw made in the original system.

Now, when I see something that doesn't make sense, a mind-boggling tech stack doing almost nothing (and yet working well), I ask myself what it is I don't know about it. What documentation is lacking or out of date? (All of it, usually.) I then dive into the source code and learn why things were done the way they are. Knowing how the entire stack works, from bare metal to k8s and "serverless", helps too.

If I were an educator, I'd design an IT curriculum teaching the basics of how computers work, starting with something like BASIC on 8-bit machines, then assembly.

Then I'd go through features of modern hardware: multi-core, caches, etc.

Then networking basics, with a focus on low-level protocols (TCP/IP, Ethernet, VLANs, VPNs), plus some layer-7 material like HTTP(S) and SMTP, introduced at the same time as OS-level knowledge based on Linux on the console and Windows Server (as well as bash/PowerShell/Python scripting). I'd have students code simple servers and clients, and stand up their own SMTP gateway and web server on bare metal. Data science could run from this point on too.

Only after they've been using and learning on bare metal for at least a year would I start with hypervisors, teaching things like distributed switching and more advanced features like FT and HA using virtual machines. Also NAS and SANs at this stage.

Then, and only then, come containers and k8s, at the same time as serverless and cloud. Then topics like resilience, meeting SLAs and SLOs (risk calculation), business continuity, DR in depth, etc.

As mentioned before, a few things should be taught alongside this throughout: probably Java programming, data science with Python, maybe ML basics.

I don't know what is taught in IT courses these days, but I strongly suspect not what I listed above. /rant over
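The "code simple servers and clients" exercise above can start with nothing but the standard library, so students see raw sockets before any framework. A minimal sketch (the function names and the echo protocol are just illustrative):

```python
# A bare-bones TCP echo server and client using only the stdlib:
# the kind of first networking exercise described in the curriculum.
import socket
import threading

def echo_server(host="127.0.0.1", port=0):
    """Serve one connection, echoing bytes back; returns the bound port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))              # port 0 = let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)          # echo it straight back
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def echo_client(port, message):
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

port = echo_server()
print(echo_client(port, b"hello, bare metal"))  # b'hello, bare metal'
```

From there it's a small step to the HTTP server and SMTP gateway exercises: same sockets, just a text protocol layered on top.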


The key part of curriculum design is prioritization because you generally have a fixed amount of hours for a degree - every important thing you put in requires taking out something else, and learning a topic in more detail requires learning fewer topics. And I'm not sure if most potential students would value spending a year on bare metal at the cost of throwing out a year's worth of stuff that is more high level. Like, be specific - count the things you're adding, take a look at the CS/SE courses in the program you had, and choose an equivalent number of courses that you would cut out to make space for your proposals - does it still make sense then?


I would love an easy way to learn all those things you mentioned. For my job I only need to know an API framework, Python, SQL, an ORM, really.



