
And now, in 2021, Google has inflicted their "clarity" on the rest of the world. I miss jobs from the 2000s, the jobs where you were paid to write software for a living.

You know, engineering! Given a task, or set of requirements, develop software on your computer, software which eventually runs on the customer's computer, where it's used to solve the customer's problem.

My most recent full time employment a year ago was at a great company. Healthy culture, some of the most talented coworkers I've ever had the pleasure to work alongside.

Over the year I lasted there, I used for the first time: Docker, Golang, Kubernetes, Terraform, Gitlab, Saltstack, Prometheus (and probably other middleware that my brain has GCd to free space). I was barely able to get anything done. At least, it always felt that way.

Maybe I'm just an idiot, I don't know. I'd accept it if true! What I do know is that I used to be able to build things for people, be compensated well for it, and get satisfaction from a customer liking what I've built. It was simple.

In this brave new world, with containers, pods and this and that and the other thing, where it can take months before one even understands enough primitives to do a "hello world".... how can anything ever get done?? How can anything inventive, creative, or experimental emerge from our industry when the develop/test/improve cycle has gone from minutes to weeks or months?

I don't know what the future looks like, but the present strikes me as unsustainable in the long run.

(edit: Wow! I expected this to be downvoted to oblivion, not my highest rated comment on the site...)

(Forgive this shameless self promotion: if, dear reader-who-is-a-hiring-manager, you have a paid role for a lowly but experienced systems engineer who doesn't know anything about "web" or "apps" or "social" but is quite adroit with C/C++ (and a few others), most "sciencey/mathy" type problems, signal processing, firmware, network protocols, automation/scripting, and more, ... email is welcome!)



In my team, we often deploy internal "services" as cronjobs on an EC2 instance. This hasn't run into any issues in 24 months.

One of these we decided to move to a more serious infrastructure (a set of AWS lambdas). It's failed three times in the 6 months since, and we're moving it back to being a good old cronjob on a server.

Simple is good.
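To be concrete, the entire "deployment" in that model can be one crontab line plus a log file (the script and paths below are made up):

    # run the ETL job at the top of every hour, appending output to a log
    0 * * * *  /opt/etl/run_job.sh >> /var/log/etl_job.log 2>&1

When it misbehaves, you ssh in and read the log.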


Just curious what your cost differences are between a dedicated EC2 instance and lambdas. For our organization an EC2 instance was at best 8-10 times more expensive than lambdas.


The question is also: what's the cost of troubleshooting the lambda service going down 3 times? 10x more expensive but reliable can be a good trade.


There are multiple equivalent lambdas on a single EC2 instance, so it's running jobs 10-12 hours per day.


AWS Lightsail instances are pretty affordable ($10/mo and up)


$3.50 USD and up


likely dwarfed by eng time, both in dollars and opportunity cost


What does the cronjob do? Start some service that listens for inbound connections? Or are you talking more about daemons that do some set of work every interval?


ETL and modelling, then pushing the modelled values to tables


I tend to agree with the green dude :|

It's normal to have a production service replicated across two availability zones.

The green guy is annoying, because reality is annoying, and reliability is not about luck, but is about a properly calibrated and tested process.

Yes, you need to write monitoring; you cannot run on "hope" alone.
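(For a concrete floor on "write monitoring": a single Prometheus alerting rule, sketched below with a made-up job name, already replaces "hope" with a page.)

    groups:
      - name: basic-availability
        rules:
          - alert: ServiceDown
            expr: up{job="my-service"} == 0   # "up" is Prometheus's built-in liveness metric
            for: 5m
            annotations:
              summary: "my-service has been unreachable for 5 minutes"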

Yes, it sucks that a DC can go down. Your particular service being down may not be important, but having a copy of the production data is essential in case of a catastrophe.

Except for the tests that are probably unnecessary, everything else seems to make sense.

The peer bonuses are an issue though.


The green guy should be making all of that a one-click process that starts up a service shell and does it all for you, though. Then, as you write it up, an automatic linting & rules engine will highlight what is missing before you make a final pull request to get the necessary human approvals, ONCE.


Depends what problem you're trying to solve. In my experience the vast majority of business problems do not need that kind of reliability, and if they do, they don't need it deployed in such a byzantine way.


You say that but then the system goes down and the CTO is walking up to your desk asking why it's down and exactly when can they expect it to be up and don't you know we are bleeding money right now?

What you call byzantine an SRE calls necessary complexity to meet the needs of your business.


No it's not. There are of course some systems like that, but for the majority of systems at the majority of companies that's not the case. If an invoice can't be paid for 6 hours the world doesn't collapse. If the CFO can't get the statistics for his quarterly powerpoint at 3am, it's not a major problem.


A lot of it feels like premature optimization. Like I'm laying down a heavy infrastructure to support change but it's already locking me into certain ways of looking at problems.


It isn't premature though. A service is only as robust as its weakest link, so if you let people write crappy services that go down easily and are hard to bring back when they do, then you will get a huge number of outages in major services, since they depend on so many small ones.


Great observation. Perhaps the term should be "premature infrastructure architecture optimization".


All of this is true, but I'd wager there are, at MOST, 10 entities on the planet that are large enough to warrant this level of ... "architectural overkill". The other 99.99% of us don't need it.

I CERTAINLY don't debate that Google, or Amazon, or Facebook, or Netflix, or the phone system, or anything else that touches a noticeable percentage of the human race needs architecture like this to provide "5 9s".

But, just like when "big data" became a buzzword, and many people thought their problem needed "big data" approaches to solve, the thought that all but a small minority of entities need this is Simply Wrong.

I am reminded of a client doing something with genomics about 9 years ago. They had some over-complicated, "new tool"-infested approach to solve their "big data" problem, but the run times were taking too long. I was brought in as a consultant to improve it. After I was done, a data processing run that used to take hours (causing employees to run them overnight) took minutes, or seconds. What did I do? I got rid of all the complexity. I replaced their expensive cluster with one studly provisioned machine. I replaced their collection of networked Java microservices with one non-networked, multithreaded C program. I replaced their XML-based format for data at rest with something I whipped up, tuned to what they actually needed.

Once their "we need big data!" >10TB data set could fit in a single machine's memory, the rest was easy. What used to "require" a cluster of machines and overnight processing could be done interactively, and quickly enough for the scientists to get into a much more productive "flow", doing dozens of runs per day.
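To illustrate the flavor of that format swap (this is my sketch, with invented field names, not their actual schema): fixed-width binary records come back with a single fread or mmap, where XML costs a parse per field.

    /* Sketch only: invented fields, not the client's real schema. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint32_t sample_id;
        uint32_t position;   /* e.g. a genomic coordinate */
        float    score;
    } Record;                /* 12 bytes per record, vs. far more as XML text */

    int main(void) {
        Record out = { 42, 1337, 0.97f };

        FILE *f = fopen("data.bin", "wb");
        if (!f) return 1;
        fwrite(&out, sizeof out, 1, f);   /* write: one call, no serialization step */
        fclose(f);

        f = fopen("data.bin", "rb");
        if (!f) return 1;
        Record in;
        fread(&in, sizeof in, 1, f);      /* read: no parsing at all */
        fclose(f);

        printf("sample=%u pos=%u score=%.2f\n",
               (unsigned)in.sample_id, (unsigned)in.position, in.score);
        return 0;
    }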

tl;dr: unless you're google (or google scale) you don't need all this crap. :)


I think the challenge is you'd expect a company like Google to have more of the setup be automated. If replication and monitoring are such universally good ideas, then why don't they come out of the box?


The current landscape (optimizing for hyperscale, at the cost of complexity) seems like a natural extension of relatively few giant corporations funding the majority of programmers. To such an organization, efficiency & time to market are more important than simplicity.


But at least in the FAANG example, time to market is much slower because of said complexity.


I believe it's now called MANGA.


not MAGMA?


> software which eventually runs on the customer's computer

I gather then that you're not a fan of SaaS. True, one can cynically explain the rise of SaaS as rent seeking. But there's undeniably value in selling whatever functionality your software provides without burdening the customer with having to run it on their own computer(s). And when we do that, it's our responsibility to make the service reliable, which is what a lot of these tools are trying to do.


> I gather then that you're not a fan of SaaS.

I'm neutral, I think? "I don't quite see the point of it" would be more accurate. I don't think I've used any SaaS in my personal life (other than streaming services, which I'd prefer as local apps anyway; I still do, for music, but not video).

I'm sure it's a matter of opinion, not something with an objective answer, but "burden of running software on their own computers" genuinely confused me as I read your comment; I thought "burden? what burden?".

As a user:

If software is designed properly (and most isn't...) you download it once, and it runs. Is the burden the time it takes to do the download? Compared to the noticeable burden of using a webapp, with problems like crappy and frustrating responsiveness, an inability to work without an internet connection, and frequent inability to handle tasks of real complexity, I'd choose a local program any day.

As an employee:

Heck yes SaaS! $/month >>> $$$/customer :D Of course it's rent seeking, and I take (and give) no shame in that.


Perhaps we need more specialization, but I remember the time before these kinds of tools, and I hated it.

I'm a lazy person and I absolutely love tools. Tools like Docker helped me never have to solve other people's complex environment problems again. I love metric reporting tools like Prometheus because it helps me get in front of problems before they become weekend emergencies. I use a paid Git GUI so I can fix complex Git problems without ever making a mistake.


I'm also a lazy person! Which is why tools like this are a PITA to me.

The one exception is Docker. It's not a regular part of my workflow, because of how it makes things harder to get started (making a working Dockerfile takes a bit of time), harder to debug, and slower to build (I just changed one line! Now I have to rebuild the whole image to see if it fixed the problem... &c).

However, for deployment of the final product? I agree Docker's GREAT. But, consider, in that respect it offers nothing I didn't have at the start of my career 20 years ago. Static linking for interpreted languages. :)


Remember that Docker caches intermediate layers (each Dockerfile line produces one), so when you are developing, avoid editing existing lines; append new ones, then combine them back together when you get it working.
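A sketch of that ordering with a hypothetical Node app (any stack arranges the same way):

    FROM node:18-slim
    WORKDIR /app
    # Dependency manifests change rarely: install in an early, cached layer.
    COPY package.json package-lock.json ./
    RUN npm ci
    # Source changes constantly: copy it last, so edits invalidate only this layer.
    COPY . .
    CMD ["node", "server.js"]

Each source edit then rebuilds only the final steps instead of re-running the install.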

Deployment of Docker is gross, but I think that's because that space is, IMO, very immature.

From my experience, when something has a poor interface (Backbone.js, AngularJS, UMD, etc.) I avoid learning it because I know something is going to replace it. Kubernetes is currently squarely in that boat as far as I am concerned.


Same. I do not have any nostalgia for when you had to ssh into machines and run scripts. Please no.


Which paid Git GUI, friend?


SmartGit


Reminder that the jobs from the 2000s where you were paid to write software for a living also included J2EE and CORBA projects.


True, true. The mention of J2EE did just make me shudder a bit. ;)

But at least, then, I could write code, run it, make changes, and run it again! And see results! Quickly! My main objection to this "future world" is the vast number of layers of abstraction that you need to fight through just to get your first result.

As you can surely guess from my biases and opinions, my happiest engineering projects are those that only require me, my thinkpad, emacs, some man pages, and a C++ compiler. :D And those are the ones I do in my spare time.



