
CI environments like GitLab or GitHub are my nemesis. Another technology that everyone swears is absolutely necessary but somehow makes everything more complicated. The environments provided in companies have so far been hell 100% of the time, managed by inexperienced personnel with little or no programming experience.

* Barely reproducible because things like the settings of the server (environment variables are just one example) are not version controlled.

* Security is a joke.

* Programming in YAML or any other config format is almost always a mistake.

* Separate jobs often run in their own container, losing state like build caches and downloaded dependencies. That state has to be brought back by adding remote caches again.

* Massive waste of resources because too many jobs install dependencies again and again or run even if not necessary. Getting the running conditions for each step right is a pain.

* The above points make everything slow as hell. Spawning jobs takes forever sometimes.

* Bonus points if everything is locked down and requires creating tickets.

* Costs for infra often keep expanding towards infinity.

We already have perfectly fine runners: the machines of the devs. Make your project testable and buildable by everyone locally. Keep it simple and avoid (brittle) dependencies. A build.sh/test.sh/release.sh (or in another programming language once it gets more complicated, see Bun.build, build.zig) and a simple docker-compose.yml that runs your DB, Pub-Sub or whatever. Works pretty well in languages like Go, Rust or TS (Bun). Having results in seconds even if you are offline or the company network/servers have issues is a blessing for development.
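A minimal sketch of that layout, assuming a Go project; all tool, file, and path names here are illustrative, not from the original post:

```shell
# Hypothetical entry points: any dev machine (or CI runner) just calls these.
cat > build.sh <<'EOF'
#!/usr/bin/env sh
set -eu
docker compose up -d           # DB, pub-sub, etc. from docker-compose.yml
go build -o bin/app ./cmd/app  # or cargo build, bun build, ...
EOF

cat > test.sh <<'EOF'
#!/usr/bin/env sh
set -eu
go test ./...
EOF

chmod +x build.sh test.sh
```

Locally a dev runs `./build.sh && ./test.sh`; a CI job would call the same scripts, so there is only one build definition to maintain.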

There are still things like the mentioned heavy integration tests, merges to main and the release cycle where it makes sense to run them in such environments. I'm just not happy with how these CI/CD environments currently work and are used.



To be honest, some of your points can be a hindrance, but as a GitLab user, others are solvable without massive effort.

- env vars can be scripted, either in YAML or through dotenv files. Dotenv files would also be portable to dev machines
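As a sketch of the dotenv idea (file contents and variable names are made up): the same `.env` file can be sourced on a dev machine and exported inside a CI job.

```shell
# Create an example .env file (values are placeholders).
cat > .env <<'EOF'
APP_ENV=dev
DB_HOST=localhost
EOF

# Export every assignment in .env into the current environment,
# the same way on a laptop and inside a CI job.
set -a
. ./.env
set +a

echo "APP_ENV=$APP_ENV DB_HOST=$DB_HOST"   # prints: APP_ENV=dev DB_HOST=localhost
```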

- how is security a joke? Do you mean secrets management? Otherwise, I don't see a big issue when using private runners with containers

- jobs can pass artifacts to each other. When multiple jobs are closely intertwined, they could be merged?

- what dependency installation do you mean? You can use prebuilt images with dependencies for one. And ideally, you build once in a pipeline and use the binary as an artifact in other jobs?

- in my experience, starting containers is not that slow with a moderately sized runner (4-8 cpus). If anything, network latency plays a role

- not being able to modify pipelines and check runners must be annoying, I agree

- everything from on-prem licenses to SaaS licenses keeps costing more. Expenses accrue somewhere, but they can be optimized if you are in a position to have a say?

By comparing dev machines to runners, you miss some important aspects: portability, automation and testing in different environments. Unless you have a full container engine on your dev machine with flexible network configs, issues can be missed. You also need to prime every dev to run the CI manually or work with hooks, and then you can get funny, machine-specific problems. So this already points to a central CI system, by making builds repeatable in the same from-scratch environment. As for deployments, those shouldn't be made from dev machines, so automated pipelines are the go-to here. Automated test reporting also goes out the window on dev machines.


TLDR: True, most things can be fixed if configured and set up properly. It's just that the way they are often used, and the provided examples, encourage many of the problems.

Env vars can be scripted, but many companies use a tree of instance/group/project-scoped vars, which makes it easy to break projects when something higher up changes. Solvable for sure, but company guidelines make it a pain. There are other settings, like allowed branch names, that can break things too.

With security, yes, I mostly mean secrets management. Essentially everyone who can push to any branch has access to every token. And a simple typo or mixed-up variable can lead to stuff being pushed to production. Running things in the public cloud is another issue.

Passing artifacts between jobs is a possibility, but it still means data being pushed between machines. Merging jobs is also possible; it just defeats the purpose of having multiple jobs and stages. The examples often show a separation between things like linting, testing, building, uploading, etc., so people split it up.

With dependencies I mean everything you need to execute jobs: OS, libraries, tools like curl, npm, poetry, jfrog-cli, whatever. Prebuilt images work, but they are another thing you have to do yourself: building more containers, storing them, downloading them. Containers are also not composable, so each project or job has its own. The curse of being stateless and the way Docker works.
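One way to sketch the prebuilt-image approach (base image, registry name, and package list are all hypothetical): bake the job tooling into an image once, and only rebuild when dependencies change.

```shell
# A CI base image with the tools every job needs preinstalled.
cat > Dockerfile.ci <<'EOF'
FROM alpine:3.20
RUN apk add --no-cache curl git npm jq
EOF

# Built and pushed once per dependency change, not once per job:
#   docker build -f Dockerfile.ci -t registry.example.com/team/ci-base:latest .
#   docker push registry.example.com/team/ci-base:latest
```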

Starting containers is not slow on a good runner. But I have noticed significant delays on many Kubernetes clusters, even when the nodes are at <1% CPU; startup times of >30s are common. And even if it were faster, it is still a delay that quickly adds up when you have many jobs in a pipeline.

I agree that dev machines and runners have different behavior and properties. What I mean is local-first development. For most tasks it is totally fine to run a different version of Postgres, Redis and Go, for example, and Docker containers bring it even closer to a realistic setup. What I want is quick feedback and the ability to see the state of something when there are bugs, not print debugging via git push and waiting for pipelines. Pipelines that set up a fresh environment and tear it down afterwards are nice for reproducibility, but they prevent me from inspecting the system aside from logs and other artifacts. Certainly this doesn't mean you shouldn't have a CI/CD environment at all, especially for releases and production deployments.
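The local service setup mentioned above could look like this compose file (service names, image versions, and ports are illustrative):

```shell
# Write a minimal docker-compose.yml for local-first development.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
    ports: ["5432:5432"]
  cache:
    image: redis:7
    ports: ["6379:6379"]
EOF

# docker compose up -d   # then point the app at localhost:5432 / localhost:6379
```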


It's such a waste of resources to rebuild an operating system every time you want to run some tests, and these CI machines are much less powerful than personal computers, so it takes much longer in the cloud too. If you keep your CI in your own scripts, it is also easy to migrate between CI environments: build runs ./build.sh, test runs ./test.sh, deploy runs ./deploy.sh, maybe passing some env variables to the scripts. There are some advantages to CI platforms, like nightly builds/tests and automatic security scans of already-deployed software, so that you are notified when something suddenly stops working or a vulnerability is discovered.
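The script mapping described here could be wrapped in one dispatcher, so every CI platform only ever invokes `./ci.sh <task>` (the task names, and the echo placeholders standing in for the real scripts, are hypothetical):

```shell
# ci.sh: single entry point a CI platform calls; swapping platforms
# only means changing which YAML stanza runs "./ci.sh <task>".
ci() {
  case "$1" in
    build)  echo "running ./build.sh"  ;;  # would exec ./build.sh here
    test)   echo "running ./test.sh"   ;;  # would exec ./test.sh here
    deploy) echo "running ./deploy.sh" ;;  # would exec ./deploy.sh here
    *)      echo "unknown task: $1" >&2; return 1 ;;
  esac
}

ci build   # prints: running ./build.sh
ci test    # prints: running ./test.sh
```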


The resource usage is really a big problem (if you don't sell them). Being stateless is a blessing and curse at the same time. Reproducible but forces you to feed in all required data every time.

Simple scripts like these are enough for most projects and it is a blessing if you can execute them locally. Having a CI platform doing it automatically on push/merge/schedule is still possible and makes migrations to other platforms easier.


But then, but then your corporate-provided laptop with 8GB RAM and a 128GB disk running locked-down Windows Enterprise™® may need a thousand security policy exceptions and won't even fit all dependencies on its disk. Not to mention that it would be building for like 10h. Think of the shareholders! The corporation would have to buy actually usable hardware for its workers! Think of the cost! /j

For real tho, not every project can be built by everyone locally, but at least parts of it should be locally runnable for devs to be able to work (at all, IMO). What I am noticing is that more and more coding is being done on some server somewhere (GitHub Codespaces anyone? Google Colab? etc.).

What I am also noticing is that with tools like GH-A there is not really a way to test the CI code other than... commit, push, wait, commit, push, wait... That's just absurd to me. Obviously all CIs have some quirks where you sometimes have to _just run it_ and see if it works, but here it's like that for everything! Absurd, I say!


True, many of the problems are not solvable by technology alone. Hostile environments can be created for every approach if the corporation doesn't know or care how to do it properly. Luckily I'm blessed that my employers mostly give me admin permissions on my machine and provide decent hardware. As for the hardware my customers force me to use though... let's say at least I have some free time for other things.

Laptops are a lot cheaper than the cloud bills I have seen so far. Every tiny thing under $100/€100 gets penny-pinched, but the cloud seems to run on an infinite magic budget...


Why even do automated testing too? Devs should just test their code. If they were doing their jobs there would be no bugs./s

Your opinions are so regressive you really should consider going into management.


How does being able to run stuff locally even hint at me having such an opinion? The processes, including tests and releases, are still automated; only the trigger and where they run differ.

Nowhere do I say you shouldn't use CI/CD at all. I just don't like the current CI/CD implementations and the environments/workflows companies I worked for so far provide on top of them.

The regressive thing is putting everything ONLY on a remote machine with limited access and control, taped together by a quirky YAML-based DSL as a programming language and still requiring me to program most stuff myself.




