
Jenkins's biggest strength is also its biggest weakness: plugins. Any development shop that has been using Jenkins for a while is running at least a handful of plugins. Plugins are not stable; they break every now and then. They require constant updates with new Jenkins versions. They get abandoned by their creators (hell, many plugins still don't support Pipeline).

It's a fundamental issue with how Jenkins is set up, and I don't see how they can get away from it unless they abandon the whole plugin architecture altogether. But obviously that's not a solution.



They kind of made several plugins "blessed": Pipeline, Blue Ocean, Git, etc.

The core package plus these "blessed" plugins is a lot more stable than throwing every random plugin on top of a base installation. Just write a bit of glue script code and you're golden.
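
For example (just a sketch - the build command is made up), a scripted Jenkinsfile that leans only on the blessed set:

    // scripted pipeline using only Pipeline + Git, nothing exotic
    node {
        stage('Checkout') {
            checkout scm          // Git plugin, part of the "blessed" set
        }
        stage('Build') {
            sh 'make build'       // example command, substitute your own
        }
    }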


They still have their own issues. Blue Ocean requires a lot of stuff I have no need for (like GitHub support), which in some cases conflicts with stuff I do need (like Bitbucket support).


Same. Every time there's a Blue Ocean update it requires you to update two dozen other plugins, many of which I don't use and can't get rid of (like the GitHub one). And more annoying is the fact that you can't "select all" to update all plugins; you have to select them one by one.


There’s a select all link at the bottom of the plugin updates page...


Well, glad to see people using Blue Ocean, but yeah - that isn't a good look (and one of the aims mentioned is to get away from this pain, for good).

(Also, Evergreen should take care of the updating, ideally.) I will be happy when it does, as I don't want to spend any more time thinking about this!


The deeper issue with plugins is that they create global state across everything.

Switching the execution engine to Kubernetes will help a lot with operational pain. But consider the comparison to Concourse, which also predates Kubernetes. What drove Concourse into being wasn't the lack of Kubernetes (it was born during the Diego project, itself a container scheduler); it was the amount of time and pain and unsafety that came from relying on the status quo.

Kawaguchi's agenda is bold and necessary, but I think it's going to take a while to get through even half of it. But the world is better off when Jenkins improves, simply because of its phenomenal installation base. We all talk about Travis and Circle and Drone and Concourse and Gitlab here, but I would bet folding money that over 75% of actual bits going through CI are going through some version of Jenkins.

Disclosure: I work for Pivotal, we sponsor Concourse.


As soon as I saw the word Concourse, I looked at the username, and sure enough it was you.

You bring up the Concourse comparison in every single Jenkins thread.

I've considered it in the past, but it looked like there was almost no adoption outside Pivotal. Would you say adoption is increasing in 2018? I would like to give it another spin if possible.


> You bring up the Concourse comparison in every single Jenkins thread.

Because it's the reference point I know best. It would be silly of me to compare it to microbiology or the internals of Travis, both of which I'm much less familiar with than Concourse.

> Would you say adoption is increasing in 2018? I would like to give it another spin if possible.

It's nowhere near as popular as Jenkins, by at least 2 orders of magnitude, maybe 3. If that is important to you, wait a bit.


We looked at it. It was our second choice. Ultimately, its entirely self-hosted nature was its greatest strength and weakness. We like self-hosting but don't have the resources to get it going at the moment. It's interesting enough that we'll keep revisiting it.


I've done a ton of work with Concourse, and it has a very large set of tradeoffs it makes as well.

For example, resources are really nice, but there are a ton of pain points to do with how intentionally crippled the YAML pipeline format is (and in general how much repetition there is, due to the lack of looping, etc.). Also, the way I've seen people write pipelines tends to end badly for all involved.

Also, it's just very, very buggy, especially if you deploy sans BOSH, since that's basically not tested.


Author of the post here. I agree that it is our biggest strength and weakness; I acknowledge that problem and put forward some solutions in the doc.

One piece of the solution is to embrace core and a bunch of important plugins together as the foundation. Normal users shouldn't be asked to pick & choose the basics like that, and we want to lock down the combination of the versions in that group. Whether those are plugins behind the scenes or not, from the contributors' perspective, is an implementation detail.

Another piece of the solution is to grow more extensibility mechanisms beyond the current in-process plugins. There's a thing called "Pipeline shared libraries" in Jenkins, which is a good example of this. It lets developers create higher-level pipeline primitives by composing other existing ones. There's some mechanism to share those with the community, too, although not as sophisticated as plugins. From the users' perspective, it extends the capabilities of Jenkins just like plugins do, but in a way that doesn't create the kind of instability a bad plugin can -- its impact is local to one build, for example.
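
As a rough sketch (the library and step names here are made up, not from the doc), a shared library defines a new step in plain Groovy and a Jenkinsfile just calls it:

    // vars/deployToStaging.groovy in the shared library repo (hypothetical)
    // Composes existing steps into a higher-level primitive.
    def call(String imageTag) {
        sh "kubectl set image deployment/myapp myapp=registry.example.com/myapp:${imageTag}"
        sh "kubectl rollout status deployment/myapp"
    }

    // Jenkinsfile in a project that consumes the library
    @Library('my-shared-lib') _
    node {
        checkout scm
        deployToStaging(env.GIT_COMMIT)
    }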

Then there's the container-as-a-building-block extensibility, Jenkins Evergreen, and more...


Agree. The same happened with the Eclipse IDE.


And Firefox to some extent, except they actually fixed it by requiring all add-ons to be reimplemented. Lesser of two evils and all that.


Less related, but Minecraft servers were also a plugin mess. With N plugins, similar odds for any given plugin to break across an update, and frequent updates, you'd pretty much always have broken plugins.


I agree this weakness comes from plugins. Because the plugins are not part of the main code base you can't introduce new functionality without breaking them. So you end up with a slow pace of development while you still end up breaking installations on upgrade.

If you add functionality to the main codebase you can keep running your tests to ensure nothing breaks. This is what I think they will do with Cloud Native Jenkins: essentially abandoning plugins.

Jenkins Evergreen keeps only the essential plugins. This means they can run better tests. And when introducing new functionality you can update the essential plugins.

With GitLab CI we add new functionality in the main code base, avoiding the need for needless configuration and ensuring everything still works when updating.

I have just written a more extensive analysis of the blog post in https://about.gitlab.com/2018/09/03/how-gitlab-ci-compares-w...


> Because the plugins are not part of the main code base you can't introduce new functionality without breaking them.

This is a very simplistic explanation bordering on FUD. Jenkins defines something called 'extension points': as long as you don't break the extension point contract, you can continue to add functionality. For example, the Greenballs plugin[1] is almost 11 years old and still works. Surely Jenkins added new functionality in the past 11 years.

> If you add functionality to the main codebase you can keep running your tests to ensure nothing breaks.

Another comical statement. You only need to write tests against the contract of the extension point and make sure you don't break that contract.
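
To make the contract concrete, here's a minimal sketch of a plugin hooking an extension point (the class name is made up; RunListener and @Extension are the standard Jenkins plugin API):

    import hudson.Extension;
    import hudson.model.Run;
    import hudson.model.TaskListener;
    import hudson.model.listeners.RunListener;

    // RunListener is an extension point: core discovers this class via
    // @Extension and calls onCompleted() after every build. As long as core
    // keeps that contract stable, the plugin keeps working across upgrades.
    @Extension
    public class BuildLoggingListener extends RunListener<Run<?, ?>> {
        @Override
        public void onCompleted(Run<?, ?> run, TaskListener listener) {
            listener.getLogger().println("Build finished: " + run.getFullDisplayName());
        }
    }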

> I have just written a more extensive analysis of the blog post in https://about.gitlab.com/2018/09/03/how-gitlab-ci-compares-w....

This is full of misinformation too. E.g.: you can check in a Jenkinsfile at the root of your git repo; you don't have to copy it around.

I don't want to attribute maliciousness to you, but I hope you correct the blog post.

1. https://github.com/jenkinsci/greenballs-plugin/


I agree my explanation is simplistic, but my intention wasn't to spread FUD.

If you don't break the extension point contract, plugins shouldn't break, but not breaking it is hard. Hence the breaking plugins.

The extension points also make it harder to improve Jenkins since they can't be changed without breaking plugins.

And when you introduce a new concept, like pipelines, with a plugin, there isn't a well-defined extension point for other plugins.

I'm aware of the Jenkinsfile functionality but I think this is different. If you follow the link "Jenkins Configuration as Code" in https://jenkins.io/blog/2018/08/31/shifting-gears/ it points to https://jenkins.io/projects/jcasc/ which has plugin management https://github.com/jenkinsci/configuration-as-code-plugin/bl...

I don't think you can do plugin management in a Jenkinsfile https://jenkins.io/doc/book/pipeline/jenkinsfile/ so it seems incomplete.

I've tried to explain it better with https://gitlab.com/gitlab-com/www-gitlab-com/commit/0639c998...


> The extension points also make it harder to improve Jenkins since they can't be changed without breaking plugins.

Yes, it's a tradeoff, very similar to programming languages/libraries that people write code against. You cannot change the API (e.g. the syntax of the language) without breaking existing code. It doesn't mean a language cannot improve; Java has continued to evolve. The solution to this is not to have all the Java code in the world in one repo.

> I don't think you can do plugin management in a Jenkinsfile https://jenkins.io/doc/book/pipeline/jenkinsfile/ so it seems incomplete.

I am not quite sure what you mean by 'plugin management', but you can use plugins in a Jenkinsfile https://jenkins.io/doc/pipeline/steps/

I think you are referring to two different concepts:

1. Managing Jenkins configuration (e.g. configuring a global npm password or global Nexus config, typically done by a Jenkins admin). This was traditionally done via the UI, and configuration as code is the effort to do it via code.

2. Managing your build configuration, done by devs setting up their builds on Jenkins. In the past this was done in the UI in the job configuration. Jenkinsfile/Pipeline is the solution for that. You can check that file into your code repo. This is the equivalent of gitlab-ci.yml.

The model here is that the admin enables and configures the plugin with defaults at the global level via configuration-as-code, and users use that plugin in their Jenkinsfile.
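
Roughly, the split looks like this (a sketch - the YAML keys assume the configuration-as-code plugin, and junit is the step contributed by the JUnit plugin):

    # Admin side: jenkins.yaml read by the configuration-as-code plugin
    jenkins:
      systemMessage: "Configured as code"
      numExecutors: 2

    // Dev side: Jenkinsfile checked into the repo, using a plugin step
    pipeline {
      agent any
      stages {
        stage('Test') {
          steps { sh './gradlew test' }
        }
      }
      post {
        always { junit 'build/test-results/**/*.xml' }   // JUnit plugin step
      }
    }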

Very similar to GitLab releasing the 'dependencies' feature on their build server so that users can use that feature in their gitlab-ci.yml.

> I don't think you can do plugin management in a Jenkinsfile

Why would a user want to do plugin management on the server in their Jenkinsfile? It would be like GitLab CI users upgrading their GitLab CI version via gitlab-ci.yml.


> Why would a user want to do plugin management on the server in their Jenkinsfile? It would be like GitLab CI users upgrading their GitLab CI version via gitlab-ci.yml.

This means that when a developer needs a new plugin, they need to ask the administrator of the Jenkins server, frequently a central IT organization.

With GitLab all the functionality is always enabled, there are no plugins to install.

I disagree that installing plugins is the same as upgrading GitLab or Jenkins itself. Although of course GitLab gets new functionality every month.


Thank you for all the work on GitLab; we are using Auto DevOps extensively. Any thoughts on how Auto DevOps morphs or adapts with Knative? It seems like Jenkins is fully Knative with CRD support.


You're welcome, thanks for commenting. Auto DevOps can probably benefit from Knative in two ways. Use Knative Build https://github.com/knative/docs/blob/master/build/builder-co... for building images. And use Knative Serving https://github.com/knative/serving to run review apps that don't use resources when not in use https://gitlab.com/gitlab-org/gitlab-ee/issues/3585#note_900...
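
For the review apps piece, the idea is roughly a Knative Service like this (a sketch; it assumes the Knative Serving v1 API, and the name and image are placeholders) - Knative scales it to zero when idle:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: review-my-feature-branch
    spec:
      template:
        spec:
          containers:
            - image: registry.gitlab.com/mygroup/myapp:abc1234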


Part of the ideas mentioned is to address this stability problem and not depend on in-process plugins (a new extensibility architecture that won't hurt stability). There are many things in plugins which should be core functionality (and will be).


That's just the thing. Why has there been so little adoption of commonly needed functionality as part of core, or at least as officially supported plugins?

Like, there's no official backup functionality? And why is version control not standard in 2018? This isn't something you just bolt on, or incorporate as a response to competing products.

I think they should abandon all hope of Jenkins being competitive. It should remain the weird old-school universal tool it always was, and be relegated to legacy systems, like the Apache web server.

Jenkins was useful, but it's living in the past and trying to solve the wrong problems.


>That's just the thing. Why has there been so little adoption of commonly needed functionality as part of core, or at least as officially supported plugins?

Oh, there are core bundled plugins, official ones, etc. - they are just core functionality that happens to be implemented as plugins.

>And why is version control not standard in 2018?

It is and always has been - "git" support used to not be included by default, but that was a while ago (it is included now).


I agree, the next big step would be shifting the plug-in system (wiki, downloader, etc.) to the "global pipeline libraries" level.


Yeah, I see 'plugins' being around for a while, but docker steps becoming the more cloud-native long-term alternative; they're more reusable standalone and don't require changing a Jenkins master (or even require a Jenkins master at all, for ephemeral build pods).
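
E.g. something like this (a declarative pipeline with a docker agent; the image is just an example, and it does assume the Docker Pipeline plugin is present):

    pipeline {
        // the whole build runs inside a throwaway container,
        // so the tools don't have to live on the Jenkins master
        agent { docker { image 'node:10' } }
        stages {
            stage('Build') {
                steps {
                    sh 'npm ci && npm test'
                }
            }
        }
    }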



