
Now I want to write a program that plays that game...


> These photographs do not contain individuals working in British intelligence or document MI6 equipment and locations

Or so they want you to believe...


In my experience, if you get more builders/faster hardware, someone higher up will end up asking for more items in the build-matrix, and the CI time will balloon up again.

It's rarely just a question of allocating money to more/better hardware; it's also a question of policy, and of your organisation's willingness to keep CI time short and feedback fast.


That's why, in my opinion, you should always have the option of building locally (just what you need, when you want it) instead of having to go through a slow CI/CD pipeline.


I built up our CI pipeline until it was the faster way of running tests. You can rent more compute than you can carry...


Just provisioning a new node, deploying a new Kubernetes worker, and downloading the source and pre-built artifacts takes longer than building things incrementally on my machine.

Also, my local machine has resources entirely dedicated to me and isn't held back because someone else decided to rebuild the world.


Why does your CI do all that after you start the build, if it happens every time? Developer time is expensive.

Scale up ahead of time, so there’s always a machine ready. Prefetch the repository / packages when CI workers start, so they’re already installed before a build is triggered. Use your imagination - CI doesn’t have to suck.
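
For instance, a warm-pool loop, sketched in Python (the `cloud` client and its methods here are hypothetical, just to show the shape):

    import time

    WARM_POOL_SIZE = 2  # idle workers kept ready at all times

    def keep_pool_warm(cloud):
        """Top the pool up so a build never waits on provisioning."""
        while True:
            idle = cloud.count_idle_workers()        # hypothetical API
            for _ in range(WARM_POOL_SIZE - idle):
                worker = cloud.start_worker()        # boot ahead of demand
                worker.prefetch_repo_and_packages()  # clone + install before any build is triggered
            time.sleep(30)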


If you scale up ahead of time then it's not really on-demand; that means you have dedicated hardware you're paying much more for.


The marginal cost to keep a large-ish spot instance running 99% of the time is dirt cheap (e.g. $400/mo to keep one extra c7g.8xlarge running).
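
Back-of-the-envelope, assuming a spot price of roughly $0.55/hour for a c7g.8xlarge: $0.55 × ~730 hours/month ≈ $400/month.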

If you're paying for an engineering team, that's a rounding error.


> will end up asking for more items in the build-matrix, and the CI time will balloon up again.

I don't think it has to end like that. You can have separate queues and separate levels of assurance. For example, does every commit have to be tested in each of 20 possible configs? You can run one common config by default and allow unblocking the rest on demand. Then enforce all of them only as a merge gate.

If you can also split them into separate queues that don't block each other, you get both a larger matrix and faster dev turnaround.
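
As a sketch of what the selection could look like (the config names and environment flags are made up for illustration):

    import os

    FAST_CONFIG = "linux-x86_64-py3.12"  # the single default config
    FULL_MATRIX = [                      # all 20 combinations
        f"{osname}-{arch}-py{py}"
        for osname in ("linux", "macos")
        for arch in ("x86_64", "arm64")
        for py in ("3.9", "3.10", "3.11", "3.12", "3.13")
    ]

    def configs_to_run():
        # Merge gate or explicit opt-in: enforce the full matrix.
        # Any other commit: only the one fast config by default.
        if os.environ.get("MERGE_GATE") or os.environ.get("RUN_FULL_MATRIX"):
            return FULL_MATRIX
        return [FAST_CONFIG]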


Sure, and you are proving my point. You have to allocate time for someone to reply to all those questions and update the CI to only test what's necessary, or prioritize some tests and run the rest only if those pass, etc.

So you do need policies to actually be allowed to optimise CI.


This aspect is pretty key. My observation has been that build systems will be as slow as people will tolerate.

Effort and money will only be expended to make things faster when the build times are perceived as intolerable.


The solution is a build system that scales by caching between runs - most PRs only change a few files, after all.
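
A minimal sketch of that idea in Python, keyed on content hashes (a real build tool also tracks the dependency graph, compiler flags, etc.):

    import hashlib, json, pathlib

    CACHE = pathlib.Path(".build-cache.json")

    def file_hash(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def build(sources, compile_fn):
        cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
        for src in sources:
            h = file_hash(src)
            if cache.get(src) == h:
                continue             # unchanged since the last run: skip it
            compile_fn(src)          # rebuild only what actually changed
            cache[src] = h
        CACHE.write_text(json.dumps(cache))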


As I replied to the sibling comment: yes, I agree, but caching is not magic. You need to know what to invalidate, and you need to be allowed to spend time making test groups that run depending on what has changed. So without a policy that lets devs work on optimizing CI and reducing its time, you can list plenty of strategies to make it faster, but it will likely not happen.


Caching is pretty damn simple with any sensible build tool that stores hashes of files and modules. Caching in the JVM ecosystem is pretty much free, for example.


Sure, but you seem to be reading this blog post with the narrow view of building as only "compiling an artifact", and not running the test suite across many platforms/conditions, or doing any other work. I mean, the author even points out that building the kernel is only a use case that takes enough time for measurement purposes. For some projects I work on, the compilation itself takes at most maybe 1/10 of the total build and test time. So even with infinitely aggressive compilation caching I would not gain much.


Nope, our Android app has caching, both for building APKs/AABs and for test suites. PRs run the entire test suite (on too many variants, even; I forgot to remove some unneeded stuff); whatever hasn't changed or hasn't been impacted by the changes simply gets a FROM-CACHE and isn't rerun.


That's great for your Android app, and you're lucky you have such a thing. But your experience is not generally applicable to all of software development. There are many, many, many build systems out there in the wild that cannot tell which unit tests are affected by which code changes. I'd wager something like 99% of them.


You can cache tests too, if you properly capture the inputs.
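
Sketch of the principle (capturing *all* the inputs, including environment, dependencies, and data files, is the hard part; this only hashes files):

    import hashlib, json, pathlib, subprocess

    RESULTS = pathlib.Path(".test-cache.json")

    def inputs_key(test_file, dep_files):
        """Hash the test plus everything it reads: same key, same outcome."""
        h = hashlib.sha256()
        for f in sorted([test_file, *dep_files]):
            h.update(pathlib.Path(f).read_bytes())
        return h.hexdigest()

    def run_cached(test_file, dep_files):
        cache = json.loads(RESULTS.read_text()) if RESULTS.exists() else {}
        key = inputs_key(test_file, dep_files)
        if key in cache:
            return cache[key] + " (from cache)"
        passed = subprocess.run(["pytest", test_file]).returncode == 0
        cache[key] = "passed" if passed else "failed"
        RESULTS.write_text(json.dumps(cache))
        return cache[key]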


Caching can often cause more issues than it solves.

While largely fixed now, older C# projects suffered from separating DLL references from the reference to the package that provided them.

This led to a situation where you might be referencing a package that hadn't been downloaded. Due to the package cache, the solution would typically build in a "dirty" environment (like the developer's machine, or anything that had previously built the project) but would fail in any fresh environment.


I build a large app where every one of its roughly 100 outputs (DLLs) is tagged with a version number that also contains the git commit SHA, etc.

It's great for keeping things tidy and not having to break the whole thing up into modules like "UI 21.0.12.11 needs Backend 12.2.1.11 or greater". Instead everything builds more or less into a monolith. But it does have one drawback: making a commit invalidates every bit of the build output immediately.
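
The stamping itself is cheap, something like this (the version scheme here is made up; the git invocation is standard):

    import subprocess

    def build_version(base="21.0.12"):
        sha = subprocess.run(
            ["git", "rev-parse", "--short", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return f"{base}+{sha}"  # e.g. "21.0.12+a1b2c3d", stamped into every DLL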


If it's ever a problem with the Jupyter trademark, you should be able to contact the Jupyter trademark committee at jupyter-trademarks@googlegroups.com. I'm not part of it anymore, but they might be able to help with any TM-related thing. It may have been autodetection of the "Jupyter" keyword on a non-Jupyter domain?


Seeing your edit: then please consider dropping a kind word to the dev: https://gitlab.com/inkscape/inbox/-/issues/7050




Honestly, that makes you look like an ass.

You can "login with GitHub" on GitLab, and the minimum you can do for an open source project that is given to you for free is to make a tiny bit of effort to respect the maintainer choice. Really I assure it's _that_ easy, even I did it a few days ago.

If you are not capable of clicking three buttons to log in on a platform and say "thanks" to a maintainer on an issue that is already there, but have the time and energy to complain on HN, I'm not sure I would want you as a user.


@dang is this sort of commenting acceptable?


Try to back away from the situation and see how you're coming across here. It's a free product, and you're behaving in a really entitled way.


I'll happily donate to Inkscape, but this is the first open source product that I have seen that uses GitLab!

That's truly odd, why?

Edit: thank you for your perspective.


You don't see the issue with a GitHub monopoly? And reporting an issue really isn't any different on GitLab than on GitHub; you can even log in with your GitHub account.


Honestly, this is the first time I am hearing the concern that a source control platform can be a monopoly. It doesn't seem to be a concern for most of the open source projects that I use every day; they are all hosted on GitHub.


Any thoughts on whether the marketplace could be used for "certified" refurbished older components?


Have you seen https://acko.net/blog/on-termkit/ from 2011? Comments from around that time on various sites may give you insight.


One million cells made me think of https://lumino.readthedocs.io/en/latest/examples/datagrid/in... which has a "trillion" cell demo :-) All the best.


With a virtual data model it doesn't really matter; here's our 2-trillion-cell model: https://bl.ocks.org/texodus/483a42e7b877043714e18bea6872b039
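
The trick is that nothing is materialised; cells are computed only when the viewport asks for them. A toy Python sketch of the idea (not the actual implementation):

    class VirtualModel:
        """Pretends to hold rows x cols cells; values exist only on demand."""
        def __init__(self, rows=2_000_000, cols=1_000_000):  # 2 trillion cells
            self.rows, self.cols = rows, cols

        def cell(self, row, col):
            return row * self.cols + col  # derived on the fly, nothing stored

    model = VirtualModel()
    # The grid widget only ever asks for the ~1000 cells in view:
    print(model.cell(1_999_999, 999_999))  # the last cell, computed instantly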


I wonder if this kind of UX is just for fun, as in: let me scroll through with no actual sense of the data. At the end of the day, finding something happens via search and filter.


Because now IPython will automatically reformat your code with black while you type it?

It should fail gracefully, though, if it can't import black.
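
Roughly this shape (a sketch of the fallback, not IPython's actual code):

    def reformat(code):
        try:
            import black
        except ImportError:
            return code  # black not installed: leave the input untouched
        try:
            return black.format_str(code, mode=black.Mode())
        except Exception:
            return code  # partial/invalid input: never crash the prompt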


It's probably feasible; I need to look into how the suggestion is stored and displayed. You seem to have looked into it more than I have; do you want to open an issue with your thoughts?

I'm also hoping to integrate with https://pypi.org/project/friendly-traceback/ at some point.


Thanks, I opened an issue for discussion: https://github.com/ipython/ipython/issues/13445

