This has been mentioned on HN before and is a good place to start IMO - What can a technologist do about climate change? http://worrydream.com/ClimateChange/
They have a caveat: "Some shows record their own ads which Stitcher cannot remove". It would be nice if they marked which shows were actually ad-free. It would be even nicer if they put auto-skip markers over the "unremovable" ads. We've had that technology for TV for more than a decade (comskip.exe, MythTV's comflag, etc.).
The author fails to mention that we could solve the problems that article brings up by making everyone work from 0800 to 1700, globally, once everyone is on UTC. I think people are just bull-headed enough to make this work, even if it means some people never see the Sun ever again. If China can spread one time zone over five, spreading one time zone over twenty four is just a matter of The Same, But Stupider.
There are two reasons; the first is as stated there. The second is my job. I've been employed as a support engineer for corporate use of Firefox/Thunderbird for 11-12 years, and that knowledge and trust are based on my add-on development experience. So I feel I've already monetized my add-ons, and accepting donations would look like double-dipping now. (After I retire I may accept donations, if Firefox and my add-ons are still alive.) Anyway, I just want to say thanks to all!
> i hope to keep having a privilege to say "no" about requests not matched to my vision
I totally understand and respect that. I bet it leads to better, more focused software rather than 10 mostly-similar projects that all almost do what you want. Kudos to him!
From the GitHub issue thread, I see a lot of people angry that their production deployments failed. If you point directly at an external repo in your production deployments, you'd better not be surprised when it goes down. Because shit always happens.
If you want your deployments to be independent of the outside world, design them that way!
Same thought here; uncached (re)installs feel like bad practice. Poor disaster planning, but also prone to introducing inconsistencies in your software deployments.
Seriously. My favorites were the complainers sagely tutting about Docker's "single points of failure", and in the same breath complaining about Docker causing pain for their own infrastructure.
Yep, even for small personal projects I deploy Docker apps by saving the image and transferring it to the production server from my laptop. It's so easy to do that I can't see any justification for not doing it, especially in any kind of commercial deployment.
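For the curious, here's a minimal sketch of that workflow in Python (the image tag, host, and paths are hypothetical; the underlying commands are just docker save, scp, and docker load):

    import subprocess

    IMAGE = "myapp:1.4.2"            # hypothetical image tag
    REMOTE = "deploy@prod.example"   # hypothetical SSH target

    # Export the image to a tarball on the laptop...
    subprocess.run(["docker", "save", "-o", "myapp.tar", IMAGE], check=True)
    # ...copy it to the production host...
    subprocess.run(["scp", "myapp.tar", f"{REMOTE}:/tmp/myapp.tar"], check=True)
    # ...and load it into the remote Docker daemon. No registry involved.
    subprocess.run(["ssh", REMOTE, "docker load -i /tmp/myapp.tar"], check=True)

The same three steps work as plain shell commands; the point is that the production server never has to reach a registry at deploy time.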
Maybe this is the norm in big enterprises, but I have not actually come across any company which hosts a local package repository for commonly available packages.
In big enterprises, core production infrastructure frequently has NO access to internet, ever.
Most places I've worked at have had local repositories, procedures, and timelines for anything from Microsoft and OS updates to development stacks and libraries.
Neither workstations nor servers get updated directly from external/vendor/open repositories - it is all managed in-house.
Slower, yes; more work, yes; but that's exactly the type of issue it's meant to prevent :)
Really? We certainly do here. Linux packages are mirrored. Programming dependencies are mirrored via tools like Nexus or Artifactory.
Like tsuresh said - stuff happens. What if your internet connection went down for a long period of time? You couldn't continue working. It takes very little to set up, gives you failover, and also makes installing dependencies sooo much faster.
You don't need a seamless, robust process for dealing with the occasional remote failure (especially when there are mirrors), but you can, for example, save snapshots of dependencies.
You should be able to do something in an emergency, even if it requires manual intervention. If you can only shrug and wait, that's bad.
At work, we've got our prereqs stored in a combination of Artifactory and Perforce. Even for my own personal projects, I'll fetch packages from the Internet when I'm setting up my dev environment, but actually doing a build only uses packages that I've got locally.
It's a little mind-boggling to me that anyone would rely on the constant availability of a free external service that's integral to putting their product together. I handle timestamps for codesigning through free public sites, but I've also got a list of 8 different timestamp servers to use, and switch between them if there's a failure.
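A rough sketch of that fallback loop, in Python (the server list and the signtool invocation are illustrative, not the commenter's actual setup):

    import subprocess

    # Illustrative list; substitute your own timestamp servers.
    TIMESTAMP_SERVERS = [
        "http://timestamp.digicert.com",
        "http://timestamp.sectigo.com",
        # ...more fallbacks...
    ]

    def sign_with_timestamp(binary_path: str) -> None:
        """Try each timestamp server in turn until one accepts the request."""
        for url in TIMESTAMP_SERVERS:
            # signtool's /tr flag points at an RFC 3161 timestamp server;
            # swap in whatever signing tool and cert selection you actually use.
            result = subprocess.run(
                ["signtool", "sign", "/a", "/fd", "sha256",
                 "/tr", url, "/td", "sha256", binary_path]
            )
            if result.returncode == 0:
                return
        raise RuntimeError("all timestamp servers failed")

The design choice is simple: any single free service can vanish, so the tooling treats every external endpoint as replaceable.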
You don't see well-written, solidly thought-out code as the norm either, for pretty much the same reason. It takes experience and guided thought to get to that point, and seasoned sysadmins (who have this worked out) aren't exactly the crowd considered sexy nowadays.
Infrastructure that works has very little drama. Without drama, you're not in anyone's sphere of attention, and being outside that, there's no reason to be sexy.
I agree. I'm a sysadmin myself, and the best compliment I've ever received came after a heavy rewrite of a firewall script (the old one was a mix of three different styles, an unmanageable mess after several years of work): my colleague asked me when I was going to deploy the new firewall, two or three weeks after it had actually gone into production. It was so smooth that nobody noticed.
Everywhere I've worked, mostly very small companies for the past decade, has kept local mirrors of critical infrastructure we depend on, specifically distro repositories, CPAN, etc. It's just a sane best practice. It really doesn't take all that much work to run apt-mirror or reprepro.
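To give a sense of how little work it is: an apt-mirror setup is roughly one config file (a minimal sketch; the distro, suites, and paths here are only examples):

    # /etc/apt/mirror.list
    set base_path /var/spool/apt-mirror
    deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
    clean http://archive.ubuntu.com/ubuntu

Run apt-mirror from cron, serve the mirror directory over HTTP, and point your machines' sources.list at it.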
DevOps for a startup here: we run our own repo for critical packages, specifically to ensure AMI baking and deploys don't break when a package isn't available for whatever reason (and we pin package versions).
After reading the bug report I am surprised so many people are using remote package repositories for their build machines... then again, I'm not too surprised, I guess.
I'm not that familiar with Docker, but I am familiar with package/dependency management (deb, jars, npm, eggs, etc.), and you most certainly want to use a mirrored package repository (JFrog, Sonatype, or whatever) for this reason and many others (bandwidth, security, control, etc.).
So if you did have issues with the outage, I would look into one of those mirroring tools. At a minimum it will speed up your builds.
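For what it's worth, once a mirror like Artifactory or Nexus is up, pointing clients at it is usually a one-line config; for example, for Python packages (hostname and repo name are hypothetical):

    # /etc/pip.conf on the build machine
    [global]
    index-url = https://artifactory.example.com/artifactory/api/pypi/pypi-remote/simple

npm, Maven, apt, and the rest have equivalent one-line settings pointing at the same server.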
I wonder how this all works if you have really small, non-critical systems and you're using SaaS and PaaS infrastructure (hello cloud computing, not doing stuff in-house) like Travis CI, where you aren't in control of their repositories. These kinds of services (like SaaS CI) aren't new, but they make life a lot easier than running your own CI (looking at you, Jenkins). Not everybody wants to replicate whole repos for everything. Also, some people (like me) want to distribute open source software (which won't install itself on the client side if one component on the Internet fails) rather than run mission-critical single-point services.
Truth here, except running your own repos to get immutability shouldn't be something you have to do for infrastructure. Aptitude and package managers of its ilk need to die.
You don't have to run your own repos for immutability. You just have to use repositories that say up front what will - or will not - change, and then don't do anything on top of them that expects something different from what that particular repo provides.
The question is why anyone would expect immutability after pointing their package tools at a mutable repository.
I'd argue that's still a bit of a pain compared to a fulltext search in the browser. I can already search the rest of the repo and issues that way, so why not the wiki?