Updating a program I use should never constitute "messing" with my OS, it should constitute "using". Why should I as an OS user have to constantly worry about upsetting the OS maker? Why should I have to jump through obscure hoops just to install some software without breaking my OS?
Wouldn't a better solution be for the OS makers to wall off their magic Python somewhere I cannot see, touch, or use it, and let me install whatever Python I want without fear?
In the ancient days, stuff the OS relied on was in /usr/sbin and stuff that was "yours" was in /usr/local/bin (or /opt) and stuff that no-one was quite sure whose it was was in /usr/bin. And lo, peace and harmony did reign upon the face of Unix. And the sysadmin did lie down with the developer, and it was good.
If there is one thing that ticks me off about modern unix it is that to install <insert minor package here> I end up having to install / upgrade half the rest of the system because of all the interdependencies between packages.
Tell me about it! I just wanted to install folding-mode for Emacs the other day, dselect wanted to install a new sodding IM client as well! (Something in emacs-goodies-el depended on it). About 50M it came to!
That is a pretty wild example, but it is common in software today.
Microsoft does stuff like this all the time, deprecating good .NET libraries for stupid ones just because of God knows what reasons (none of them intelligible to non-political folk).
I think that's more symptomatic of the open source culture around most Unix systems than Unix itself.
Being able to share and depend on other people's libraries is great for the developers as long as they're not tied to providing support for when users can't compile or run their software.
The problem is that about 80% of packages in distributions could be compiled against older versions of their dependencies, but never are. By default, distributions record in the package a dependency on the library versions currently installed, ignoring the fact that the software could have been compiled against an older version of the library too.
There's no mechanism for identifying such minimum possible dependencies.
What you mention is a manual mechanism which is not often used. Instead, developers say:
Requires: python
The automatic mechanism in place is to run ldd, identify the list of libraries used by the executables distributed, and mark those libraries as dependencies. Since the package is built on your newer Debian 5 system, it automatically requires the newer libfoo (like python-2.7), while it would have worked just fine with the older libfoo (python-2.5).
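Roughly this, as an illustration only (not any distro's actual packaging code):

    # Illustrative sketch of the ldd-based approach, not a real packaging tool.
    import subprocess

    def shared_lib_deps(binary):
        """List the shared libraries a binary is currently linked against."""
        proc = subprocess.Popen(["ldd", binary], stdout=subprocess.PIPE,
                                universal_newlines=True)
        out = proc.communicate()[0]
        deps = []
        for line in out.splitlines():
            parts = line.split()
            if parts and parts[0].startswith("lib"):
                # e.g. "libpython2.7.so.1.0" -- whatever happens to be installed now
                deps.append(parts[0])
        return deps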
Gentoo's package manager, Portage, is written in Python, so Python presents a special challenge when upgrading or using a different version than the default.
What about stuff that belongs to me (the sysadmin) but runnable by all users? /usr/local/bin? /opt/bin? /my/homemade/solution/bin?
There is a bunch of stuff that doesn't (shouldn't) belong to the OS, but should still be runnable by all users without each of them needing a copy in their $HOME directory.
FreeBSD installs all packages in /usr/local/bin and /usr/local/sbin
You can blow away your /usr/local and start from a new ports tree and build it all back up without adversely affecting the system.
Everything system related is in /bin, /sbin, /usr/bin, /usr/sbin depending on whether it needs to be accessible when the system has just / mounted or when /usr is mounted as well.
So in single user mode with just / mounted you have access to various different utilities. Mount /usr and you can expand upon that.
Back in the old days, when FreeBSD required Perl in its base system, switching to a newer version didn't break any of the base system tools (or maybe I just never noticed).
Well exactly. People who never learnt the old-skool ways always reinvent the wheel in an over-complex way. /usr/local/bin is stuff for the users, but which the sysadmin installs. So /usr/bin/python for the OS and /usr/local/bin/python for your users.
> What about stuff that belongs to me (the sysadmin) but runnable by all users?
If you're writing quick ad-hoc scripts and don't have time to package, /usr/local/bin.
Per above, Python is used by your OS. It's installed by default, and not uninstallable, because the OS needs it. We can't and probably don't want to change that, so we'll need to live with it.
Use virtualenv. If you need to, install a quick Python 2.6 package that slots in alongside your OS package rather than removing it - using RHEL 5 as an example, you'd install 'python26' alongside your existing 'python' package.
Not to nitpick here, but I want to clarify something you said.
> Per above, Python is used by your OS. It's installed by default, and not uninstallable, because the OS needs it.
The OS doesn't need python. It is some package you have installed that needs python, like Gnome(py-gtk2) or KDE(py-qt), for your desktop environment or some application you decided you needed. If you don't understand what program on your server required python to be installed, then you shouldn't be administering a server. Having large numbers of language interpreters and compilers on a server can be considered a security risk, since after someone gains access, that provides him or her a large number of options of what kind of code he or she can run. Even having a C compiler is a risk, since code can then be compiled into programs already present on the system. Security should be first in mind when developing an application server.
On the other hand for a development environment, there should be nothing stopping any distribution from installing multiple language interpreter versions. The default `/usr/bin/python` or `/usr/local/bin/python` can be a symlink to python25, python26 or python27, and when the user needs to run a different version, they explicitly call the binary with the version number. This whole virtualenv method just to parse a configuration file of a program not written in python seemed a little excessive.
> The OS doesn't need python. It is some package you have installed that needs python
No. The installer and packaging system need Python. Anaconda and Yum depend on it. Try removing it and see. You cannot install a minimal RHEL or Fedora system without Python. I am fairly sure this applies to other distros too.
What's to stop you from patching and compiling programs yourself to get rid of the python dependencies of your package manager?
If the Linux distro you choose doesn't meet your needs, why would you continue to use it? If your Fedora install requires it to be one way, but that doesn't fit the needs of the server, it would seem one should reconsider why they are using that distro.
If you're just running Linux on your PC in your dorm room, go for it.
If you're running 10,000 machines you don't make changes that you haven't regression tested and aren't committed to supporting. Effectively, you'd need to write your own package manager, maintain your own repository and deal with upstream yourself... You'd be Red Hat.
You answered your own question - /usr/local/bin. I don't know about MacOSX but on Linux, this is the solution to your problem - "stuff that belongs to me (the sysadmin) but runnable by all users".
People who don't do Python won't/can't use virtualenv. Also, you don't think that's a nasty workaround for something that should just be a non-issue? That's like you telling me to use NetBSD pkgsrc just to get around some ancient linux (which I have done).
Hi Zed. For the specific case of needing Python 2.6 on RHEL, ask your user to install the 'python26' package from EPEL. It's a semi-official source of packages for RHEL, including newer versions of OS tools that install alongside the stock versions.
My point is that /usr/bin/python probably shouldn't be a core OS component. /usr/bin/python should be whatever I want it to be. The core OS-component should be something like /sys/bin/centos-python which I will never directly use, touch or even necessarily know about.
The bottom line I guess is that mixing core OS components and user installed software in the same directory structure with no way of differentiating them is simply bad design.
> /usr/bin/python should be whatever I want it to be.
Why /usr/bin/python? /usr/bin/ has been for ages the "system" area - why would you want to suddenly usurp it for yourself?
/usr/local/bin/python will on most Linux systems be executed ahead of /usr/bin/python, because /usr/local/bin/ usually precedes /usr/bin/ on the $PATH. Thus you can have your cake and eat it too.
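A tiny illustration of the lookup order:

    # Which "python" does a bare command pick up? The first PATH entry that has one.
    import os

    for d in os.environ["PATH"].split(os.pathsep):
        candidate = os.path.join(d, "python")
        if os.path.exists(candidate):
            print(candidate)   # first match wins, e.g. /usr/local/bin/python
            break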
> The bottom line I guess is that mixing core OS components and user installed software in the same directory structure with no way of differentiating them is simply bad design.
Linux distro makers agree with you - if you do not customize environment variables, user-installed software goes into /usr/local/ and the OS package-management (which can be considered "system" I guess) puts executables into /usr/{bin,sbin}.
This tends to work well as long as you keep in mind that everything outside of /usr/local/ (or your customized installation path) may change at any update, subject to the will of the "system". I think this is a reasonable setup - it gives the OS vendor the ability to update the system, while giving you free rein in your /usr/local/ playground.
> OS package-management (which can be considered "system" I guess)
This is where I and the distro makers disagree. When I use synaptic to install the latest version of foobar, that is not in any way "system" and foobar should not be dumped in the same directory as core system binaries. There should be a directory that contains everything core to the OS (like python 2.4), which is only updated by the OS updater routine (which should be different from the update-all-the-software-I've-installed routine). Then there should be a directory which contains everything I've chosen to install (like python 2.7), which is where everything that I install via whatever is the normal way to install things on that OS ends up. I don't care too much about what these directories are called, but having them be the same, as it is now, is not an ideal solution in my mind.
We're getting into the debate of what constitutes a "core system", and I'm afraid there's as many answers to this question as there are debaters.
As far as the update routines are concerned - you can already have those, by self-selecting which packages you want to update. As a pro-active measure, you can even lock down packages which should never be updated.
This seems to be fine to me - the advanced user has the means to control the update mechanism, and the casual user can just do an all-encompassing "apt-get upgrade" or an equivalent to make sure that she has all the security fixes.
It seems like you want BSD ports, or some alternate package repository that has /usr/local as the root. The packages from the OS repository are the OS.
because, as the other poster mentions, /usr/bin has been a place for non-essential (not required by the machine to boot) software for thirty five years.
I know, but that doesn't necessarily make it a good solution.
I've been a sys admin for several large unix systems and variations on the problem Zed describes needed to be hacked around on each and every one of them.
I think we agree. If I were doing an OS now, I'd also ignore the FHS completely, but more to the point: I wouldn't use anything that remotely resembles it either.
/ would have one dir:
/myos
And all OS components are under there:
/myos/bin/python
People can install anything they want outside /myos, but I'd make it very clear that the OS owns this folder and it should not be modified.
I did a little work on this a while back - an LFS system is a good place to start. I was a little ambitious though - I wanted a pure C/python system with no shells (at all - just ipython) and no command scraping - just enough to start SSH and nginx.
It's a lot of work - I think I ended up trying to write an iproute2 replacement using ctypes and quit. I wonder if anyone else would be interested...
I am curious: how do you think virtualenv and pip would solve the issue of antiquated python? The only thing that virtualenv does is to install packages in some temporary location that you can throw away later without impacting anything outside python.
Virtualenv does nothing to solve those issues. I am actually wondering whether it doesn't even aggravate them, because people think those tools magically solve the backward compatibility issue (which is the underlying issue here). If something as trivial as virtualenv could solve those issues, people distributing and packaging stuff would have done something similar ages ago.
Create your virtualenv with --no-site-packages. Of course you have to use a different Python to run virtualenv, it's not that hard to build but if you have something against that it shouldn't be too hard to find a newer binary out there somewhere.
So, what's the complaint here? You have to obtain a new version of Python if you want to use a newer version of Python than what comes with the system? That's pretty self-evident imo.
Distro makers hate statically-linked stuff as a general rule, and virtualenv is basically a statically-linked Python application. How many distros do you know that send out games statically linked to SDL or whatever? Most would rather have the application break. The same philosophical opposition to static linking applies to Python and virtualenvs.
If you build yourself a new python, what's the point of virtualenv? Compiling python is of course possible, although quite a chore if all you want is to configure some software.
Concerning the need to obtain a new python if you want to use a newer python: yes, it is more or less evident, but that's Zed Shaw's point, not mine. Where he has a point IMO is about the brokenness of the installed python - many distributions make too many changes w.r.t. upstream without understanding the consequences. In the distros' defense, the python packaging solutions are so poorly thought out that python itself is not helping the situation.
The point is that building a new Python just to generate virtualenvs from is not disruptive to anything else. Zed says they had to move away from Python because the system would get messed up if you replaced the distro-supplied Python. Building a new Python and merely using it from the build directory to generate virtualenvs doesn't disrupt anything in the system. We have done this on several CentOS systems and had no issue.
You just build it and don't install it, running it from the build directory or some other isolated space that won't affect the system. You _can_ create venvs from there. After you have done so, just carry your venv around with you and there's no disruption to anything. Our venvs are all kept in project-top-level/venv.
But if you rebuild python from scratch, you lose every piece of software packaged by your distribution. If you depend on modules with C extensions, it quickly becomes intractable. Building python is easy - building a recent pyqt on RHEL 5, not so much.
Also, virtualenv is hard to sell for people who do not know so much about python.
It creates an entirely independent python installation, including executable. Of course, you need to install that executable somewhere before you can create an independent copy of it, but there's a decent chance you'll be able to do that even as a normal user in your home directory.
We're talking about technology, not politics so consensus be damned. There is a fact of the matter independent of perception and lack of consensus in and of itself is not a valid attack on a position.
In this case maybe we can listen to those who claim the python situation isn't that bad and have provided specific technical solutions for evaluation.
No, unless those people are morons. The guys who actually code them know they're tools for Python developers, and maybe for sysadmins of Python solutions. Pip and virtualenv for software using Python as a config file format? Not a good idea.
Yes, pip/virtualenv works for installing things like Django.
It totally fails for something like m2sh which has to live in the system PATH so that you can run it from wherever you have your configs.
But you know, I wonder if the various distros could sort of invert this and they start using pip/virtualenv instead of everyone else working around them?
The distros should be shipping two versions of python (if they need), walling their required old version off somewhere but actively and explicitly including a newer one higher up in the $PATH so that people don’t have to mess with the “system”. The problem is the combination of “system”, “third-party tools”, and “users” trying to use the same python with completely different upgrade timelines. If all of these distros included by default a recent-ish version, and their package managers made updating that one a snap, then third-party tools could stop trying to use system python for anything at all, and users could forget that the “system” python existed.
Unfortunately, since the problematic old python versions are on existing old distro versions, it’s not clear that there’s any easy way to fix this problem.
The mongrel conf is a subset of libconfig; it doesn't use the more advanced features. Plus libconfig has bindings and versions of it in nearly every mainstream language.
The mongrel2 configuration architecture makes it pretty easy to write a libconfig configuration shell. All it would need to do is translate the libconfig stuff to sqlite.
If using libconfig is a good idea, it'll win.
P.s. imho, this is the cool thing about the Mongrel2 stuff. The good ideas can always win, because there really just isn't a whole lot to Mongrel2-core, and everything else is decoupled.
I thought about something like that, but wanted to duplicate the existing config file format so that people's configs would keep working (mostly). It actually had some cool advantages, like a nice pseudo-variable system letting you set up variables for reuse in different parts of the config file.
I haven't really seen that in other config formats, and it is damn handy.
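Something along these lines, say (just a sketch of the idea, plain assignments rather than the actual m2sh vocabulary):

    # Hypothetical sketch of variable reuse in a Python-style config file;
    # names are placeholders, not the real m2sh directives.
    static_root = "/var/www/site"

    media_dir = static_root + "/media/"
    css_dir = static_root + "/css/"
    handlers = [media_dir, css_dir]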
Can you please elaborate with some details about how "broken" python is on systems, and the situation with python 2.4? According to the article, "recent Ubuntu releases" had broken/deprecated python installations. A quick search reveals that python 2.5 has been shipping as the default "python" meta-package(?)
Can you also please elaborate on any "features" that the install was missing - are these specific modules? (I think python's sqlite3 is missing by default on Ubuntu. I'm not too sure.) I'm not questioning you or anything, I'm just slightly concerned as we had plans to ship a Python application, and when someone experienced enough has a valid complaint, it makes sense to understand the basis behind that.
It was a fairly small chunk of Ubuntus that had problems with Python versions. I made sure m2sh-py worked with 2.5 (since I had an OSX around with 2.5) and I ran into a couple people with what was a recent Ubuntu, but with a 2.4 Python. Boggled the mind, but then again people do really weird stuff with their systems.
If your application is going to depend on Python then you definitely need to evaluate who's using it. If it's like mongrel2 where sysadmins will need to install it on various Linux distros they might not control, then you're screwed. You'll need some kind of installer that hooks up the right kind of python in a safe area.
If it's more like a Desktop application, then just get it to work on the latest of the desktop Linux variants, and then have it bomb out if they don't have the right python. People on Desktop systems are used to having to upgrade to use software.
I'd very much like to know why it wasn't easier for him to simply write Python code that doesn't use anything not present in 2.4.
Just because he downloaded whatever later version he likes to his own computer, he can't expect all distros and all users to have exactly the same version across all the currently running computers; it's more than unrealistic, it's simply impossible and always will be.
> I'd very much like to know why it wasn't easier for him to simply write Python code that doesn't use anything not present in 2.4.
This isn't an "oh, a brand-new version just now came out and it's not in the distros yet" problem. Python 2.4 is six years out of date at this point. Python 2.3, which Red Hat will support until 2012, is even older (and for the longest time Red Hat didn't even use a 2.x Python at all -- they sat on 1.5 forever).
That's simply unacceptable, and creates endless headaches for people who want to distribute software written in Python.
In the Git project we target Perl 5.6 for critical code (like git add --interactive) and Perl 5.8 for some other code (like git-svn). Those are 10 and 8 years out of date, respectively. Or the equivalent of targeting Python 1.6 and 2.0.
It can be mildly annoying sometimes to have to use 5.6 features when I usually develop on 5.12 or newer, but Perl's policy of backwards compatibility makes this a lot easier than it would be in Python.
Git also has some Python code that targets 2.4, making it compatible with such an old version was relatively easy (see http://git.kernel.org/?p=git/git.git;a=commit;h=23b093ee087e... for an example). A lot easier anyway than rewriting and maintaining that code in C.
I'm less in touch with Perl language development in the past decade, but my impression is that the difference between Perl-5.6 and Perl-5.12 is much less than between these versions of Python and recent releases. This is expected when a language evolves rapidly, but it's rather frustrating to wait 10 years before you can depend on the "new" features being available.
> My impression is that the difference between Perl-5.6 and Perl-5.12 is much less than between these versions of Python and recent releases.
Your impression is correct. Perl is a more mature platform than Python, and backwards compatibility is taken more seriously. The latest Perl 5.12 release can still run most of the test suite for Perl 1 released back in 1987.
That and its wide availability and portability make it a much better target than Python for something like glue in a build system for a project that's mostly in C anyway.
But even though it's backwards compatible with old code it's still somewhat of a pain to write code that works on old releases that don't have the modules / core features you want.
Backwards compatibility also comes at a price. Many of the changes that broke old Python code broke it because old warts were being fixed in Python. There's a lot of equivalent old warts in Perl that haven't been fixed, and there's no plan for doing so (other than Perl 6).
> Perl is a more mature platform than Python, and backwards compatibility is taken more seriously.
That's a silly thing to say w/r/t Python 2.x. What makes people’s Python 2.5 code not run on Python 2.2 (or 1.x) is that new features were added in the mean time. New very shiny happy features that make life a lot easier for developers. Almost all python code continues to work from one version to the next, and it’s easy to write code that will work on all the versions from 2.2 to the present (or whatever), as long as you’re willing to avoid all the nifty stuff.
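A trivial illustration of the kind of thing you end up avoiding (the conditional expression added in 2.5 vs. the older spelling):

    size = 12

    # 2.5 and later only:
    label = "big" if size > 10 else "small"

    # The equivalent that parses on much older Pythons too:
    if size > 10:
        label = "big"
    else:
        label = "small"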
Basically by "backwards compatibility" you mean "forward compatibility".
So did Perl 5. The difference seems to be that for the newer Perl features you have to either declare the version of Perl you're depending on or import the feature explicitly.
> There's a lot of equivalent old warts in Perl that haven't been fixed
This is the crux of the issue. Serious language users (especially the ones writing the new libraries that add significant value to your platform) want the warts fixed to make the environment pleasant to work in. Meanwhile, most casual users are desperate for backward compatibility because it makes distribution easier. I actually prefer Python's choice, but pain is inevitable either way.
I should have said: They haven't been fixed by default.
Perl is also moving forward, it's just doing so differently than Python. Python has major releases like 2.6 and 3.0 where they explicitly break old code in the default installation.
Perl's model means that you can usually upgrade any code base from say 5.6 to 5.12 without major headaches. But if you try to do the equivalent with Python 1.6 to 3.0 you'll run into trouble.
There are advantages to both models, but with Perl's you rarely see users with big legacy code bases staying behind on some legacy 10 year old release of Perl that may have bugs and security issues. Users usually just upgrade along with their OS without any major pains.
> Perl is also moving forward, it's just doing so differently than Python. Python has major releases like 2.6 and 3.0 where they explicitly break old code in the default installation.
3.0 yes, but 2.6 is simply not true. Code written for 2.2 will work in 2.6, bugs notwithstanding.
IIRC nothing was, but it made available some features that would be used in 3.0, and 2.7 is essentially a back-port of stuff the 3.0 people found useful. I believe the idea is that within a major version number, everything is backwards compatible.
This backwards compatibility is a conscious decision. And it is argued about. To find a bit to read about that, Google e.g. for darkpan. (It is probably one reason why Perl 5 is so extensible.)
"There is currently some FUD going about in some parts of the Perl community about why we should break Perl 5 backwards compatibility. A short blog entry, schmarkpan, is a good example of the trend: loud, noisy, but clueless and devoid of any content."
Why is it "simply unacceptable" to produce the code to the least common denominator?
Anything that is successful will have multiple versions in the installed base. You can target a single version of something only if it's not used at all. It seems too many developers live in an "Everybody must have what I have" world.
I haven't tried it myself, but I can't believe Python changed so much that you can't write plain Python code to the least common denominator of versions >= 2.4. If you have specific examples of why you can't, I'd like to know them.
Edit: I've rechecked, he actually mentions 2.2 as the lowest version he saw, still I believe it doesn't change too much.
You can write code that runs on 2.4, yes. But you miss quite a few useful features of more recent Pythons by doing so (context managers, for example, are a big deal, as are several of the newer standard-library modules). The same thing used to happen with Python 2.3; you could write 2.3-compatible code, but it meant no decorators, no generator expressions, etc.
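Context managers are a good concrete example of the trade-off; compare (illustration only):

    # The "with" statement (2.5 needs "from __future__ import with_statement";
    # 2.6+ has it by default):
    with open("mongrel2.conf") as f:
        data = f.read()

    # The 2.4-and-earlier spelling of the same thing:
    f = open("mongrel2.conf")
    try:
        data = f.read()
    finally:
        f.close()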
Python has these useful features and has had them for years; why is it acceptable to say that it will be 5-10 years before we can reliably use anything new?
In his particular case, he certainly was not missing the newer Python modules, as he was able to write a "small" C-only replacement. What he wrote in C would certainly have been easier to write in Python 2.4 (or 2.2).
Did you understand that he wrote the big C program just to process a small file with the syntax as in the following example (taken from his own file):
No, most of that is generated from the .rl files or just SQL that's common to everything. Also, you'll want to throw in all of Storm and PyREPL if you want a real comparison.
Then again, 4600 lines of anything is tiny. You have a massively skewed view of "Big" and haven't disproven anything by finding 600 lines of cruft in one directory.
I fully agree with you that this is tiny for a C project, but it can be big compared to a small script. I also develop/maintain projects measured in zipped megabytes (where I wouldn't even want to wait for a line count!). I was only comparing it to some script solution.
You're right, 600 lines more or less are irrelevant. I just thought you were measuring something else, as I saw a total of 1000 instead of 4000. Still, it is all tiny for a real C program.
I also think that the C solution is more portable than depending on any version of Python. I also cross-compiled the code for 32 MB RAM mipsel platforms, and I agree that C-only dependencies are better than any scripting-language dependencies (not counting shell, when it's carefully written).
But I actively use both Perl and Python, so I'd still really like to know what was lacking in Python 2.2 or 2.3 or 2.4, of course only in the case where you already knew you'd have some Python on the target computer. If you couldn't expect to have any Python at all, then the fact that distros still ship older versions isn't of much relevance.
Django is in a very different situation because Python is central to all of its users, therefore it is reasonable to expect that they maintain a somewhat modern (released in the last 3 or 5 years, say) environment. When Python merely performs a subsidiary role, and your users may never use it themselves, it is far more important to support old versions.
I mentioned Django as a counterexample; the conversation seemed to be heading toward using the single specific example of Mongrel to demonstrate that there either is not a problem here or that it's not particularly serious.
Meanwhile, anyone who depends on Python for much more than a config-file parser knows what a genuine and large issue this is, and that "just write to the lowest common denominator" is not really a reasonable solution.
However, your replies started with your response "That's simply unacceptable" to my "I'd very much like to know why it wasn't easier for him to simply write in Python 2.4". You mentioned Django (in this context actually OT) much later.
And I still don't know if he tried adjusting his already existing Python code to simply be backwards compatible -- he never mentions that, neither why that would be a problem in his particular case.
Regarding Django, see the other replies here about side-by-side installations, it's more than reasonable if you have something big which always runs, but not reasonable for a small config script used once.
My point is that the lowest common denominator is higher for users of something like Django because Python is a central component of what they do, thus it's reasonable to assume that they are not using an 8-year old Python environment.
Tell that to the many, many people I know who are using Django on RHEL. There's a reason why we just (as of Django 1.2) got around to dropping Python 2.3 support...
What's the point of ever producing newer versions of a language with new features (or, golly, even bug fixes) if no one will ever use them because everyone has a 'lowest common denominator' mentality? More to the point, what is the process of changing the 'lowest common denominator' everyone's using?
Where do you get "no one will ever use them" from? We're talking about the darned config script which the guy first wrote in Python and then discovered that he would have to either write the processing of that file against a lower version of Python than the one he blindly used, or use C. And he decided to use C; I still haven't read anywhere why exactly.
Platform upgrades have their own dynamics. But you just shouldn't be surprised that somebody who has had a server running for years doesn't want to install the latest Python only to process one config script. Nothing more, nothing less.
So after complaining that he could not use new language features that appeared six years ago, he went back and rewrote the whole thing in a different language whose syntax is much, much more than six years old.
I guess he must have found these must-have Python 2.5 language features that are so important and desirable in C.
And by the way, don't use new GCC features, because some ancient GCC versions will be supported by Red Hat until 2020.
It was users complaining, not me. I had no problem with Python, but all of the non-python users (the users of one of the 9 other languages Mongrel2 supports) hated Python for these reasons.
The config file written in the subset of Python, the only thing users would have to touch, hasn't changed! (edit: see http://news.ycombinator.com/item?id=1712269) He just changed how the file is processed, by implementing in C what would certainly be easier to implement in Python 2.4 (edit: 2.2), the lowest version he mentions.
If he insisted that users install Python 3.1 only to run his config script and they complained, I don't blame them. If he had users that haven't had Python at all, then I fully understand what he did. But then he can't blame distros and users for not having the latest and greatest Python version, it's just about the existence of Python on the target platform.
Maybe he's got a whole shit load of code for this thing? I was able to do a 1,000 line project that was 2.4 compatible just by looking at the 2.7 docs and avoiding anything that said it was introduced in 2.5 or later.
Not fun, mind you, because I really would have rather been doing it in 2.7 or 3.1, but not impossible or even very hard.
Speaking as someone who knows practically nothing here, I think the issue is existing systems, not brand new systems. The barrier to updating an existing system becomes greater the more and the longer things depend on it.
I definitely would have used Lua in this situation. The Lua runtime is tiny and lightweight. You can write pretty nice config files in Lua which look pretty similar to Zed's too.
This is a major problem. PETSc uses Python for configuration and we have to be careful not to use any features requiring >2.3. The last release (this spring) required 2.2 because RHEL3 is still supported (in production phase 3, see http://www.redhat.com/security/updates/errata/) until Oct 31, 2010. Python is easy to install, but it's a lot to ask of users when they just want to get your software up and running. Note that RHEL4 sees end of life Feb 29, 2012 and RHEL5 not until Mar 31, 2014. So Python-2.4 support will have to live for a very long time.
I hate this, but don't see a decent alternative. Many of our users are building in batch environments of varying ages and just want to get their science done. Shell/bash is not very portable, and it would be painful to write all the configuration in C (and confusing, because people would need both native compilers and cross-compilers).
This is very interesting and is what RVM in the Ruby world was designed to solve in some way. So maybe we need a widespread and useful PVM. This feeds into an observation I made over five years ago but that seems ever more relevant as time goes by. Linux and BSD seem to have "solved" the packaging problem through mega-repositories with dependencies and access to source code. However, your favorite languages, especially the scripting and virtual-machine ones, will have their own repositories that are _not_ linked into the system's way of doing things.
So at the system level we have RPM and Yum and Ports and Portage and Macports and Fink and Homebrew and AptGet and Conary and yada yada yada but at the language level we have (deep breath) Perl and CPAN, Ruby and GEMS, Python and EGGS, and Emacs and ? and Java and Eclipse plugins and on and on.
I would switch IN AN INSTANT to the distro that integrated all this so that I would only need to go to ONE place to upgrade system packages and language packages. Honestly if nobody does it one of these years I might even be forced to do it myself and make mega $ € ¥.
> have their own repositories that are _not_ linked into the system's way of doing things
One approach is to have a system that packages everything in the language-specific repository for the distro. Haskell on Archlinux is an example of this.
I'm not a fan of actually using the language-specific packages because I like the rolling release model and end up having to do some manual work to reinstall all my packages when the distro upgrades its copy (or switch to the next 6-month release with Ubuntu and similar).
I've no direct experience of Arch, but the traditional problem with integrating rubygems is that it expects to be able to install more than one version side-by-side. That's alien enough from the package management point of view, but when it's de facto idiomatic to specify required versions and manipulate the load path at runtime, it makes integrating with a distro downright fiddly.
Indeed. This is why I would never install Eclipse from synaptic on Ubuntu, because the Ubuntu schedules and Eclipse schedules differed. And perhaps they wouldn't coincide anyway even if they synchronised temporally. Which means that someone somewhere is wasting their time packaging Eclipse for Debian/Ubuntu.
He's absolutely right. I have come to the conclusion that we must do one thing: Stop sharing. Shared libraries and language runtimes are a relic of the past when memory and storage were orders of magnitude smaller. Bundle all dependencies and throw out the package managers!
Why the entire OS? All applications using a particular library would have to be updated individually, which is a drawback. So be it. Every solution has its drawbacks. I suspect that a system with a labyrinthine dependency graph is less robust and less secure. Don't forget that sharing a library doesn't just mean sharing bug fixes, it also means sharing bugs.
You mean upgrade applications that depend on that core library. Which you have to do anyway since a patched library will not be bug-compatible with the old one.
That's the biggest issue, if you test with library version X and run with library version Y, then any sufficiently complicated program will have bugs that would have been found by testing with the same version of the library you run with.
Responsible library maintainers ensure that security fixes and subminor releases are drop-in replacements. No distro rebuilds the world under these circumstances.
Completely agree. The Linux distros (CentOS is the one I work with most, not out of choice) have antiquated Python versions.
You can get round it with virtualenv etc., and I do this as a matter of course when installing Python web applications. For a generic "plug and play" package like Mongrel2 however, where sysadmins just want something that runs straight out of apt-get or whatever, it's a pain and is holding things back.
I mean WTF does yum still require ye olde Python 2.4 ?
CentOS is about as conservative an option as you can get, though. Unusably so, in my opinion. None of the other popular distros have packages that old.
I am a complete newbie when it comes to running a server but have a bit of experience using Ubuntu on my local box. Is Ubuntu a good choice? Since most use a flavor of Red Hat like CentOS I am a bit hesitant to use Ubuntu in production.
It depends on what you are using the server for and who is using the server.
Ubuntu and debian are used by plenty of people in production and are probably the better choice for learning how to set up a server if you already use ubuntu.
I would tend to say go with Debian, if that is your aim. While I understand that some of the Ubuntu long-term support releases have stable package bases, I have had nowhere near the same level of comfort with an Ubuntu LTS as I have with Debian stable, especially when boxes are to be migrated to newer releases. With Ubuntu the LTS-to-LTS hop is often not successful, usually for lack of testing, whereas I've rolled from one stable release to another many times with no issue. The Debian project works very hard at regression testing, issuing a new release only when the release is finished. This has proved to be a more prudent approach than Ubuntu's, which opts for a hard(ish) 6-month release cycle; a new LTS is released every two years but is a 6-month iteration over the last Ubuntu release plus some influx from Debian Testing.
It could well be that I'm biased--and you should certainly go with what you are comfortable with if this is to be a one-man show--but in the pony show that is choosing a server OS, I'd go with Debian every time.
Sure. If you have any problems the Debian mailing lists are excellent, the Ubuntu forums are good and I'm very often polite via email if you have a question that isn't very tightly focused.
IIRC RHEL still depends on Python 2.4 for a ton of internal administrative scripts.
Nobody at Redhat wants to mess with the wiring in a working system.
I'm not sure why he concerns himself with old distros. Are they the customers for Mongrel2? The description on the website emphasizes modern web technologies.
Using Rails 3 is best with Ruby 1.9.2, and yet the only popular distros that install that via their package management system are the rolling release ones, Arch Linux and Gentoo (where it is masked). Most people willing to use this brand new software are also willing to compile their own necessary bleeding edge dependencies, and probably have a generally up-to-date distro preference (like Ubuntu).
Even the latest Debian stable, a notoriously conservative distribution, which is a year and a half old and due to be upgraded soon, uses Python 2.6. I'm not sure it's reasonable to support much past that.
Upgrading Ruby to 1.9.2 won't break your OS. Upgrading Python on some distros can even break package management. That's the difference.
And if you have a well tested, rock stable environment you generally don't want to mess with it if you don't have to, or may not easily be able to.
You want to not support older than Lenny. Well, we have machines that are still on Sarge, though slowly being upgraded. Some of the machines haven't been rebooted since around the time Etch came out and that's the main reason they haven't been upgraded yet.
That's only because package management "on some distros" (I won't name them, we all know which they are and I'm sure we're talking about the same one) is fubar. I'm not talking about the package manager alone, but also the over-the-top package dependencies, the incoherent dependency resolver, let alone the sheer stubbornness of some developers working on package management.
FYI - Fedora 13 shipped with Python 2.6, Fedora 14 is going to ship with Python 2.7 and 3.1 in parallel. Not every distribution ships ancient versions.
I also recommend using virtualenv and pip; these tools make it much easier to create an easily reproducible Python environment.
RHEL, meanwhile, does not ship 3.1, 2.7 or 2.6. And to be perfectly honest, if you're seriously distributing Python software you have to take RHEL into account -- "Fedora ships something newer" doesn't help.
I don't use Python at all. Despite that, I still somehow have Python 2.6.4 installed on my RHEL4 box at work. (Incidentally, the Linux version number is only slightly higher -- 2.6.9!!!)
So it seems like it's possible to get 2.6 on RHEL. I did it, and I hate Python.
+1. If you use RHEL, a 'python26' package from a known good source like EPEL is the best place to get a new python that doesn't mess with your existing OS python.
The bug reports exist for the two main bugs I come across.
The main problem is that I could upgrade a couple of systems to fix the bug but doing that tends to break python (with the infamous _md5 error) spectacularly :)
I have found that the one system that doesn't give me any trouble running Python is Ubuntu. I'm using Ubuntu Server 10.04, specifically, which is the latest LTS (Long Term Support) version.
Ubuntu 10.04 has Python 2.6, but I wanted Python 2.7 (which is the LAST 2.x version). So here's what I did: I installed all common dependencies (such as OpenSSL) with "apt-get build-dep python2.6", since they're all the same for Python 2.7. Then I installed Python 2.7 with "make altinstall" without disrupting the system's base 2.6 install. Then I got pip and virtualenv installed.
I've never been happier doing Python development. pip never failed to install a single package I needed, and Python 2.7 is fast and gives me a safe path of migration towards Python 3, whenever the whole byte/str issue in the network libraries (or at least WSGI) is 'resolved'.
Oh well, you know, it's definitely not trouble enough to drive me away from Python.
Despite the whole version madness across Linux distributions, Python is still my preferred language for web development, and even if that means going through a bit of trouble getting it properly installed on a particular system, hopefully that is something I'll only have to do and document once every few years.
I surely hope all distros eventually catch up with Python 2.7 so we can all stop worrying about this. Thanks for taking the time to make some noise about it ;)
It's trouble enough to drive people away from Mongrel2. Also, I find it weird that you take me saying "we ditched python in mongrel2 because of Linux" to mean "Python SUCKS! Stop using it." Never said the latter, only the former.
Zed should have used Lua (the entire Lua source distro weighs a couple of hundred kilobytes). Lua is a superb embedded language and an even better configuration language.
No one needs to know it is Lua, (which also happens to be a marketing problem for Lua, heh).
His decision not to use Lua the second time around is perplexing because he is not averse to reusing third-party libraries and he is already using the code from Lua for URL pattern matching (IIRC).
Josh Simmons, one of the main Mongrel2 hackers other than Zed, is already working on that.
The idea behind Mongrel2 is that you can write your own configuration system in any language you want, as long as the final result is an SQLite3 database. The one in C that ships with Mongrel2 is "just" a default - if you don't like it, it's not hard to roll your own.
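A minimal sketch of what such a frontend could look like (the table and column names here are made up for illustration, not Mongrel2's actual schema):

    # Anything that can write an SQLite3 database can act as a config frontend.
    import sqlite3

    conn = sqlite3.connect("config.sqlite")
    conn.execute("CREATE TABLE IF NOT EXISTS server (name TEXT, port INTEGER)")
    conn.execute("INSERT INTO server (name, port) VALUES (?, ?)", ("main", 6767))
    conn.commit()
    conn.close()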
So what can we learn from this for designing future languages?
Make it standard idiom to have every file of code declare its version-dialect at the top?
Default to throwing warning if interpreter and code version-dialect mismatch?
Have the main code interpreter called by all code files simply be a host for routing code to the correct dialect-version interpreter? Perhaps include an optional "slim" download where the host only includes the interpreter for one dialect-version, but make this non-default/opt-in.
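For what it's worth, Python's __future__ imports are already a limited, per-file version of that idea:

    # An existing per-file opt-in to a newer dialect.
    from __future__ import division

    print(1 / 2)   # 0.5 even on Python 2.x, because this file declared the new behaviour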
1. Less syntax, so that later language features don't cause upgrade problems and can more easily be worked around rather than causing syntax errors.
2. Awesome package management with everything not core in packages that work with the distros so they don't mind using them.
Both of those are damn hard to do though. Less syntax makes the language fairly unusable (Forth), and package management is just a nightmare in general.
I think dialects are an emergent behavior that is tough to predict but will inevitably happen if a language becomes old enough. They just want to happen.
If your language assumes this and supports version-dialect from the start it will be more robust and age gracefully in the long run.
edit: basically it would require too much work or a stroke of pure genius to ensure perfect syntax for a new language in public release 1. Language designers should assume as project age approaches infinity the probability of a syntax revision approaches 1.
edit2: also if the interpreter host could interpret all the older versions up to the latest by default, you could write your code in both 2.7 and 3.1 and use 2.5 libraries with it etc. No need to rewrite every single older library for trivial changes.
> "So what can we learn from this for designing future languages?"
We can learn that it's inevitable that your language will come into fashion, be hyped up to be "the language", then will fall out of fashion and be declared 'dead'.
The idea of a '100 year language' is solid as long as developers don't choose languages like they choose what to wear. Unfortunately though, most developers seem to choose based on current fashion rather than anything else.
Programming language popularity works like fashion, but not because developers are shallow and finicky. It's because it's actually important what other developers are using. You could have the best programming language ever, but libraries, resources, help, and advice won't exist until you have a lot of people using it. There's enough adventurous programmers to bootstrap this cycle with new languages, but the general principle holds.
I agree to an extent. That stuff matters when you're starting out and just learning. But after you get to a certain level the 'language ecosystem' isn't important at all IMHO.
Massively disagree: once I get to a certain level, there's no way in hell I want to waste my time implementing all of (java) commons-lang, or my own sqlite bindings, etc.
When the languages ecosystem is flourishing, I am productive. When it's not, I'm shaving yaks that prefer not to be shaved.
No one wants to implement all of java commons-lang in their language of choice because they'd be lynched by the language's users. I know of no other language with a standard library as annoyingly verbose and complicated as Java's.
Come up with solutions and ways to avoid the problems mentioned in the future. There's always fashions and trends sure, but I think most people chose to program their last web app in Python or Ruby over assembly for more practical reasons...
Including Gentoo is wrong. Gentoo uses a slotted ebuild for python. You can have many concurrent versions running. Not only that, but the latest portage works fine on the latest 2.6 release of python. If you don't want to learn how to use your OS, pick a different one. I hear that Macs "Just Work".
I'd go one further: distros in general haven't killed Python, Red Hat has. By tying their flagship OS to a six-year-old Python release, they've made it impossible for a lot of popular software to move on and take advantage of more recent advances in the language (since "screw everybody who uses Red Hat" simply isn't a viable option).
It's not reasonable to expect OS update schedules to coincide with language update schedules. Red Hat's priorities are extremely unlikely to be the same as yours. The apps I build that depend on ruby or python get deployed with a custom build of same into a separate prefix. So either dropping the python dependency or bundling python make sense for mongrel2. Maybe dropping it was easier in this case.
Does RedHat still have that much of a market share in linux to demand such? I guess I am out of the loop, but I hear way more about ubuntu these days than RedHat...
Python, per Guido van Rossum's wishes, is pushing for a suspension of all language syntax and built-in changes, working towards preserving semantics, precisely to give alternative implementations time to catch up with CPython, and the various distributions time to catch up with Python's recent progress.
In short: "...have killed Python" is hyperbole for saying "I was tired of hearing people nag about the dependency on Python". Still, it's a very good thing what Shaw did.
The solution to distros which are bundling three year old software is not to halt your progress for three years, that should be obvious.
Speaking as Django's release manager: Red Hat's continued use and support of ancient 2.x Python versions has been a factor in the schedule we're developing for migration to Python 3.x. We can't simply pull the rug out from under anyone who's using Django on RHEL.
Seeing RHEL/CentOS keep coming up led me to some googling and the first result for "RHEL python" is http://www.python.org/download/linux/ (about as official as it gets) which says
The IUS Community Project maintains recent versions of Python for Red Hat Enterprise Linux (RHEL) and CentOS. Installations are parallel installed next to stock versions of python therefore don't disrupt the functionality of critical python utilities on the system.
Doesn't that solve the issue and keep you from being hamstrung by those distros?
Not quite, because not everyone will install that stuff, or will be able to get approval from their higher-ups to install it (folks who run RHEL tend to be pretty conservative...).
Shouldn't python just be a library dependency like any other? I was hoping the problem of libraries with different versions was solved somehow. Apparently not :-(
Doesn't every non-stale language suffer from the same thing? So either your choices are a) using language that hasn't evolved or b) using an old version of language which has evolved. I'm not sure that alternative b is that much worse than a.
It's funny how, for Zed, anything that doesn't work the way he thinks it's supposed to work is "broken" and the result of some boneheaded decision and that it "sucks".
I too am frustrated by ancient versions of server OSs that use equally ancient versions of Python, but that's why virtualenv was invented. And source distributions. I am happy using it when required.
Being "stable" in OS terms is not changing APIs very often. One of the good things is that I know that code I wrote for RHEL 4 the day it was launched will run flawlessly on RHEL 4 today. The primary goal of such a Linux distro is that "it has to work". I won't touch the system parts.
He doesn't blame it on 'some boneheaded decision'. As far as I can see, he doesn't blame anyone, he just notes the current state of affairs, in which the situation is indeed pretty much broken. Not just for him, because he is far from the only one that thinks it should work a certain way.
> that's why virtualenv was invented
Yes and shouldn't it also be the OSes that use it to isolate their default Python install?
It's a design decision. They have a default Python install they support, validate and keep working that you shouldn't mess with. I am pretty sure you can't ruin a RHEL just by installing RPMs from RH's repos. If the Python the OS vendor provides is somewhat inadequate to you, it's your job to provide another one or to use another distro. However, if you choose to write your code against the distro's Python (something that's not that hard, really - 2.4 is a modern language) you can rest assured your code will work for as long as the distro is supported. It's not "broken", not a boneheaded decision. It's just the way it is.
If you require 2.7 or 3.x goodness, you can always set-up different environments separate from the OS's Python and just be happy with it.
It is completely possible, easy and reasonable to have multiple versions of python installed. Many OSes do this, and the convention is to have them named like python2.4, python2.5 and so on.
Any system scripts that require 2.4 can then change their shebang to say /usr/bin/python2.4. At the very same time, packages for newer pythons can be made available for ease-of-use for the rest of the world. This breaks nothing.
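For example, a distro-owned script could pin itself like this:

    #!/usr/bin/python2.4
    # A system script pinned to the interpreter it was tested against,
    # regardless of what /usr/bin/python points to today.
    import sys
    print(sys.version)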
Actually, it's not that Python doesn't work the way I expect, it's that distros have killed Python's portability promise and made C more attractive. Granted I'm able to code C well enough to pull this off, but having to do this is annoying.
Also, this is how things improve. Someone doesn't like the current state of affairs and changes it.
Are you implying that Python is not portable between distros? I say that because I am much more familiar with Red Hat and Debian-based flavors and, on both environments, Python seems rather sane, if somehow crufty sometimes.
What specific Python dependency did the move to C cure? What are the most common complaints regarding libraries and versions? Perhaps listing them can help change whatever is wrong.
It isn't Zed complaining, the 1.0 instructions used pip and suggested using virtualenv etc. It would seem he is just reacting to what his customers are telling him - namely that it is difficult getting python to work on their systems and they don't want to play around with pip and virtualenv.
An end-user of an application doesn't want to fiddle around with library management tools.
It's a shame he felt he had to tear out Python. I understand the pain of relying on distro-supported Python. You basically have to target 2.3 at minimum if you want to distribute an application. If you have windows users, it can be even more painful.
Theoretically, couldn't you have configure check your setup for Python 2.x? Set up your make process to download and install the proper python in a local build environment? I'm just running off random ideas, not sure if it would be feasible.
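The check half at least is easy enough; a minimal sketch (the download-and-build part is the real work):

    # Sketch of the "check" half only.
    import sys

    if sys.version_info < (2, 5):
        sys.stderr.write("Need Python >= 2.5, found %s\n" % sys.version.split()[0])
        sys.exit(1)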
Too late for Mongrel2 of course, but C isn't so bad either.
I think he is talking from his users' point of view. Not all of them can be proficient in various distros of Linux and their Python versions. So if something doesn't work for them, it means it's broken.
I certainly think that anything that doesn't work the way I expect is broken and things that are broken in this way frequently suck... and that seems to be the attitude of most of the people I know.
Hilariously enough, I took the time to make sure the new config file format for m2sh was nearly perfectly backwards compatible with the old file. This is pretty funny because I predict nobody will blink an eye at the config file, even though it's basically Python.
This is an interesting point, especially given the amount of code (relatively small) actually needed to parse the subset of Python the config format uses. That said, the subset is fairly "generic scripting language" since there aren't any declarations, only assignments, lists, etc (at least at a cursory overview).
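Purely as an illustration of that kind of subset (the names and values below are invented, not the actual m2sh schema):

    # Only assignments and plain literals: no imports, no function defs,
    # no control flow, which is what keeps the parser small.
    listen_port = 6767
    hosts = ["localhost", "example.com"]
    options = {"max_body_size": 250000, "chroot": "./"}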
What version should be promoted? As much as I like, use, and work on Python 3, a campaign to get distros to use 3.x would leave a lot of projects in the dust, broken and unusable, or resulting in a rushed conversion. On the other side of the coin, a campaign to promote a newer 2.x version is counterproductive to eventually getting everyone on 3.x.
All in all, the C code looks pretty good. I browsed around in a few files to see whether he used strn* or strl*, but it turns out he uses a proper string library instead.
After several issues with Python, Lisp, Haskell, etc., I'm coming around to the philosophy of just generating C or C++ code from Python or Lisp.
Productivity is a function of (familiarity, language, libraries, configuration, runtime). Other languages may be great due to (language, library), but configuration can be a show-stopper. Especially since most devs seem to see library/wrapper writing as a chance to pull in all their favourite cruft dependencies.
If "python for linux sucks" I don't know what to say about python for mac or windows.
I have a lot of Python programs that use OpenGL, matplotlib, and a lot of other libraries. No problems, or very few, on Linux.
On Mac 10.6 you have a limited Python version and it is very difficult to add matplotlib and other libs; Windows is a similar pain in the ass, unless you buy a commercial scientific distribution.
It sounds like the correct solution to this problem is a package maintainer for the Mongrel2 package on different OSes. The package maintainer's job is to make sure a package builds and runs on a given OS. The reason we have them at all is that it is often a difficult job. Mongrel2 seems like a system application, no?
I guess it was just about time for me to try Mongrel2 on a fresh new project, and I feel blessed for that alone. Seeing that 0MQ batteries are included is a great bonus.
If all goes well, perhaps my network appliance will ship with Mongrel2, which would be fantastic.
Cool, give it a shot. We've got a bunch of stuff we're working on for the 2.0 push, but what you have in 1.1 should be stable and usable. If not, let me know by dropping a ticket.
I would be very curious to learn why you had not packaged Mongrel2 with cx_Freeze, which allows you to provide your own copy of Python with any modules you may want to add to it.
Doing so would have made your software immune to broken Python installations.
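For reference, a minimal cx_Freeze setup script is roughly this (the file names here are placeholders, not Mongrel2's actual layout):

    # setup.py: freeze a script together with its own Python interpreter
    from cx_Freeze import setup, Executable

    setup(
        name="m2sh",
        version="1.0",
        description="hypothetical frozen build of the config tool",
        executables=[Executable("m2sh.py")],
    )

Running python setup.py build then produces a directory that carries its own interpreter and standard library, so whatever Python the target machine has (or doesn't have) stops mattering.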
Every Unix-based OS deals with package resolution in a stupid way. Didn't the old Lisp machines, before Linux, FreeBSD, etc., solve this with self-contained state in a single object? Everything old and solved is the new hard problem.
> I hate that anything but the most trivial code won't run across all versions.
Please, define "all versions" and "most trivial code".
I have very few problems with Pythons ranging from the ancient 2.4 to the modern 2.7. Of course, I am careful with what I do. I know that if I use a dictionary comprehension I will be limited to 2.7+, so I try not to. I am quite sure a lot of the code I write could run under any 2.x Python with little modification. As for 1.x, I will agree that things get more complicated.
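To make "careful" concrete, a quick sketch using nothing beyond the stdlib:

    # 2.7+ only: dict comprehension syntax
    squares = {n: n * n for n in range(10)}

    # Same result, but also parses on 2.4-2.6: dict() over a generator expression
    squares = dict((n, n * n) for n in range(10))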
Well, it's a matter of opinion, but I would define "all versions" as versions sharing a major number, i.e. code targeting 2.2 should run on anything starting with 2. As for trivial code, just look at the string operations part of the documentation and search for "new in" and "changed in". My connection is rather slow right now, to the point that the page I mentioned is still loading, so I won't look for more examples, but string processing should IMO be a trivial part of any language. Don't get me wrong, the changes have mostly been useful ones for the 2.x versions, but I have been caught testing scripts on my own box that end up not working after deploying. Since then I have been, as you say, more careful, but I feel I have a valid gripe when I expect string and IO operations to be consistent across major versions without having to read the fine print and do a version check. :)
I know that if I really need generators, then I really need to install something newer than the 2.4 that comes with RHEL/CentOS 4.
Generators are 2.6+, right?
BTW, an ugly approach I used (more or less as a joke) was to check for generator availability and, in case I can't use them, use a list comprehension instead. There is a performance/memory hit but, depending on what you are doing, it's a usable alternative. And a nice place to hang humorous comments.
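Roughly along these lines; just a sketch, and the truly ugly part is that the new syntax will not even compile on an old interpreter, hence the eval:

    import sys

    data = range(100000)

    # Generator expressions arrived in 2.4. On anything older their syntax is
    # a SyntaxError at compile time, so the "new" spelling hides inside a
    # string and is only evaluated when the interpreter is new enough.
    if sys.version_info >= (2, 4):
        total = eval("sum(x * x for x in data)")
    else:
        # Fallback: the list comprehension builds the whole list in memory
        # first. Slower and hungrier, but it gets the job done.
        total = sum([x * x for x in data])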
But I agree that, if your users don't want to install an alien (from the package manager POV) Python in order to run Mongrel, getting rid of python code is the right thing to do.
It's like having a weird dependency, kind of like requiring a Fortran 77 compiler in order to run Perl...
That said, I don't disagree with removing the Python dependency from the "core" of Mongrel2. As was noted in an earlier comment, people can always write a config file management tool in any language they want (or Mongrel2 bindings in general, for that matter).
This is an insane idea. I should not have to have a whole other version of Python just to run a language-independent web server. The choice Zed made is a great one: all the functionality of the original Python m2sh is present in the new one, and there's nothing external to support, or dump somewhere else.
People who have any trouble with a system Perl or Python or Java ship their own versions within the package.
SAP Business Objects, for example, comes with its own Perl 5.004 and Sun JDK 1.4.
It is also possible to provide an up-to-date package for the most-used OSes. You cannot replace the system's default Python interpreter, because too many things in distros depend on it, but it could go in /opt/python.
BTW, it is only Debian-based systems and RHEL that are lagging behind. Fedora is always up to date.
I consider this post to be just a flame-generator. There is no problem at all.
Python brought this on itself by having no respect for compatibility between versions.
There's really no such thing as the Python language, only "Python 2.2", "Python 2.4", "Python 3". If you want to run Python scripts, you need to have three or four different Python runtimes installed, and that's asking a lot of both the distros and of users who need to keep everything straight, and probably even hack the #! at the beginning of scripts so that they work correctly.
Why didn't the official bittorrent client run on Red Hat Linux (w/o downgrading your Python) for at least three years?
Why was NLTK stuck on Python 2.4 for years? I might use Python to use NLTK, but why do I have to choose an old Python to get it to work... And what if I want to use it w/ software that needs Python 2.6?
When you add features as useful as set() datatypes, if-else expressions, generators, and comprehensions, people are going to use them. Python 2.2 code nearly always runs without modification on Python 2.3+ (the only exception would be collisions between variable names and new keywords). But someone targeting Python 2.6 has a huge hassle when deploying to older versions. This means that Python is actually adding useful new functionality; that is, the incompatibility is a symptom of a good thing.
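One small illustration of that hassle: the conditional expression added in 2.5 reads nicely, but it is a plain SyntaxError on the 2.4 still shipped by enterprise distros, so code that has to run everywhere keeps the long spelling.

    n = 250

    # 2.5+ only; a SyntaxError on 2.4 and earlier, so it stays commented out:
    #   label = "big" if n > 100 else "small"

    # Portable spelling that any 2.x interpreter will accept:
    if n > 100:
        label = "big"
    else:
        label = "small"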
It depends on what you mean by running unchanged. NumPy and SciPy also support 2.4 to 2.7 (and soon 3.1) from the same codebase, but every time a new version of Python comes out, we need to make some changes, albeit relatively minor ones. So it is not perfect either.
Just like Perl before it, Python (and to some extent Ruby, if you use Puppet) is part of the OS (Linux or OS X, that is).
Shit can and will break if you mess with your OS.
You never got this with PHP because people didn't use it in their OSs.
But there's a simple solution: not only does the Ministry of Packaging want you to use virtualenv, your OS maker does too.
The faster pip and virtualenv become standard the better.