Hacker News
Creating your own IPython server from scratch (claymcleod.io)
70 points by clmcleod on Feb 19, 2016 | 15 comments



I hate to be that guy. Criticizing is so easy to fall back on, and I try really hard not to do it. But this is... kinda bad?

It isn't anything to do with IPython (except that it runs code and IPython runs code).

It handles the easy parts (yes, you can get Python to exec() code, and you can wrap that in a Flask webapp), but then punts on the hard parts ("a simple frontend... isn't hard", "concurrency", "security").
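For concreteness, the "easy part" described here (a per-session namespace that exec()s submitted code) can be sketched in a few lines of stdlib Python. The names below are mine, and this is exactly the unsandboxed core that gets wrapped in Flask, not anything resembling a real IPython kernel:

```python
import contextlib
import io

# Hypothetical sketch of a session-based exec service core.
# Each session gets its own namespace dict that persists across calls.
sessions = {}

def run_cell(session_id, source):
    """Execute source in the session's namespace, return captured stdout."""
    ns = sessions.setdefault(session_id, {"__name__": "__main__"})
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(source, ns)  # the whole "kernel": no sandboxing, no concurrency
    return buf.getvalue()
```

Wrapping run_cell in a Flask route gives you roughly the article's service; the hard parts (sandboxing, concurrency, interrupting runaway cells) all live outside this sketch.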

52 lines of code for a fully functioning, elegant, session-based Python kernel is not too shabby if I do say so myself.

Except it isn't IPython. It's true that it is short.

I guess if I had any suggestions they would be to remove any reference to IPython, and call it something like "a remote python code execution service". That's probably a fair name for it, and as such I think it is fine.


Indeed, it does seem like the author desires features in the frontend that are not available in the Jupyter Notebook. The solution then is to keep the IPython kernel and create a custom frontend, rather than throwing both (server and client) out.


Except IPython is bad garbage, one of the worst projects I've ever seen in my life.

Monstrous complexity, abysmal code quality, and an architecture that is idiotic to say the least. It's really a sorry state of affairs, and a testament to the quality of the language community, that crap like it is so widely used in Python land.

Some of the really stupid shit IPython does:

+ Can't easily use it for remote debugging (which is really the most WANTED use case for something like this), since the concept of remote kernels is alien to it. One has to hack around the internals to expose a plain ZMQ kernel, and it still doesn't work without issues.

+ Jupyter can't connect to remote kernels it hasn't spawned itself; gotta use stupid tricks like tunnels to fake them as local.

+ IPython forces integration with single-threaded event loops and takes the "we know better, threads are complicated and bad" approach to exposing something simple that can be used by others to integrate into any sort of app, multi-threaded or not.
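For what it's worth, the tunnel trick mentioned above works because Jupyter only reads a local connection file, which is just JSON naming the kernel's five ZMQ ports. A stdlib sketch of what that file resolves to (the helper function is hypothetical; the field names match real Jupyter connection files):

```python
import json

# A Jupyter kernel "connection file" is plain JSON listing the five
# ZMQ channel ports. This hypothetical helper turns one into endpoint
# URLs; tunneling tricks work by rewriting ip/ports in exactly this data.
def zmq_endpoints(connection_json):
    info = json.loads(connection_json)
    base = f"{info['transport']}://{info['ip']}"
    return {
        channel: f"{base}:{info[channel + '_port']}"
        for channel in ("shell", "iopub", "stdin", "control", "hb")
    }

# Example with made-up port numbers, shaped like a real connection file.
example = json.dumps({
    "transport": "tcp", "ip": "127.0.0.1",
    "shell_port": 57503, "iopub_port": 57504, "stdin_port": 57505,
    "control_port": 57506, "hb_port": 57507,
    "key": "…", "signature_scheme": "hmac-sha256",
})
```

Rewriting ip and the ports in that JSON so they point at an SSH tunnel is all the "fake it as local" trick amounts to.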

For something that does things right, look at: http://pryrepl.org/

This is what IPython should have been.


> Except IPython is bad garbage

How is Jupyter/IPython even close to garbage? It works, and works well, for millions of people. You might not prefer it, but it is a brilliant project; calling it garbage is extreme. I am guessing there are many garbage programming languages out there too?

The landmark gravitational-wave breakthrough was produced with a Jupyter notebook. Major companies use them. Please don't call people's hard work "garbage."

P.S. I don't use Jupyter daily and it isn't my first choice, but it is extremely useful.


Millions of people? I doubt it. A few thousand at best, those with the patience to put up with it (better men than me).

But even assuming what you say is true, how many people use IPython is irrelevant to how qualitatively good it is. Hundreds of millions used Windows 3.1/95/98; does that make those products good?

The argument could be made that we're worse off for these even existing; they actually set us back. I'm not making a similar claim for IPython, because there are too many alternatives for it to have any serious impact these days, and Python is pretty much dead in everything but scripting and braindead web development. Those who think it'll amount to anything in statistics/data science are deluding themselves.


Your tone is needlessly aggressive and your post has little to do with either the post or the comment you're replying to. We got it, you don't like the way IPython is designed - is that it? If so, why write so many words about it? Or do you propose that the implementation in the article is superior to IPython?


See above.


IPython is the kernel.

Jupyter is the notebook.

> A few thousand at best

It is a very big project that has many more uses than just data science. It also has a lot of corporate users such as NASA, Microsoft, Google, and many more.

Jupyter does not equal Python. I mainly use other languages with Jupyter.

> Python is pretty much dead in everything but scripting and braindead web development. Those who think it'll amount to anything in statistics/data science are deluding themselves.

You are coming off as a big talker making extreme statements. Are you trying to instigate people, or is this just your normal tone when speaking with people? I personally find Python the 2nd-best tool for everything. I usually don't use it, but many people make a living from Python, and many people use IPython as their tool at work. You're coming across as a big troll, and I doubt that is how you're trying to portray yourself.


I made an argument and backed it up with a series of technical points. Feel free to disregard my opinion piece in the end there.

How "I'm coming across" is at best irrelevant, at worst a silly attempt to avoid the issue. Either stay on topic and address the points I made, or don't post at all.

TY


> How "I'm coming across" is at best irrelevant

All of life is 100% about that. Wish you all the best! Your points were paper cuts.

None of these things are broken, and I use Jupyter on my server. This is a non-issue for me and for most people's use case. It is a document for sharing code and results, not for debugging. This is why Nature (the #1 science journal) is using it: http://www.nature.com/news/interactive-notebooks-sharing-the...

Please look at this. This is the main use case. https://losc.ligo.org/s/events/GW150914/GW150914_tutorial.ht...

> + Can't easily use it for remote debugging (which is really the most WANTED use case for something like this), since the concept of remote kernels is alien to it. One has to hack around the internals to expose a plain ZMQ kernel, and it still doesn't work without issues.

> + Jupyter can't connect to remote kernels it hasn't spawned itself; gotta use stupid tricks like tunnels to fake them as local.

> + IPython forces integration with single-threaded event loops and takes the "we know better, threads are complicated and bad" approach to exposing something simple that can be used by others to integrate into any sort of app, multi-threaded or not.


> (better men than me)

What the fuck.


I made no comments about the quality of the IPython kernel, just that what the author created was nothing like it.

Apart from that, I'm not sure why you see the need to be so aggressive about your points.


What most people don't realize is that IPython (sans this notebook stuff) scales horizontally (across boxes) too. Get a bunch of surplus cheapo UNIX pizza boxes with decent CPUs and start crunching more cheaply than buying new SuperMicro boxes; let some die and it's still cheaper than buying new gear.

If you want a Linux cluster and don't want to set that shit up manually, Rocks Clusters FTW.

I haven't had good luck with this new notebook system: it requires installing too much, doesn't just work like Python packages do, and is too much of a pain, though maybe it will get less painful. Maybe they should just provide desktop apps that pull down, update, and vendor their own per-language runtimes, and give a way to interact with those isolated (vendored) runtimes, rather than a bunch of clunky instructions that go stale quickly. For server-side use, something similar that doesn't interfere with native packages.

For somewhat older Python notebook apps, Reinteract is nice.

Source: a former Stanford HPC sysadmin


Only drawback is power consumption. Running a basement/dungeon cluster on surplus (or even brand new) hardware, you're going to end up paying through the nose on power bills.

With regard to a smoother distribution workflow, Enthought and Anaconda (the latter being my preference) go a long way toward easing this. They keep curated package sets of scientific software that are known to work together smoothly, and include their own package management systems that work well when installed standalone, e.g. in a user's home directory. You can do this with pip and/or virtualenvs, but it's not as smooth, and those are really optimized for a different use case, I think.
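As a point of comparison for the pip/virtualenv route, the stdlib venv module can build an isolated environment programmatically. A minimal sketch (the path is illustrative); curated distributions like Anaconda layer prebuilt scientific packages on top of this same idea:

```python
import tempfile
import venv
from pathlib import Path

# Build a throwaway isolated environment with the stdlib venv module.
# with_pip=False skips bootstrapping pip, which keeps creation fast;
# a real workflow would use with_pip=True and then pip-install packages.
target = Path(tempfile.mkdtemp()) / "sci-env"  # illustrative location
venv.EnvBuilder(with_pip=False).create(target)
print((target / "pyvenv.cfg").exists())  # marker file every venv gets
```

The curated distributions mainly save you from compiling the heavy scientific stack (NumPy, SciPy, etc.) yourself on top of an environment like this.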


> this new notebook system

Jupyter? Or are you thinking of something else?



