
This is a fairly weak article, full of deflections around the real weaknesses of Python. I'm surprised it is on the Paypal Engineering blog.

Like all languages, Python has strengths and weaknesses, and there is no shame in that. An honest article would address the negatives head-on, acknowledge them as potential negatives (not skip around them), and provide alternatives.

The strawman "Python has a weak type system" is a good example of such deflection.

No one (who understands type systems) would complain that "Python has a weak type system".

A more common "criticism" would be "Python does not have a static type system, which is handy for large codebases worked on by large teams".

Address that upfront, and you have a decent article. This is just a fanboy (who just happens to be at Paypal) praising his favorite language and ignoring its weaknesses.




The keyword here is "Enterprise" Python. The Enterprise is a place where you have "decision makers", who are often people who don't understand type systems and have absolutely no incentive to do so.

Paypal is a large organization which needs to hire Python programmers. Everyone reading this now has a reminder that Python is popular at Paypal. We also know that endless PLT arguments are probably not common in the workplace.


Fortunately, there are other large organisations employing decision-makers who understand the business value of type systems.

Apple: https://developer.apple.com/swift

Facebook: http://flowtype.org + http://hacklang.org + http://hhvm.com

Microsoft: http://fsharp.org + https://haskell.org + http://typescriptlang.org

Mozilla: http://rust-lang.org

Google: The jury is out on this one.


Google, Apple, Facebook are the tech companies.

Even though they are enterprises, the term enterprise software often refers to the other companies, which use tech as an aid to run their main businesses.

Think about Box, MobileIron, and the many similar companies targeted at enterprises, and about who their customers are. That is the enterprise software market.


Can you clarify what you mean about Google?


For one, both Dart and Go repeat Hoare’s Billion Dollar Mistake, by admitting null references instead of implementing option types.

In a previous HN discussion, I wrote to one of the Go designers:

> I disagree with you on the relative complexity of type systems, and, as someone passionate about my craft, I despair Go is merely good not great, due to what appears to be uninformed design decisions.

> You may prefer to write in Go, but you don't work in a vacuum. Your creation is out there, gaining mindshare, and propagating mistakes made half a century ago. As a language designer, you have the power to shape human thought for years to come, and the responsibility to remain intellectually honest. This is the meaning of Hoare's apology. He didn't know better. You have no such excuse.

https://news.ycombinator.com/item?id=5822545


Well said. Your comment on that link was quite insightful as well. I'll quote it here below:

"So, Go has message passing. Just like Erlang, since 1986.

Only without per-process heaps, which make crashing processes safe. And without process linking and supervision, which helps systems built using Erlang/OTP achieve nine nines of uptime. Instead, it includes null references, also known as Hoare's Billion Dollar Mistake.

But it's not enough to scorn the industry; Go's designers also look down their noses at academia.

A modern, ML-derived static type system? Generics, which would enable the unwashed masses to write their own `append`? Ain't nobody got time for that — wait, what? Oh, Rust does?

Go's tooling is fantastic, and its pragmatism is commendable, but ignoring the last 30 years of programming language research is not."


I think in both cases, they are allowing pragmatism to win out over correctness. The underlying systems that Dart & Go try to interface with embrace NULL with a kind of oblivious joy that can't be replicated.


I find the idea that optionals aren't pragmatic pretty bizarre. Of all the features that "advanced" type systems give you, optionals are some of the most straightforward. I still haven't heard a convincing argument for why they should not be in Go (e.g., someone pointed out that all types in Go have a default value, but there is a very obvious candidate for a default value for optionals--None; "in practice these issues don't come up"--doesn't that just mean people don't use null much? Why allow it to inhabit every type, then?).
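As a small sketch of the "obvious default" point, in Python's optional-typing notation (the class and field names here are invented for illustration):

    from typing import Optional

    # Hypothetical record: the natural zero value of an optional field is None,
    # the same way Go zero-initializes ints to 0 and strings to "".
    class Employee:
        def __init__(self, name: str, manager: Optional["Employee"] = None) -> None:
            self.name = name
            self.manager = manager  # "no manager" is stated in the type, not implied

    alice = Employee("alice")
    print(alice.manager)  # None -- the default value, not a surprise hiding in every type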

Anyway, neither Dart nor Go is particularly "close to the hardware" as they both have fairly substantial runtimes. We're not talking about a macro assembler here, we're talking about a typed programming language. What the compiler does under the hood is largely irrelevant in any language that doesn't make a distinction between register, stack, and heap allocation.


It’s an example of the “New Jersey approach” to programming language design.

> Simplicity -- the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.

http://www.jwz.org/doc/worse-is-better.html


If the underlying interface (JavaScript or POSIX, as the case may be) is using NULLs up the wazoo, then you need a way to represent that. You can try to represent it as optionals, but that creates impedance, which ultimately may cost you more complexity (and performance).


It doesn't have to cost performance (not even compiler performance!) or create impedance. You just represent the None variant as null in the compiled output. I realize this sounds suspiciously easy and it seems like there must be something I'm glossing over, but there honestly isn't, as long as you never introduce null to your own language in the first place. Once you've done that, though, it gets harder, because you have to differentiate between Some(null) and None, which means they can't both be compiled to null. But this is a completely self-imposed problem; it is not an issue when you create a language from scratch, only if you want to retrofit optionals onto an existing language.


> I realize this sounds suspiciously easy and it seems like there must be something I'm glossing over, but there honestly isn't, as long as you never introduce null to your own language in the first place.

Yeah, having done this before, it isn't that easy. You basically have to map NULL to something else, and if that mapping is so direct and straightforward, you actually haven't improved your engineering one bit.


It is literally that easy. This is how it is done in Rust, for instance. You have improved your engineering by (1) requiring exhaustive match on anything that is potentially null, and (2) eliminating the need to check for null anywhere else. I don't understand why people take it as an article of faith that this must be difficult. In fact, the sheer simplicity of it is why I believe it should be in Go, and am confused about why it is not.
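A rough Python analogue of points (1) and (2), assuming the annotations are checked with a tool like mypy (the function names are invented for illustration):

    from typing import Optional

    def find_port(config: dict, key: str) -> Optional[int]:
        # Absence is part of the return type, not a property of every value.
        return config.get(key)

    def connect(config: dict) -> int:
        port = find_port(config, "port")
        if port is None:      # (1) the checker insists the None case is handled...
            return 8080
        return port           # (2) ...and beyond this point no null checks are needed

    print(connect({"port": 443}))  # 443
    print(connect({}))             # 8080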



Null is much less of a problem than having no static type system at all.

A "billion dollar" mistake is a mistake Google can affod to make :-/


Most likely a quip about how Go's type system is uninspiring.


Ironically, Go being uninspiring is its greatest strength.


Or the criticisms of Python that he has encountered are just different from your criticisms.

I have not seen anyone complain that Python is not compiled for at least a decade, but maybe the author has.


I always read "not compiled" as a synonym for "slow."

I'm not worried about Python's type system. At worst it's manageable, at best it's expressive for prototyping.

But when I see benchmarks that suggest Python is 10 to 100X slower for critical server code, I have to wonder why anyone would use it for enterprise development.

Which is why there are so many Java and C++ code jockeys working in enterprise. Neither language is pretty or fun or interesting from a CS point of view. But there's no arguing both consistently run faster than anything this side of assembler.

I would have expected critical industrial infrastructure code to pay some attention to that - because speed isn't an abstraction. When you're running giant data centres, extra cycles consistently cost real money.

Dev costs are relatively small compared to operating costs. So it's well worth spending extra time getting good, fast compiled code working.


PyPy really closes the speed gap. The latest PyPy 2.5 release uses pinning to pass pointers between the C layer and PyPy, greatly improving IO performance [1]. I've noticed this in a project I've been working on that holds open large numbers of concurrent connections (> 100k at a time); PyPy has been completely competitive with Go, while actually using less memory. Yes, it's not as fast as a Java/C++ version, but with PyPy it's more like 2-5x slower, not 10-100x slower, which really changes things.

[1] http://morepypy.blogspot.com/2015/02/pypy-250-released.html


>But when I see benchmarks that suggest Python is 10 to 100X slower for critical server code, I have to wonder why anyone would use it for enterprise development.

Because CPU cycles are cheap and bugs are not.

>Dev costs are relatively small compared to operating costs.

Uhh, not in my experience.


It really just depends on the project. Sometimes, all you want to do is, like, take data, type-check it, maybe do a couple of simple transforms, and then store it. But you want to do it 50,000 times per second.

In that case, it may very well be the case that ops costs absolutely dwarf dev costs.

Similarly, it may be that what you want is to take data and run it through super-complicated algorithms depending on a lot of business data, and massage it all over the place... but you only need to do this 10 times per second. In which case your dev costs may absolutely dwarf your ops costs.


>> Dev costs are relatively small compared to operating costs.

> Uhh, not in my experience.

It's probably not possible to make a true statement out of context about which costs less. This depends quite heavily on what you're doing.


Python being really fast to code means that the biggest optimizations (the ones that give you a million-fold or greater improvement in performance) are fast and cheap to implement. If performance is still that relevant afterwards, you can always replace parts of the code, with the biggest optimizations already in place... And replacing code that already solves a problem tends to be much safer than solving it from scratch in a development-time-intensive language.

C++ does have its place (I'm not convinced about Java), but starting a project in it purely for performance, when you could build it just as well in Python, isn't a good policy.


On my latest project, I've spent about 5k in operating costs, and about 100k in dev costs, and that is just one example. I really don't know where you're coming from when you say dev costs are relatively small compared to operating costs...


In much enterprise development (indeed much of development in general) development time is much more important than CPU time. Not just because of price, but also in reaction time to new and changing requirements.


> But there's no arguing both consistently run faster than anything this side of assembler.

FORTRAN is faster than C++ and Java but you don't see it used much in enterprise anymore...


> But when I see benchmarks that suggest Python is 10 to 100X slower for critical server code, I have to wonder why anyone would use it for enterprise development.

Because not all enterprise development relates to critical server code?

> Neither language is pretty or fun or interesting from a CS point of view.

I find working with C++ brings huge amounts of fun, and I do that almost daily :P But maybe I don't have a CS view (not sure what that is supposed to be?)


Oh sure. I hear complaints all the time about Python not catching things you'd sure wish a compiler would catch. You end up just writing more unit tests to compensate, but that annoys people.
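For example (a small sketch with invented names; the static half assumes a checker such as mypy is run over the annotations):

    def total_cents(prices: list) -> int:
        return sum(prices)

    print(total_cents([499, 1000]))       # 1499

    # A compiler (or mypy, given a list[int] annotation) would reject this call;
    # plain CPython only fails at run time, deep inside sum() -- exactly the kind
    # of mistake those extra unit tests exist to catch.
    try:
        total_cents(["4.99", "10.00"])
    except TypeError as exc:
        print(exc)  # unsupported operand type(s) for +: 'int' and 'str'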


Some people - who do understand type systems - only think of static type systems as type systems. And in that regard they might look at "dynamically typed languages" as having "weak type systems", since they are often just unityped (only have one type).


Unfortunately, the words “strong” and “weak” often don’t mean anything more than:

> Strong typing: A type system that I like and feel comfortable with

> Weak typing: A type system that worries me, or makes me feel uncomfortable

http://web.archive.org/web/20091227121956/http://www.pphsg.o...


I agree to a degree, though some things are at least confined to certain dimensions, like strong/weak typing. It's a continuum more than a binary distinction, but at least you can say one thing is "stronger" or "weaker" than another.

(Note that I wrote "weak type system", not "weak typing". "Weak" here is just a generic adjective, and not meant to be precise.)
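One concrete way to see the strong/weak dimension, for what it's worth:

    # Python is dynamically typed but comparatively strong: it refuses to coerce
    # unrelated types silently, where a weaker system (JavaScript, say, in which
    # "1" + 1 yields "11") would coerce and carry on.
    try:
        "1" + 1
    except TypeError as exc:
        print(exc)  # can only concatenate str (not "int") to str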



