> What irritates me is when people say classes are bad. Or subclassing is bad. Thatʼs totally false. Classes are super important. Reference semantics are super important. If anything, the thing thatʼs wrong is to say, one thing is bad and the other thing is good. These are all different tools in our toolbox, and theyʼre used to solve different kinds of problems.
The issues with inheritance-based OOP are that it fits very few problems well, that it usually causes lots of problems, and that many programming languages only have inheritance-based OOP in their toolbox.
Java is the extreme case of this. Patterns like abstract visitor factories are hacks to express situations that cannot be expressed in an obvious way.
Inheritance is just one of multiple facets of safe code reuse in OOP. Aggregation, composition, and encapsulation are just as much fundamental notions in OOP as inheritance. So I think reducing OOP in general, and Java in particular, to "inheritance-based OOP" is a mischaracterization
> are that it fits very few problems well, that it usually causes lots of problems and that many programming languages only have inheritance based OOP in their toolbox.
Do you have any objective way to measure that?
> Patterns like abstract visitor factories are hacks to express situations that cannot be expressed in an obvious way.
But isn't that the reason to have a pattern? An easy way to express a non-obvious recurring situation?
None of those things are necessarily fundamental notions in OOP so much as they are core constructs in many OOP languages. It's quite trivial to find multiple examples of why inheritance causes far more problems in many cases than composition: rigid class hierarchies and fragile superclasses are some such examples (the problems being extensibility/maintainability and high coupling). Patterns are ways to work semi-cleanly within an OOP design, but you'll find that it's often the case that a lot of verbosity needs to come along for the ride: you end up working out how to structure class hierarchies more than solving the actual problems those classes are meant to solve.
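A minimal Swift sketch of the fragile-superclass case (types invented for illustration; the same example is usually given in Java):

```swift
// Hypothetical sketch of the fragile-superclass problem: the subclass depends
// on how the superclass is implemented, not just on its public contract.
class IntBag {
    private var items: [Int] = []

    func add(_ item: Int) { items.append(item) }

    func addAll(_ newItems: [Int]) {
        for item in newItems { add(item) }   // implementation detail
    }

    var count: Int { return items.count }
}

class CountingBag: IntBag {
    private(set) var additions = 0

    override func add(_ item: Int) {
        additions += 1
        super.add(item)
    }

    override func addAll(_ newItems: [Int]) {
        additions += newItems.count          // assumes addAll does NOT call add...
        super.addAll(newItems)               // ...but it does, so this double-counts
    }
}

let bag = CountingBag()
bag.addAll([1, 2, 3])
print(bag.additions)   // 6, not 3: the subclass broke because it relied on
                       // a superclass implementation detail it cannot see
```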
There's a reason why composition is preferred over inheritance. There's also good reason why some programmers will take it to the extreme and say that it ALWAYS causes more problems. Some languages' lack of an appropriate aggregation alternative generally keeps inheritance alive.
Some people will hold onto their positive notions about inheritance too, and that's fine, but there's a reason why many people advocate strongly against it (and why some modern language designers skip it altogether!)
> Some languages' lack of an appropriate aggregation alternative generally keeps inheritance alive.
I might not be understanding well, but are you talking of languages that happen to have no _composition_? But do have inheritance? Sounds like those with inheritance are a strict subset of those with composition (which in turn would be all but fringe languages).
No, I'm talking about a missing language construct whose absence makes composition a pain and inheritance a quick fix: forwarding. Inheritance is often used improperly because writing forwarding methods to delegates is a pain. IS-A/HAS-A goes OUT-A the window: in many cases this ushers developers into adopting inheritance because it requires less typing.
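A small Swift sketch (names made up) of what "writing forwarding methods to delegates is a pain" looks like in practice:

```swift
import Foundation

// Hypothetical logger types, showing the forwarding boilerplate that
// composition requires when the language offers no delegation shorthand.
class Logger {
    func info(_ message: String)  { print("INFO: \(message)") }
    func warn(_ message: String)  { print("WARN: \(message)") }
    func error(_ message: String) { print("ERROR: \(message)") }
}

// HAS-A: the honest relationship, but every method must be forwarded by hand.
class TimestampingLogger {
    private let wrapped = Logger()
    private func stamp(_ m: String) -> String { return "\(Date()) \(m)" }

    func info(_ message: String)  { wrapped.info(stamp(message)) }
    func warn(_ message: String)  { wrapped.warn(stamp(message)) }
    func error(_ message: String) { wrapped.error(stamp(message)) }
    // ...and so on for every method the wrapper should expose.
}

// IS-A: not really true, but it needs far less typing, which is why
// inheritance so often wins by default.
class LazyTimestampingLogger: Logger {
    override func info(_ message: String) { super.info("\(Date()) \(message)") }
    // warn/error silently keep the un-stamped behaviour -- a latent bug.
}
```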
It seems to me that you are just explaining what the parent was saying. My point is not that inheritance is good/bad or useful; my point was that OOP doesn't imply inheritance, it's just one of the many ways OOP languages reuse code.
Aggregation and composition are almost the same, are not distinguished in e.g. Java, and exist in pretty much every programming language regardless of whether it is object-oriented.
Inheritance is presented everywhere as the go-to method for structuring everything in languages supporting inheritance. The whole Java class library consists of huge class hierarchies. Every non-trivial Java code base I have seen is heavily infested with inheritance. Inheritance, together with other forms of polymorphism, is the biggest differentiator from structured programming in e.g. C.
Inheritance is anything but safe: you have to check whether you broke the behavior of all methods and public fields you are inheriting. How often do you go through all the classes you are inheriting from?
Sometimes you cannot even fix this with overriding methods:
You can easily construct a cut-off ellipsoid from a regular ellipsoid, but then it stops being a quadric surface, invalidating your nice class hierarchy. And you cannot change the class hierarchy because half of your codebase depends on it.
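A toy Swift sketch of that situation (types invented for illustration, not from any real geometry library):

```swift
// Toy hierarchy invented for illustration.
class QuadricSurface {
    // Invariant the rest of the codebase relies on: the surface satisfies a
    // single second-degree equation, so e.g. plane intersections are conics.
    func intersectionWithPlaneIsConic() -> Bool { return true }
}

class Ellipsoid: QuadricSurface {
    let radii: (Double, Double, Double)
    init(radii: (Double, Double, Double)) { self.radii = radii }
}

// Reusing Ellipsoid by subclassing is tempting, but a cut-off ellipsoid is no
// longer a quadric: the inherited guarantee is now simply false, and every
// API that accepts a QuadricSurface will happily accept it anyway.
class CutOffEllipsoid: Ellipsoid {
    let cutPlaneHeight: Double
    init(radii: (Double, Double, Double), cutPlaneHeight: Double) {
        self.cutPlaneHeight = cutPlaneHeight
        super.init(radii: radii)
    }
    override func intersectionWithPlaneIsConic() -> Bool {
        return false   // quietly breaks the superclass contract
    }
}
```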
Patterns are like neat useful tricks. Like how to open a bottle with a hammer. Pretty nice to open your beer at the end of the day in a workshop. If you work in a bar and constantly have to use tricks to get your hammer to do your work, you should probably rethink whether a hammer is the right main tool for you.
> Inheritance is presented everywhere as the go-to method for structuring everything in languages supporting inheritance. The whole java class library consists of huge class hierarchies.
That conclusion holds if your primary experience is Java: the Java class library is widely believed to have abused inheritance.
However, other languages have avoided that trap. Apple's Cocoa frameworks in ObjC do use inheritance, but also delegation, notifications, etc. Swift also supports inheritance, but here we see Lattner describing inheritance as a "tool in our toolbox," not as the "go-to method for structuring everything."
> Inheritance is everything else but safe: you have to check if you broke the behavior of all methods and public fields you are inheriting.
Designing a class interface intended to be inherited is like any other API design exercise. Your API commits to invariants, and it's the client programmer's responsibility to follow them; if they do the class should not be broken.
If you find yourself checking "all methods and public fields," either the API is bad or you've misunderstood it.
Again, it's just one API design tool. Sometimes the alternative to inheritance is just an ad-hoc, bug-ridden re-implementation of inheritance.
> Inheritance is presented everywhere as the go-to method for structuring everything in languages supporting inheritance.
No, it's not. Many places recommend restraint in use of inheritance (and in languages that support only single inheritance, there are sharp limits to what it can do to start with.)
Patterns are just observed recurrences in a large sample of artifacts. Many people naively see the design patterns book as instructive, when in reality it is just retrospective.
> Aggregation, composition, and encapsulation are just as much fundamental notions in OOP as inheritance.
You say that as if these things are not just as easily expressed in FP -- if not even easier.
Aggregation is just records-of-records.
Encapsulation is just "abstract data types", e.g. ML modules with abstract members, or non-exported data constructors in Haskell. Another option would be simply closing over whatever you're trying to hide. Another option would be existential types. (There's some overlap among all of these.)
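To illustrate the "closing over whatever you're trying to hide" option, here is a minimal sketch, written in Swift to keep this thread to one language; the hidden state is reachable only through the returned functions:

```swift
// Minimal sketch of closure-based encapsulation: the mutable count is not a
// field on any object; it lives only in the closures' captured environment.
func makeCounter() -> (increment: () -> Int, current: () -> Int) {
    var count = 0   // hidden state, no way to reach it from outside
    return (
        increment: { count += 1; return count },
        current:   { count }
    )
}

let counter = makeCounter()
_ = counter.increment()
_ = counter.increment()
print(counter.current())   // 2 -- but `count` itself is not accessible here
```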
Composition... well, actually I'm not sure what exactly you mean by "composition" in the context of OO. Can you explain what you mean?
Reminds me of PHP CEO's tweet: "If you had bad experiences with scrum you just did scrum wrong. If you had good experiences making software you just accidentally did scrum".
> You say that as if these things are not just as easily expressed in FP
No, that's what you read. What I said is what I wrote: OOP is bigger than inheritance.
> -- if not even easier.
Again, this kind of statement really sounds like empty FP propaganda to me.
> Encapsulation is just "abstract data types", e.g. ML modules with abstract members, or non-exported data constructors in Haskell. Another option would be simply closing over whatever you're trying to hide. Another option would be existential types. (There's some overlap among all of these.)
I think you may have misunderstood my intent. I was just trying to say "FP can do these things too".
> Again, this kind of statement really sounds like empty FP propaganda to me.
Right, so any type of even very modest support for X is "empty X propaganda". Can we please assume at least a modicum of good faith here?
> Or "abstract data types" is just Encapsulation...
Oh, so "semantics" it is then. Oh, well.
FWIW, I think I'm right in saying that ADTs were invented quite a bit before OOP & "Encapsulation".
I notice that you also didn't answer my question of what Composition actually means in OOP. Do you have an answer? I promise, I wasn't being facetious.
That's easy, the word is well defined in the dictionary: a person or thing to which a specified action or feeling is directed. Some people try to call objects entities and claim they are doing something else, but they are just doing the same thing with different words.
What's "inheritance based OOP"? The kind of OOP that models everything using subclassing? Partly due to structural static typing so compatibility/polymorphism can only be achieved by having a common ancestor class?
Sure, but that's missing the point of OOP almost completely.
Anyway, subclassing is an extremely useful and by now somewhat underrated tool: it allows for unanticipated extension and programming-by-difference. Meaning you already have something that's close to but not quite what you need.
Inheritance-based OOP models tree-like entities well, where the hierarchy is defined and clear-cut. Unfortunately, lots of real-life domains are best expressed by graphs, commonly a DAG. You need to pay attention to your edges and not just the nodes. Inheritance-based OOP gives you one keyword to express your edges, extends, and it's horribly inadequate. Mutable state is not an issue in Java OOP; lacking expressive power is.
Trees can be modelled with sum types, the mathematical dual of product types (records). Java doesn't have sum types and so inheritance has to be used to encode them.
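A quick Swift illustration of the point (Swift enums with associated values are sum types; the Java equivalent would typically be an abstract base class, subclasses, and a visitor):

```swift
// A binary tree as a sum type: either a leaf value or a node with two subtrees.
indirect enum Tree {
    case leaf(Int)
    case node(Tree, Tree)
}

// Pattern matching replaces the visitor pattern: the compiler checks that
// every case is handled.
func sum(_ tree: Tree) -> Int {
    switch tree {
    case .leaf(let value):
        return value
    case .node(let left, let right):
        return sum(left) + sum(right)
    }
}

let tree = Tree.node(.leaf(1), .node(.leaf(2), .leaf(3)))
print(sum(tree))   // 6
```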
Subtyping adds huge amounts of complexity to type systems and type inference. Dart even chose to have an unsound type system because subtyping and parametric polymorphism (generics in Java) were deemed too hard for Google programmers to understand. The Go designers agreed.
Haskell and OCaml are a joy to program in, in part because they (mostly) eschew subtyping. So yes, subtyping is controversial.
>Dart even chose to have an unsound type system because subtyping and parametric polymorphism (generics in Java) were deemed too hard for Google programmers to understand. The Go designers agreed.
And they were both wrong, as millions of programmers use Java and C# and C++ just fine, and have created much more impressive software than Dart or Golang programmers have. Plus, people used to the power of C++ would never switch to Golang (which is also what the Golang team observed: they mostly got people from Python/Ruby and that kind of service).
>Haskell and OCaml are a joy to program in, in part because they (mostly) eschew subtyping.
Sorry, but did you just say that "subtyping and parametric polymorphism (generics in Java) were deemed too hard for Google programmers to understand" (an argument based on complexity) and then go on to argue in favor of Haskell, which is notoriously difficult to grasp and has so many foreign concepts that it makes generics look like BASIC-level concepts?
Ignoring all the abstractions and concepts (made possible by underlying simplicity), Haskell is absolutely a simpler language than C++ or Java with Generics.
EDIT: my point was that subtyping is controversial, I assume the downvotes are for my own personal position.
>Who are these people, were they professional programmers? How hard were they trying? Perhaps they just wanted a better Java?
How hard should they have tried? Should they have burnt the midnight oil?
This Java you talk about, is still in the top 1-3 languages by programmers, is it not?
And supposedly Haskell is easier (according to the parent comment) but at the same time needs people to try harder to get it than Java? It can't be both...
>I've worked on very large codebases in many large organisations with C++, Java and Haskell. Haskell certainly wasn't the horror story.
No, but it was the new novelty on greenfield stuff. It's easy to start out looking better. Java wasn't the horror story in 2000 either.
> This Java you talk about, is still in the top 1-3 languages by programmers, is it not?
I'm disappointed that you appear to judge technology based on popularity. By this argument, JavaScript trumps Java.
> And supposedly Haskell is easier (according to the parent comment) but at the same time needs people to try harder to get it than Java? It can't be both...
I can assure you, having learned both, that understanding Java and all its idioms/patterns is at least as hard as learning Haskell, especially if one seeks to build concurrent software (where the intricacies of Java's complex memory model cannot be ignored).
Because of the immense amount of investment already made, these people likely give up when they realise they cannot make much use of it in Haskell, at least not at first.
> No, but it was the new novelty on greenfield stuff.
You've made an incorrect assumption. The Haskell codebases I have worked on, in both cases, have been large and over a decade old.
Anyway, my original post was about the controversy of subtyping, not Haskell versus Java.
He said Haskell was simpler than C++, which might be true. That doesn't mean it's easier. I'm more familiar with F# and C#. I'm guessing that the F# language is simpler insofar as it has fewer language concepts. But many people struggle and would find it more difficult, at first. Brainfuck is even simpler, yet even harder for many people.
Dart is not unsound (anymore), and they rely heavily on nominal subtyping. Ironically, typescript is unsound, and it relies heavily on more functional structural subtyping (in both cases soundness matters not much). Neither language has particularly good type inference. I believe Go is also structural, though without generics, its type system is bound to be simple either way.
Interesting interview. Java is mentioned many times as a language Swift aspires to replace. He is right about Kotlin:
"Kotlin is very reference semantics, itʼs a thin layer on top of Java, and so it perpetuates through a lot of the Javaisms in its model.
If we had done an analog to that for Objective-C it would be like, everything is an NSObject and itʼs objc_msgSend everywhere, just with parentheses instead of square brackets. .."
I think Swift has a real chance to reach Java-level popularity. It is already at #11 in the RedMonk ranking. All languages above Swift are at least 15 years older than Swift. And once it gets server-side features like concurrency, it can be much more general purpose.
I wish Swift focused on reference semantics. One of the big problems of value types in C++ is that you have to be a language lawyer to not accidentally make wasteful copies of everything, and the same is true of Swift.
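To illustrate the kind of trap meant here, a hedged Swift sketch (sizes and names invented) of an accidental copy that value semantics plus copy-on-write can hand you, and the `inout` form that avoids it:

```swift
// Made-up type to illustrate an accidental copy with value semantics.
struct Document {
    var pages: [String] = Array(repeating: "page", count: 100_000)
}

// Pass-by-value: the array storage is now shared with the caller, so the
// append must first duplicate all 100k elements (copy-on-write kicks in).
func appendPageByValue(_ document: Document) -> Document {
    var copy = document
    copy.pages.append("new page")
    return copy
}

// Pass-inout: the caller's storage is mutated in place; no copy-on-write
// duplication is triggered because the storage stays uniquely referenced.
func appendPageInPlace(_ document: inout Document) {
    document.pages.append("new page")
}

var doc = Document()
doc = appendPageByValue(doc)   // easy to write, quietly copies the whole array
appendPageInPlace(&doc)        // same effect, no wholesale copy
```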
I thought Objective-C had already solved this problem quite nicely with the explicit NSFoo/NSMutableFoo class pairs. I don't see why this needed to be fixed again, in a less explicit way.
> One of the big problems of value types in C++ is that you have to be a language lawyer to not accidentally make wasteful copies of everything...
Not always. The C++ standard has allowed copy elision for some time [1]. Guaranteed copy elision for certain forms of copy has been proposed for C++17 [2].
The conflation of values (which exist in the language's semantics on paper) with physical representations (which exist in computer memory) plagues most imperative languages that try to do value semantics.
For that to happen, Swift needs to be usable at Java's level on all OSes where JVMs/JDKs (some of them with AOT support since the early days of Java) exist.
I am still waiting for first class support on Windows on the download page.
Right now Rust has much better OS support than Swift.
At the timeframe discussed on the panel, I don't think Swift is really lagging. In a few years, it has gone rather well, and in a few more years, it should mature a lot more on multiple platforms. Right now, I think attempting to use Swift on a Linux server would be a big nuisance; it's enough to look at the open-source implementation of Foundation & co. and the many trivial things still missing. Once that is complete, it should start becoming more interesting. I don't think the push for Windows support will be very difficult.
> I think attempting to use Swift on a Linux server would be a big nuisance
I beg to differ. Look at Vapor, Kitura & Perfect.
I know Foundation is missing implementations for Linux, but it is not something that makes it a big nuisance IMO.
You can quickly have a setup with Swift on Linux running a simple CRUD app.
Yes it is possible and IBM is the one pushing for it.
However it is still light years behind JEE and Spring features, including parity with existing JDBC drivers.
Also, besides Instruments on OS X, there are no comparable performance monitoring tools like VisualVM, Mission Control and many others from the JDK vendors.
But as far as I know, Instruments is using tools available already in lldb, clang and dtrace. So building a UI that displays the collected data should be more than possible.
>For that to happen, Swift needs to be usable at Java's level on all OSes where JVMs/JDKs (some of them with AOT support since the early days of Java) exist.
No, you seem to confuse "replace Java" with "TOTALLY AND ABSOLUTELY replace Java everywhere".
Swift just needs to be usable on the platforms that matter -- and since it's there by default on OS X, that leaves Windows and Linux.
Nobody cares if it runs on some mainframe architecture where 0.001% of Java use happens, or some other obscure environment.
Apparently you have forgotten about Android, smart cards, Blu-ray players, embedded devices controlling your electric and heating bill, car infotainment systems, IoT gateways, Cisco phones, Ricoh and Xerox laser printers, ....
Also as of 2017 Swift is still quite unusable on Linux beyond a few demo apps and on Windows nowhere to be seen.
Additionally GNU/Linux, BSD and Windows are better served by Rust, SML, OCaml, Haskell and F# than Swift.
In all aspects that matter, they have better compilers to choose from, IDE support, libraries and tooling.
The positive aspect of Swift is that it is a modern multi-paradigm language pushed by a company like Apple, which will hopefully improve the adoption of such languages.
However outside Apple's own operating systems, Swift has a very long road to travel before it gets any kind of meaningful adoption, let alone being a threat to programming languages on GNU/Linux, BSD and Windows.
>Apparently you have forgotten about Android, smart cards, Blu-ray players, embedded devices controlling your electric and heating bill, car infotainment systems, IoT gateways, Cisco phones, Ricoh and Xerox laser printers, ....
I haven't forgotten them, but apart from Android, I simply don't care about them as domains where Swift would need to dominate to call it a Java replacement. Nobody cares what runs in a set-top box or smart card reader, and Blu-ray players won't be a thing much longer (if they ever were).
Heck, even Android is moving to Kotlin (and it's Dalvik, not the JVM, so it's twice removed from Java now), and don't they also have a new framework for Android programming with Dart? And Fuchsia, when that comes out, I don't see it featuring Java either.
I don't think Lattner had those things in mind when he mentioned competing with Java either. Nor did he have in mind some future in which Java has 0% market share and Swift has all of Java's share in every domain. (Plus, he mentioned the server side and service development as targets Swift is interested in specifically.)
>Additionally GNU/Linux, BSD and Windows are better served by Rust, SML, OCaml, Haskell and F# than Swift
That would be relevant if somebody had said that Swift is to replace them today. But what was said was an intention. Not a description of the current situation.
(And let's be real: I don't think OCaml, SML and Haskell will ever go that far in the areas Swift is interested in. They are excellent languages but either too esoteric, or with too-small communities that don't show much sign of getting any bigger. Languages with corporate backers, on the other hand, usually fare much better -- so Rust still plays, even if Mozilla is not a major player, because it also chose a much-needed niche).
> Heck, even Android is moving to Kotlin (and it's Dalvik, not the JVM, so it's twice removed from Java now), and don't they also have a new framework for Android programming with Dart? And Fuchsia, when that comes out, I don't see it featuring Java either.
Apparently forgetting again that without Java and the JVM there would be no Android Studio, and that 100% of all major Android libraries are written in Java.
Java will never go away from Android, just like C will never go away from any UNIX-like OS.
Praising Kotlin while bashing Java, is like thinking any UNIX derived OS will ever use anything other than C for its system level programming, or that JavaScript will ever stop being the king of the browser in spite of WebAssembly.
The framework you mention is Flutter, and the Android team doesn't have anything to do with it. It is being developed by the Dart team while searching for whatever might become the language's killer feature, so that it gets adopted outside the AdWords team and Google's walls.
Java might eventually lose to other programming languages, hence the active work regarding improvements on AOT compilation, value types and GPGPU programming.
But the language taking Java's place surely won't be Swift.
> Kotlin is very reference semantics, itʼs a thin layer on top of Java
One of the main points of Kotlin is that it integrates tightly with IntelliJ. So Kotlin is a layer (not so thin) between a visual IDE (IntelliJ or Android Studio) and Java-decompilable bytecodes on the JVM.
You don't get that with other JVM languages, e.g. Apache Groovy only gives correct type hints in Eclipse 80% of the time, and JAD stopped working on Groovy-generated bytecodes in Groovy 1.7.
Wow, he just really really does not like C++. He is certainly an extremely knowledgeable C++ guy, obviously Swift is written in C++, but it's hard to entirely agree with his opinion on it across all fronts.
On one hand we love the language, the expressive power it gives us, the type safety taken from Simula and Algol, thanks to C++'s type system.
On the other hand, as Chris puts it, it "has its own class of problems because itʼs built on the unsafety of C".
So some of us tend to work around it by using safer languages and only coming down to C++ for those tasks where the better languages cannot properly fulfil them.
But as the CVE database proves, that only works if everyone on the team cares about safety; otherwise it is a lost game, only fixable by preventing everyone on the team from writing C-style unsafe code in the first place.
Sure, nowadays there are plenty of analysers to ensure code safety, but they work mostly on source code and, like any tool, depend on people actually caring to use them.
It's still in early development, but there is a tool[1] that automatically converts potentially unsafe C/C++ code to be memory-safe. If the problem is C/C++ programmers that write unsafe code, rather than trying to change their practices (or even change the language they program in) maybe it's more practical to just have a robot "fix" the code.
As someone who spent over 20 years writing applications in C, anything built on C is crap and that includes C++ and Objective C.
Writing code is fun and interesting. But most software development is not writing code. It's a little bit of build management, even more testing, but mostly it's debugging. Debugging is not as fun as writing code. Every language feature that makes debugging more necessary, harder to do and more time intensive sucks. Dangling pointers are the absolute worst.
I can easily give up multiple inheritance for a more functional language that's far easier to write correct code in.
> As someone who spent over 20 years writing applications in C, anything built on C is crap and that includes C++ and Objective C.
Maybe that's the problem: if you see C++ as something "built on C" then it's logical that you see a lot of the same problems. C++ evolved from C specifically to address a lot of the weaknesses in C.
> Every language feature that makes debugging more necessary, harder to do and more time intensive sucks. Dangling pointers are the absolute worst.
Language design is an exercise in compromise, and there is space for multiple compromise points on the spectrum. C++ decided (for better or for worse) to favor performance over a nice debugging experience.
> I can easily give up multiple inheritance for a more functional language that's far easier to write correct code in.
Am I the only one getting tired of this kind of blanket statement?
The problem with C++ is that for all its added complexity and powers, most C code still is correct C++ code, especially all the unsafe pointer manipulations. And there is no real performance reason. Many statically typed languages compile to code as fast as C, if not faster thanks to tighter semantics. (Other than that, some C compilers are better quality because of the effort that went into them due to language popularity, rather than any language feature.)
> most C code still is correct C++ code
Syntactically yes, but with more precise semantics and more clearly stated undefined behavior. The canonical example is the rules around type punning and such.
> And there is no real performance reason. Many statically typed languages compile to code as fast as C, if not faster thanks to tighter semantics.
Speed is only one part of the equation. For stuff like drivers and low-level embedded development, we still need C-like unsafe memory manipulation. Rust, D, C#, etc. all have ways to do that.
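Swift, the language under discussion in this thread, keeps the same kind of escape hatch, just behind an explicit and greppable API; a small sketch:

```swift
// Swift's escape hatch for C-style memory manipulation: available, but
// opt-in and visually loud, unlike implicit pointer arithmetic in C.
let count = 4
let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: count)
defer { buffer.deallocate() }

buffer.initialize(repeating: 0, count: count)
for i in 0..<count {
    (buffer + i).pointee = UInt8(i * 10)   // raw pointer arithmetic
}
print(buffer[2])   // 20 -- no bounds checking here, exactly as in C
```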
> Many statically typed languages compile to code as fast as C, if not faster thanks to tighter semantics. (Other than that, some C compilers are better quality because of the effort that went into them due to language popularity, rather than any language feature.)
With Valgrind, I would say dangling pointers are a solved problem by now. The real debugging headaches in C++ come from stuff like autogenerated constructors, overloading, template specialization, and other features that change semantics without requiring the syntax of the code that experiences the change to reflect that change. My unpopular opinion is that exceptions also fall into this class of dark features.
> With Valgrind, I would say dangling pointers are a solved problem by now.
Given the frequency with which use-after-free vulnerabilities are discovered in C++ programs, I’d say they’re not a solved problem. Valgrind is great but it doesn’t help when the only inputs that cause bad behavior are bizarre attacker-generated ones.
Only on the platforms that support Valgrind, with teams that bother to use it.
Given that Apple, Microsoft and Google keep doing presentations about such tools at their conferences, and my experience at enterprise level, I would say not so many bother to use them.
Avoiding dangling pointers requires a bit of discipline in pre-ARC Objective-C and C++, but now that we have ARC, isn't ObjC pretty much as safe as Swift? (Unless you explicitly use "assign" properties, of course.)
Objective-C's solution had its downsides, though. The classic `[firstName stringByAppendingString:lastName];` being fine when `firstName` is nil, but not if `lastName` is (though it doesn't always crash if `lastName` is nil – it's fine if `firstName` is also nil!). Or how `[myObject isEqualTo:myObject]` returns false if `myObject` is nil. Or how adding a nil object to an NSArray doesn't crash (but is likely a bug!) when calling `arrayWithObjects:`, but does crash when it's an array literal.
These aren't world-ending problems, and you learn the rules easily enough, but it's like Objective-C only solved 1/2 of the problem.
Then there's the really strange corner cases, like how sending a message to nil was undefined behavior:
- when expecting a returned struct, before Apple switched to LLVM 3.0 (would vary depending on the platform's ABI, as well as the size of the struct)
- when expecting a returned floating-point value on PPC <= 10.4,
- when expecting a returned `long long` on PPC (it happened to return `(long)selector`)
For the record, I always loved how Objective-C handles this – I definitely preferred it to Java, Ruby, etc., where you're constantly checking for null or catching NPEs. But I like Swift's solution even more. I no longer have to remember which parameters are nullable, or sanitize my inputs with `NSParameterAssert`s, etc.
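A tiny sketch of the contrast, using plain Swift strings (names are illustrative):

```swift
// In Swift the nullability lives in the types, so the compiler forces the
// nil cases to be handled instead of silently no-op-ing like messaging nil.
func fullName(first: String?, last: String?) -> String? {
    guard let first = first, let last = last else { return nil }
    return first + " " + last
}

print(fullName(first: "Chris", last: "Lattner") ?? "<missing>")   // Chris Lattner
print(fullName(first: nil, last: "Lattner") ?? "<missing>")       // <missing>
```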
No, because there is the whole C part of Objective-C, including UB.
So unless there is some validation of 100% of the code written by the team and third-party libraries, it is impossible to ensure there aren't C-style coding tricks being used.
Also, ARC only applies to Cocoa-like classes; there is no bounds checking; implicit casts are like in C; C-style strings are still used in many APIs; and these are just a few examples of unsafety.
>Look at Javascript or any of these other languages out there. They started with a simple premise (“I want to do simple scripting in a web browser”) and now people are writing freaking server apps in it. What happened there? Is there a logical leap that was missed? How is this a good idea? [Audience laughs] Sorry, I love Javascript too. Iʼd just love to kill it even more.
It's interesting to see people talk a bunch about how nothing is good or bad, it's all a bunch of trade offs, and then they eventually tip their hand. I think people should just come out and say what they think is good or bad.
I think that a very important aspect of achieving world domination for Swift is front-end development for the Web (with a compiler targeting JS or WebAssembly).
In a way, many iOS and macOS applications are front-end software. It makes much more sense to make Swift available for other kinds of front-end development than for server-side coding.
That's the most curious part, that he wants to go more low-level with Swift instead of high-level.
Systems programming seems well catered for with Java, Go and Rust while high level application programming is left at the mercy of javascript (I like TypeScript but it's mostly improvements borrowed from C# that are bolted on). I think there would be a lot to gain there first and foremost by compiling Swift to WebAssembly.
> My goal for Swift has always been and still is total world domination
I hope that this never happens. Swift is great, it's universal and it saves you a lot of time during coding, BUT it also has a very large syntax and a high number of features - the documentation is huge! Most Swift programmers probably don't know the complete syntax and all the features, which is a problem in a world where we code in teams and work with open source (both cases mean that you work with code you didn't write).
We just need a new, simple way for billions of people to explain to computers what to do, and conversely to understand what a computer was told to do, and I'm sure that it's not Swift, Java or C++.
That feature size is why Swift is so scalable. Writing useful programs is easy to do for beginners with a very limited subset of the language. But as you expand your knowledge Swift is rich with features that make complex apps much easier to write for professionals.
I don't think the problem with a large syntax is for writing - it's for reading. What happens when those same beginners are thrust into a professional production codebase and struggle to figure out what's going on?
It's interesting to note that Google has taken the deliberately opposite approach with Go: small syntax, learn 90% of the language's ins-and-outs in a few weeks, so that the average fresh college grad Googler (average tenure being less than 2 years, IIRC) spends as little time as possible ramping up and has relatively predictable output.
And slowly, the language is expanding toward very obvious conclusions.
Personally, I think there needs to be a balance in a language: it should not be too restrictive in its syntax, but on the other hand, it should not be overly complex with too much syntax. I am on the fence with Swift, but it does seem to come down on the complex side.
Don't get me wrong, Go's simplicity comes with considerable tradeoffs and I'm not sure I'd use it in most cases again (started a greenfield codebase with it 3 years ago which is now around 200kloc).
Just pointing out that syntactic edge cases can make writing easy, but most of programming (beyond one-off scripts) isn't writing. See: Scala and C++. Companies using these languages in production frequently disallow entire subsets of syntax or language features because they're hard to maintain.
As I said in another comment, there seems to be a battle between extremes. On one end, the "complex" C++, Scala and others. On the other, simplistic stuff, such as JavaScript, Go, etc. It's probably a gross injustice to put Go and JS in the same category, so I apologize, but for this argument, let's overlook it. I think Swift and Rust are a good middle ground here.
With your large project, in hindsight would you prefer Go or C++? At least with C++, you can go as complex as you want, or stop and set some "rules" that should not be passed. But I err on the side of having the option rather than being restricted.
I think I lean the same way as you: the option for complexity, and counting on static analysis/linters/discipline to bound that complexity.
That said, for all its theoretical flaws, Go is certainly a productive language (and has grown a pretty handy ecosystem over the years). In the end, the project was deemed a success and choice of language probably played a minimal role compared to hiring good people and prioritizing the right features.
There is some truth to what you are saying. I'm not a beginner, so I probably can't appreciate how hard it is to learn new syntactical features. My experience is that I've found it pretty easy to learn new ones from my peers' code, but I struggle with actually forcing myself to teach myself new and more complex language features when the existing ones work so well for me.
I'd say Swift is probably harder to learn than it could be because of how much syntax for common features has changed over major releases.
I have to wonder if pandering only to the "fresh college grad" is the way forward with new languages. We were once beginners - did you see challenges when you were studying the languages you are now fluent in?
I have a feeling the computer world is being torn between two extremes - either the all-out complex approach or the "think of the children" simple approach. Not just in programming languages, but in software development in general. If forced to pick between those two, my personal preference would be the complex, but I would like to see a middle ground of "moderate" technologies.
I realize this is probably unpopular, but I personally think we shouldn't aim for beginner-friendliness in our production languages. If we can do it then fine, but it shouldn't come at the cost of anything else.
People are only beginners for a (hopefully) short time, then they're not. Making things better for them in the post-beginner period is far more advantageous, since that's when the vast majority of productive work occurs.
There is certainly a place for beginner-friendly languages. People need to learn to program at some point, and something which is aimed at helping them do that is really useful. But there's no reason that it should be the same language used by professionals to do real work.
Imagine changing the design of a 747 to make it easier to fly for new pilots. We wouldn't dream of doing such a thing. The cockpit of a 747 is for experienced professionals. If you're learning to fly then you belong in the cockpit of something like a Cessna 152.
Note that I'm not advocating for difficulty just for the sake of difficulty, and I don't want to keep people out. And if a language can accommodate beginners without making things worse for professionals then let's go for it. But it shouldn't be a major goal of most languages.
It's a quandary. Beginner-friendliness does make a language somewhat easier to learn -- someone just trying to wrap their heads around the very idea of a program as a sequence of operations isn't going to appreciate being told that an integer is fundamentally different from a numeric string for reasons they don't yet understand (just to take one example).
Yet once someone has learned a language, they tend to keep using it. Their very inexperience prevents them from understanding that they should switch to something more industrial-strength once they start to write something larger and more complex. Plus, they don't even know when they start to write a program how big it is likely to get.
I don't know what to do about this except to continue to bang on the point you're making.
The cynic in me sees these approaches as a corporate-friendly way to be able to hire more "juniors" to save money. Entire frameworks and languages seemingly exist to give junior developers a low barrier to entry without any effort by the corporation to invest in those junior developers. There needs to be a balance between the need for junior developers to be productive and restricting technology just so that as many junior developers as possible can be productive at low cost to the corporation. With stuff like React Native and Electron, I feel the former takes the lead at the expense of technology.
Agree that simplistic languages could work well in academia, as the first language that students see. I studied C as my first language, and I am not sure that is optimal. Nor do I think Java is that language, as many study that first, these days.
Agreed. Go works for Google's use case (hiring thousands of fresh grads with low average tenure - need to minimize ramp-up time) but most companies should hire to the language, not bring the language to the hires.
In the old days we learned directly with a mix of BASIC and Z80 (or 6502) Assembly, at the age of 10.
I was already doing C++ for MS-DOS while at the technical school (15-18 years old) and learned OOP via Turbo Pascal 5.5 and 6.0 before getting into C++.
Just look at kids today doing C++ with Arduino at school with similar ages.
So I really fail to understand the whole pandering "fresh college grad" concept.
I think the point is to bring in as much cheap[er] labor as possible. I mean, obviously people have been managing to cope with complex languages in complex multi-hundred-thousand and multi-million line codebases for decades, but now, with the startup craze, there is a need for a vast developer workforce that may not be as capable as before. When people go into the business for money reasons alone, things get bleak. I think that concept is for these people. And I get it, money is important! But I think there should be at least some passion there too, and that's not just for software development, but for most walks of life.
Many features only exist because Swift needs to work inside an ecosystem built on Objective-C. Otherwise, would we really have both "static" and "class" methods? Or Swift's method declaration/call syntax that looks unlike anything else (except maybe Objective-C)?
I love Objective-C, but I don't want to inherit its baggage (via Swift) when I write backend code.
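For context, the two keywords do differ in Swift itself: `class` type methods can be overridden by subclasses, while `static` ones cannot. A minimal sketch (example types are made up):

```swift
class Shape {
    class func defaultName() -> String { return "shape" }   // overridable type method
    static func sides() -> Int { return 0 }                  // effectively `class final`
}

class Triangle: Shape {
    override class func defaultName() -> String { return "triangle" }
    // override static func sides() -> Int { ... }   // error: cannot override a static method
}

let metatype: Shape.Type = Triangle.self
print(metatype.defaultName())   // "triangle" -- dispatched dynamically through the metatype
print(Shape.sides())            // 0
```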
While Swift's use of argument labels would probably never have existed without the Objective-C legacy, they're pretty much the last thing I'd point to as "baggage". The Swift 1.x version of them was kinda weird, but now that the rough edges have been fixed I'd consider the optional named parameters one of the strengths of the languages.
I really can't think of anything in Swift 4 that exists in the subset of the language supported on Linux which is there for obj-c reasons that I would consider an actual problem.
I am a big fan of named arguments. But Swift has both argument labels and argument names, and I don't see the point, other than for ObjC compatibility. I find Kotlin's approach conceptually nicer, where parameter names and labels are the same thing.
To give an example (possibly a particularly bad one), UIView.bringSubview(toFront view: UIView) - I feel that the argument label has made it harder, not easier, to give this method a good name.
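For readers who haven't seen the distinction, a small self-contained sketch (the stand-in `View` type replaces UIView so the snippet runs anywhere):

```swift
// Stand-in type so the sketch is self-contained (the real API takes a UIView).
struct View { let name: String }

// `toFront` is the argument label, used at the call site;
// `view` is the parameter name, used only inside the function body.
func bringSubview(toFront view: View) {
    print("bringing \(view.name) to front")
}

bringSubview(toFront: View(name: "banner"))
// Kotlin-style named arguments would instead reuse the single parameter
// name at the call site, e.g. bringSubview(view = ...).
```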
Well, Smalltalk actually, and Swift did a poor job of keeping it. Why everything has to look like C or Pascal is a mystery.
Swift makes Objective-C method calls look like C with extra punctuation, and people are adding classes to JavaScript. I wish Swift had at least respected that, if anything just to cut down on punctuation.
I was responding specifically to the argument that you can't have only class methods because not everything is a class, in which case you could just call them something else other than "class method".
BTW: Before we get too deep in specific language criticisms, let's not forget that Chris Lattner is awesome. The fact that two super smart guys with huge work ethics like Chris and Elon Musk couldn't get along is very disappointing to me.
> The fact that two super smart guys with huge work ethics like Chris and Elon Musk couldn't get along is very disappointing to me.
I'm storing popcorn for the day all the people who've been burned working for Musk finally come together and speak out about his insanity as a manager.
I suspect the thing holding them back is that Musk's goals are laudable and everyone still wants them to succeed.
But be glad you're a (potential?) customer of Musk's, not an employee.
Was Lattner the right person to run the Tesla Autopilot development program? Spending seven years perfecting a programming language is a very different -- and relatively serene -- job compared to putting together the mythical ML/heuristics package that will prevent people from dying in self-driving Teslas, and making it happen yesterday.
Yes he was, because it's a software problem and he's proven himself to be world class at solving software problems. Here is some more evidence from his resume.
"When I joined Tesla, it was in the midst of a hardware transition from "Hardware 1" Autopilot (based primarily on MobileEye for vision processing) to "Hardware 2", which uses an in-house designed TeslaVision stack. The team was facing many tough challenges given the nature of the transition. My primary contributions over these fast five months were:
We evolved Autopilot for HW2 from its first early release (which had few capabilities and was limited to 45mph on highways) to effectively parity with HW1, and surpassing it in some ways (e.g. silky smooth control).
This required building and shipping numerous features for HW2, including: support for local roads, Parallel Autopark, High Speed Autosteer, Summon, Lane Departure Warning, Automatic Lane Change, Low Speed AEB, Full Speed Autosteer, Pedal Misapplication Mitigation, Auto High Beams, Side Collision Avoidance, Full Speed AEB, Perpendicular Autopark, and 'silky smooth' performance.
This was done by shipping a total of 7 major feature releases, as well as numerous minor releases to support factory, service, and other narrow markets.
One of Tesla's huge advantages in the autonomous driving space is that it has tens of thousands of cars already on the road. We built infrastructure to take advantage of this, allowing the collection of image and video data from this fleet, as well as building big data infrastructure in the cloud to process and use it.
I defined and drove the feature roadmap, drove the technical architecture for future features, and managed the implementation for the next exciting features to come.
I advocated for and drove a major rewrite of the deep net architecture in the vision stack, leading to significantly better precision, recall, and inference performance.
I ended up growing the Autopilot Software team by over 50%. I personally interviewed most of the accepted candidates.
I improved internal infrastructure and processes that I cannot go into detail about.
I was closely involved with others in the broader Autopilot program, including future hardware support, legal, homologation, regulatory, marketing, etc.
Overall I learned a lot, worked hard, met a lot of great people, and had a lot of fun. I'm still a firm believer in Tesla, its mission, and the exceptional Autopilot team: I wish them well."
Could someone explain why I should build on a language developed entirely by and for writing Apple ecosystem products? It seems like if I'm not targeting MacOS or iOS directly, the long list of benefits suddenly looks much, much smaller compared to e.g. JVM, .NET, Go, etc etc.
"letʼs start hacking, letʼs start building something, letʼs see where it goes pulling on the string" feels scarily accurate, and it's unclear where the language will be in 5 years.
Among other things, there's no way to disable Objective-C interop, even though it complicates the language and feels like someone merged Smalltalk, C++, and ML—not a pretty combination. But—literally the only reason you'd enable that would be to work with Cocoa/UIKit.
I'm still out on ARC—it was much less of a problem than I expected on my last project, but it never feels like an optimal solution, and you can never just "forget about it for the first draft" the way you can a VM's GC.
> a language developed entirely by and for writing Apple ecosystem products
So, apparently you didn't even read the article, as it is explicitly stated that this was not the intention or direction of Swift.
> Among other things, there's no way to disable objective-c interop, even though it complicates the language and feels like someone merged smalltalk, C++, and ML—not a pretty combination. But—literally the only reason you'd enable that would be to work with Cocoa/UIKit.
Swift on Linux does not use any of the ObjC runtime features that are used on Apple platforms.
> So, apparently you didn't even read the article, as it is explicitly stated that this was not the intention or direction of Swift.
It might actually help if there were a real commitment in that direction. The issue is that it was IBM that mostly pushed for changes in Foundation, and without their initial BlueSocket support, even the most basic tasks did not succeed.
Let alone the non-existent Windows support. It may not have been Chris's intention, but the intention of a now ex-employee does not mean a lot when the company determines the direction after his departure.
While I agree that the state of the Foundation frameworks should be better, I would not go as far as saying Apple is disinterested. It's just a lower priority. Also, seeing how Swift has evolved, the community has a very large impact on the direction Swift is taking.
Believe it or not, this compiler option is named `-disable-objc-interop`.
> Could someone explain why I should build on a language developed entirely by and for writing Apple ecosystem products?
Possibly because you have an affinity for value types, performance, or safety. A language is a lot more than just a checkbox of platforms it supports, although iOS is a pretty large checkbox right now.
> the long list of benefits suddenly looks much, much smaller compared to e.g. JVM, .NET, Go, etc etc.
Swift isn't trying to compete with any of those. I mean, sure, in the "world domination 10 year plan" sense, but for the foreseeable future the bullets that make Java attractive to enterprises (lots of developers, lots of libraries, lots of platforms) are not on anyone's todo list in the Swift community.
Rather, the short-term goal is to compete with C/C++/Rust. So you are writing a web server (e.g. an nginx alternative, not a webapp) or a TLS stack or an h264 decoder and buffer overflows on the internet sound scary; you are doing pro audio plugins where a 10ms playback buffer is the difference between "works" and "the audio pops"; you need to write an array out to the network in a single pass to place your buy order before the trader across the street from you, but still have a reasonably productive way to iterate your trading algorithm because Trump was elected; etc.
As far as JVM/.NET goes, a cardinal rule of software is that it bloats over time. So JVM/.NET/Go can never "scale down" to the kinds of things C/C++ developers do, but it is less known whether a low-level language can "bloat up" to do what .NET developers do. In fact, C++ kinda does "bloat up", because C++ .NET exists. But that is basically an accident, because C++ was not designed in the 80s with .NET developers in mind, and perhaps for that reason it is not the most popular .NET language. To the extent we have a plan, the plan with Swift is to try that "on purpose this time" and see if it works better when we're designing it to do that rather than grabbing a round peg off the shelf and hammering it into our square hole. It probably won't ever be as good at .NET problems as .NET, but perhaps it can get close, for useful values of close.
> you can never just "forget about it for the first draft" the way you can a VM's GC.
Similarly, ARC does not exist to compete with your VM on ease-of-use, it competes with malloc/free on ease of use (and your VM on performance). If your VM is performant enough (or you can afford the hardware to make it so), great, but that just isn't the case for many programming domains, and that's the issue we're addressing.
There is also a quasi-non-performance aspect to ARC that is often overlooked: deterministic deallocation. Most VM memory models are unbounded in that resources never have to be deallocated, but in a system like ARC we have fairly tight guarantees on when deallocation will take place. So if your objects have handles to finite resources in some way (think like open file handles, sockets, something to clean up when they blow away) the natural Swift solution will be much more conservative with the resource use relative to the natural JVM solution. Because of that it may be more useful to think of ARC as a general-purpose resource minimization scheme (where memory is merely one kind of resource) rather than as a memory model or GC alternative itself.
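A small sketch of that point (the wrapper type is illustrative): with ARC, `deinit` runs as soon as the last strong reference disappears, so the scarce resource is released at a predictable point.

```swift
import Foundation

// Illustrative wrapper around a scarce resource (a file descriptor).
final class ManagedFile {
    private let handle: FileHandle

    init?(path: String) {
        guard let h = FileHandle(forReadingAtPath: path) else { return nil }
        handle = h
    }

    func readAll() -> Data { return handle.readDataToEndOfFile() }

    deinit {
        handle.closeFile()   // runs deterministically under ARC
        print("file closed")
    }
}

if let file = ManagedFile(path: "/etc/hosts") {
    _ = file.readAll()
}   // `file` goes out of scope here, so the descriptor is already closed
    // by the next line -- no finalizer queue, no GC pause to wait for.
print("after scope")
```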
> There is also a quasi-non-performance aspect to ARC that is often overlooked: deterministic deallocation.
Assuming there are no pauses due to deletion of deeply nested data structures, or worse, stack overflows.
Herb Sutter gave a very interesting presentation at CppCon 2016 about these issues, where he presents a kind of C++ library-based GC to work around them.
Also, ARC has a performance impact, because increments/decrements need to be synchronized due to threaded code.
Couldn't agree more