For the interested (and with the caveat that I definitely would not suggest it for the Khan Academy's purposes), CoffeeScript does try to address all of the issues that John raises in his post.
Type Coercion: There is no `==` in CoffeeScript. You usually write `if x is y` in order to be particularly clear, but if you're in the mode of most other scripting languages, and you write `if x == y`, it will compile to `if (x === y) {` in JavaScript.
John's note about `x == null` being the only useful application of double equals is quite true, and something that CoffeeScript provides via the existential operator: `if x?`
Falsy Values: The existential operator helps you ask the question "Does this value exist?" (Is this value neither null nor undefined?) ... which covers many of the use cases for having saner falsy values in JavaScript. For example, instead of JS's `if (string) {` ... where the string may be the empty string, you have `if string?`
Function Declarations: JavaScript having function declarations, function expressions, and named function expressions as three functionally different things is indeed a wart on the language. Especially so because JavaScript having a single type of function is one of the beautiful aspects that shines in comparison to languages like Ruby, where you have methods, blocks, procs, lambdas, and unbound methods -- all of which behave in slightly different ways. CoffeeScript only provides JS's function expressions.
Block Scope: This is a tricky one, because unfortunately it can't be emulated in a performant way in JavaScript. So all that CoffeeScript can provide is the "do" keyword, which immediately invokes the following function, forwarding arguments. So, if a regular "for" loop looks like this:
for item, index in list
  ...
A faux-block-scoped "for" loop would look like this:
I don't understand the (anti) hype about type coercion and the '==' operator. I get a faintly cargo-culty vibe from people when they talk about '===' vs '==', since the examples are usually very strange things that I never see in production. I'm not an expert in JavaScript, but I've written a fair amount, including a couple of largish programs, and I've never been bitten by a type-coercion bug involving '=='.
The weird subtle bugs I see are usually associated with me using "for (i in a)" for arrays instead of the more ugly C-style for loop, which results in "i" being a string (which usually doesn't matter, except when using "+", which is sometimes addition and sometimes string concatenation).
So, is '==' really that bad? Has anyone seen (otherwise) well-written production code fail because of '==' vs '==='?
I spent a lot of time trying to track down a bug in a system once (in PHP) that was caused by this. I don't remember the exact problem, but basically it did something like "if(x == 5)", which is true for both 5 and "5" - but later in the code it was important that the value was actually 5 and not "5".
But I think the main thing is that for people beginning programming it's better to have less flexibility. In most production systems it might not matter that 5=="5", but it's important for beginners to understand how different types work. And it helps give a better insight into how if statements, and boolean logic in general, work.
I agree - in fact the problem was solved by removing some unnecessary type-checking code.
The point I was trying to make is that it's easier to learn the basics of types and logic in a strict system. Implicit type conversion and polymorphism can easily result in misunderstandings when you've just started learning to program.
Again. These are classes for _beginners_. I'm not suggesting that you should never learn these things. They should also learn functional programming, estimating how algorithms perform, concurrent programming, binary trees, and a few hundred other things.
But Programming is not something you master overnight.
You need to understand that 2+2=4 before you can understand that 2+i^2=1.
You are correct, but in that case it's not the question of what is easier, but what is necessary. "Easier" implies that there is a choice between comparable options, not that one is the prerequisite for the other.
Yes, but using === would have identified such bugs by making the test fail, rather than running the code for "is true" and getting odd results. The conditional code simply not running will often make the bug (one of the values not being of the expected type) easier to trace than a calculation resulting in 51 instead of 6 ("5"+1 rather than 5+1) at some point later.
You can fix such a bug in two ways: "manually" coerce the values when used, or make sure they get converted to the right type as early as possible. I find the latter, along with using === except where you really must have a more flexible comparison, to be the better approach. And if I mistype and skip an =, I end up with = instead of == (which would cause a different class of bug!)
CoffeeScript is, in effect, a subset of JavaScript's semantics bound to an alternate syntax. John's suggestion is an enforced subset of JavaScript, with the advantages that (some) JavaScript in the wild is understandable; error messages are in terms of the source language (very helpful, more so for beginners); and it yields a skill with greater marketability (so far).
Why not use the same subset as CoffeeScript? In some cases, this simply forbids some constructs; in other cases it allows a construct, but only in certain contexts.
A mathematically precise but implementation-unfriendly definition is: only allow the subset of JavaScript that is the compilation of some CoffeeScript program.
These are all reasons why I've adopted CoffeeScript over JS. Sure, you can accomplish the same exact things in JS, but they are far more natural with CS and with far less typing. Less typing is less chance for mistakes and bugs and makes an easier learning experience.
One other line item is the encapsulation of classes in CS. Doing that in CS is natural and easy to explain to a first year programmer. Accomplishing the same thing in JS is far more boilerplate and much harder to explain why you have to fight the language to accomplish something as simple as a class.
Sure, you probably shouldn't learn CS without at least a general understanding of JS, but I definitely think that CS is the next logical step for the second semester.
Seeing as how one of the arguments for teaching JavaScript first is that it lets you teach about prototypical inheritance before the students get used to class-based inheritance, the complexity of simulating a class in JavaScript is probably not the best objection to using it instead of CoffeeScript. If you teach them CoffeeScript, then you will teach them class-based inheritance just like everywhere else, and miss out on the prototype paradigm.
I'm ok with missing out on the prototype paradigm. To me, it is a wart for JavaScript. There has been a ton of discussion on why libraries like DateJS, which pollute the Date prototype, are bad.
The discussion you're linking to doesn't appear to discuss either general problems with the prototype paradigm or specific problems with adding properties/methods to the Date prototype.
The closest it seems to come is a warning about not trying to modify the prototype of an object through its __proto__ property (or, I suppose, .getPrototypeOf), which isn't the same thing.
This is a common misconception. CoffeeScript's "class" keyword is just JavaScript prototypal inheritance. They work and are used in precisely the same way. There's nothing here to "miss".
As a matter of fact, while I was reading the post the first thing that came to mind was how much the recommendations reminded me of the CoffeeScript introductory page...
> Block Scope: This is a tricky one, because unfortunately it can't be emulated in a performant way in JavaScript.
This is not true. ClojureScript does this and it's plenty efficient. What you lose is mutability of the loop local from inside a closed-over function. Given how many people get burned by the JavaScript behavior, block scope is clearly expected. I doubt anyone would miss the dubious ability to mutate a loop local via callback. Certainly not as much as I miss control over scope & names in CoffeeScript.
It is probably the most controversial feature. Most people either hate it or love it. I think it's hated mostly by those who don't use Python, and well-loved by those who do.
It was not the first programming language to use significant whitespace, but it is probably the most mainstream one that does, and it did pave the way for other languages to do the same. So it makes sense (IMHO) that sometimes other languages copy this feature, because it makes code shorter and more readable.
I haven't really heard any good arguments against significant whitespace, just the usual "but when I copy & paste a block the indentation might be off" (so use the indent/dedent function in your editor) and "I cannot mix tabs and spaces" (which you shouldn't do to begin with). The "hate" is mostly because people are not used to it. Not unlike Lisp's parentheses...
Speak for yourself. I can tolerate Python's whitespace standards, but I - for one - damn sure don't "love" anything about whitespace being significant.
Assume you've never programmed before. Would you want to debug a program that has a significant whitespace bug? It's much easier for the average person to spot an unbalanced parenthesis than an incorrectly sized whitespace.
Getting stuck and being frustrated hurts motivation a lot.
Now curly brackets and syntax may be seen by a person as an irritation or inconvenience compared to whitespace, but I'd say it's better to use a language with an inconvenience than one which is more likely to leave you stuck.
You, I, and others used to programming with insignificant whitespace have developed an eye for detail and can rapidly spot a whitespace bug. The general population with no programming experience can't consistently space most of the documents in their life, but they can typically place periods correctly.
""" It's much easier for the average person to spot an unbalanced parenthesis that an incorrectly sized whitespace."""
Is it? Assuming you use a fixed-width font, any incorrect/inconsistent indentation should stand out like a sore thumb. More so than counting { }s in nested blocks. (Of course, decent editors help with both cases, so I think it's essentially a non-issue.)
"""Assume you've never programmed before. Would you want to debug a program that has a significant whitespace bug? It's much easier for the average person to spot an unbalanced parenthesis that an incorrectly sized whitespace."""
I'd argue that for a beginner it's equally difficult.
And how about this equally difficult to spot classic bug:
if (f==x)
    do_something()
    and_also_do_this() //oops!
A language with significant whitespace does not permit it. It gets people using the correct indentation levels from the start, which they should have used anyway in any language, even one where whitespace doesn't matter.
I propose we don't talk about languages with "significant whitespace", but instead about "languages with proper indentation enforced".
> I propose we don't talk about languages with "significant whitespace", but instead about "languages with proper indentation enforced".
Yes, 'significant whitespace' is too sweeping a term to be useful. I've heard people complain about Python's 'significant whitespace' until they realize that it uses basically the exact same whitespace rules as Java does, except with sensible indentation rules instead of braces.
Why is this hard to catch? Format your code and the incorrect indentation goes away. A beginner should be taught to run the formatter to keep the code neat.
Yeah. Because only kids use and love Python and like the significant whitespace, right? And Haskell with its significant whitespace --yeah, that's another language for "kids who don't know better"...
Anyway --you're a Smalltalk guy. How do you get to talk about weird syntax people don't like? Do many people like interleaving-argument selectors?
You're mistaken if you don't think a shit ton of people dislike Python exactly because of significant white space as well as it being the source of a great many bugs.
As for Smalltalk, yes, they do like them, since faking them by passing literal hashes is a common pattern.
"""You're mistaken if you don't think a shit ton of people dislike Python exactly because of significant white space"""
Those people don't use Python anyway. There are a shit ton of people disliking anything you can think of. I don't know of many people that tried actually USING Python and still dislike the use of whitespace.
I also don't think the "whitespace haters" are intelligent about it or have any basis. For one, you should follow EXACTLY the same whitespace conventions in any bracketed language, even C. That is: you should properly indent blocks, and you should not mix spaces and tabs. If someone does not do that, I don't really think many people would like to work with his code. And if he already does that, then Python's whitespace is not any different, it just removes the brackets.
"""as well as it being the source of a great many bugs."""
I cannot think of ANY bug that can be caused by Python's use of significant whitespace. On the contrary, it prevents a lot of common bugs in bracketed languages (if/else nesting, etc).
The only thing Python's use of whitespace can cause is compiler error messages (when you have messed up the indentation) -- which is totally different from "bugs".
So, what bugs are you talking about? Any specific example?
"""As for Smalltalk, yes, they do like them, since faking them by passing literal hashes is a common pattern."""
Literal hashes are more like named arguments than Smalltalk selectors. For one, elements inside them are optional...
The issues around significant whitespace are well known and well blogged about; I'm not going to re-argue them here. Google it if you must, it's an old argument. If it doesn't bother you, so be it, but you're not going to sell me on it because it's crap; I can't stand Python.
If you don't think it has ever caused bugs, then you aren't thinking very hard or you've not looked into the opposing point of view at all. In any case, I'm done discussing it with you.
"""If it doesn't bother you, so be it, but you're not going to sell me on it because it's crap; I can't stand Python."""
Yeah, I thought that was your position on the issue: i.e. just blind prejudice.
"""If you don't think it has ever caused bugs, then you aren't thinking very hard or you've not looked into the opposing point of view at all. In any case, I'm done discussing it with you."""
You invoked some mysterious bugs due to Python's whitespace rules, I asked for a specific example, but conveniently "you are done discussing it".
I told you to google it because it's common, don't be an ass. Obviously I'm talking about the difficulty of sharing code and cut and paste issues between developers who have different ideas about tabs vs spaces.
It's not blind prejudice; significant whitespace is a valid reason to dislike a language -- that's anything but blind. That is but one of several reasons I don't like Python, and none of them are blind. Ruby is my scripting language of choice when Bash isn't sufficient.
What, no disclaimer that you are actually the creator of coffeescript?? Your comment seems a bit biased to me. But I'd expect nothing less from a coffeescripter.
So you think it's better for someone who has no experience coding to have to understand why they would write some code one way, then have to "transpile" to something else to get it to run? Then if something goes wrong they have to figure out how to fix it in two languages? This doesn't sound like something I would recommend for a beginner, or someone experienced for that matter.
No, he specifically said at the start that he doesn't recommend it to Khan Academy.
To me, it reads as if he is simply supporting John Resig's decisions about what parts of JavaScript to teach -- by virtue of having made those same decisions in designing CoffeeScript.
One might say the same about the English language, yet it is taught in classrooms all across the United States, despite the fact that the U.S. has no codified national language. English is the de facto national language, but nothing more.
So why Javascript as a first language? Probably the best argument is that it has the widest reach of any programming language in use today because Javascript will run in any web browser on any platform. It is the closest thing to a universal programming language you'll find today.
If widest reach is your criteria, Java and C would also be equally good choices. Unless the web browser is special in some particular way that makes it 'more worthy' than other execution environments?
Java is an extremely confusing language for newcomers. While it's "simple" in comparison to languages like C++ and Haskell, the perceived inconsistencies (do I use == or equals()? length, length(), or size()?) really drive newcomers--and some veterans--up a wall. Throw that in with the insanity of generics, the "everything must be within a class" concept, the mess between objects and primitive types (ints AREN'T Integers!), the verbosity (and inconsistency in naming things), and the lazy-ass exceptions that only vaguely sometimes give you a general idea of where things go wrong, and it's horrible for new programmers.
Javascript is relatively strange, but it's much easier for me to teach JS to somebody than Java. I find it much more easy to explain how and why things work in javascript, whereas when someone asks me why something in Java works this way, I respond with an "I have no clue--they just decided it'd be a good idea and I think it's terrible."
Java isn't really simple in comparison to Haskell--Java is full of warts and inconsistencies while Haskell is simple, consistent and elegant. The reason Haskell is viewed as hard is that it takes basically nothing from languages you already know. If you don't know any languages, it's actually a great starting point.
> The reason Haskell is viewed as hard is that it takes basically nothing from languages you already know. If you don't know any languages, it's actually a great starting point.
At my undergrad program, Haskell was used for the second CS course after the basic intro course. It was a nightmare and drove people away from the major in droves. They just did a curriculum overhaul and one of the changes the CS faculty are most excited about is moving the Haskell course out of the spot where it crushed the spirits of potential majors.
Had a similar experience, except we focused on using SML. We went from 15 people down to 3 in the class by the end. I think the tough part is that most of us had zero experience with functional concepts, while the professor couldn't seem to grasp that everything that made complete sense to him, didn't make sense to us.
There's a lot of potential for Khan Academy to make significant impact with this -- I'm not entirely sure JavaScript would be my first choice, but it does present a lower barrier of entry that is ubiquitous and allows people to even mess around straight in their browsers. With JS, you can show people a little bit of both worlds at a time and compare imperative/functional examples (whereas in Haskell, you're stuck with functional, unless you really want to mangle things). It's also a bit more exciting to be able to throw up a little JS creation on the web so that everyone can see, as opposed to writing a program in C and not really being able to show it to people easily without having them run some executables or bringing them to your computer.
One might argue that people who can't make it through Haskell or other "tough languages" shouldn't bother being CS majors -- but we need all kinds of people in the industry and people have many differing interests. There's blue collar work and white collar work even within software engineering.
The people who knew they were going to be CS majors before they got to college didn't have their spirits crushed. When I TA'd the haskell-based course, they comprised about 25-30% of the students in the class.
The students who were trying out the field but weren't born computer scientists were the ones who were miserable. It was their second course/experience with computer science ever and dealing with Haskell was extremely challenging for them. Being an internet tough guy about it doesn't help them embrace the field, it turns them off more.
Java isn't a language that promotes "play" like Javascript does, because it's been engineered to prevent you from shooting yourself in the foot. Play helps make learning more fun.
C would be a good choice, except for the fact that memory management and pointers are very important concepts early on in the language that you really don't want to bog down people who are completely new to programming. I remember a blog post by Spolsky about how you lose a lot of people on that one concept, especially if it is poorly taught.
The claim that everyone has a profiler, debugger and REPL is blatantly false. Only a subset of commonly available desktop browsers include those features, and out of those, only recent versions do. Mobile web browsers don't include any development or debugging tools.
The cost of installing Visual Studio or Eclipse is pretty small considering that you only have to do it once. Compare that with the cost a newbie programmer pays every time they use the comparison operator or the arithmetic operator and they now have to consult their reference text to figure out what the operator is going to do and whether they're using the right operator.
If your goal is to lead someone into a lifetime of programming and help them enjoy it, choosing a language in order to avoid a pesky 15 minute setup cost for a compiler is tremendously short-sighted.
> Mobile web browsers don't include any development or debugging tools.
LOL. Great point. /sarcasm
> The cost of installing Visual Studio or Eclipse is pretty small considering that you only have to do it once.
Boom. That's all it takes to create a barrier of entry. Not everyone has permissions to install whatever they want on the computer they're using.
Then you get into issues with multiple operating systems. Windows does it one way and Linux and OSX does it another way. Also, the tools you've mentioned are always changing. Will it be the same in a year or two?
And for what? So you don't confuse an extra set of comparison operators? Javascript is not that bad.
> Boom. That's all it takes to create a barrier of entry. Not everyone has permissions to install whatever they want on the computer they're using.
> Then you get into issues with multiple operating systems. Windows does it one way and Linux and OSX does it another way. Also, the tools you've mentioned are always changing. Will it be the same in a year or two?
But won't they need to install an editor anyways? Or are they just going to use notepad? That will be a horrible experience.
And JS is changing too. None of the languages change so drastically that it's not just something simple to incrementally learn. And I think getting students used to the fact that things change isn't a bad thing. But also let them know that typically backwards compat isn't broken with these changes.
With that said I'm a big fan of JS as a first language. I'd probably spend the first day with them using notepad or vi (or whatever is natively installed on the system) -- and then have them install some nice IDE of my choosing.
I'm all for vim, but you're mad if you think first-time coders should pick that up before even writing their first piece of code. Talk about barriers to entry and unintuitive behaviors...
I think the most viable environments for absolute beginners will be online ones that include an REPL and editor, using either JS or something that compiles to JS or to anything else on the backend, so even browser choice is irrelevant. Then you don't have to teach how to load a JS file into a document and so on.
Khan Academy is a training/teaching company. I don't think it's a leap to assume they will be providing online tools to write code with (eg http://c9.io/).
I would find it hard to believe that even a significant minority of users who would have any remote or even fleeting interest in programming anything would have a mobile web browser as their only choice. You'd be hard pressed to find someone who has a smartphone or tablet but not a computer.
I struggle with this mindset. It probably makes sense for Khan Academy, but this industry (software development) needs fewer people who refuse to learn anything unless all obstacles are removed and the information is spoon fed.
I find your statement to be amusing considering a lot of the anti-JS comments on this thread are about how unwieldy it is as a language. Of course, your critique and those complaints are two different things. You are criticizing that people are too coddled about having an accessible environment to program in. The complaints are that JS is too unfriendly and muddled, and so does not coddle beginners enough!
Installing the required tools to use a programming language is a very different kind of challenge than learning the ins and outs of a programming language with lots of gotchas.
When you work as a software engineer, you're expected to be able to install and operate just about any programming environment out there, so you might as well get used to it. Installing any of the big popular programming environments is pretty trivial, Eclipse, Visual studio, Racket scheme, Ubuntu and even the Haskell platform all come with a graphical installer and a comprehensive manual. Learning a new skill is not needed.
Now learning JavaScript wtf's is something entirely different. Stumbling upon one of these issues mentioned in the article while trying to learn to code is very difficult and disheartening.
Once someone has decided that they absolutely want to learn everything there is to know about programming and have decided that they love it, yes, there's no point in spoon feeding. But so many of us stumble into what later ends up being our career or passion, and so our first encounter with any type of activity is almost always "hey, let's see if this is something for me." As a society, we need more programmers and scientists and mathematicians, so best make sure that we make that first impression a good one.
Right, the technical stack has become daunting to the point that many cannot get the feedback so needed early on. Nowadays you either advance with the technology, or you spend a significant investment of time to learn the trade with the intent of entering the field. The days of the hobbyist programmer have seen their peak until that changes.
This is an eminently solvable problem. There are excellent open source IDEs and toolchains available that can be repackaged in any way, shape, or form.
Double click an installer, and boom - everything's set up.
The fact that something is popular doesn't make it necessarily good, so your best argument is no argument at all. Same as with English - no connection whatsoever to a programming language, it's a bad parallel.
I think the argument is more along the lines of "javascript is ubiquitous, therefore learning javascript is likely to be useful (despite drawbacks created by its idiosyncrasies)."
I suspect Mr. Resig is familiar with the potential gotchas, and maybe even has his reasons for believing it's still worthwhile.
For my own part, as someone who regularly taught high school seniors (and other non-programmers) various programming languages (Pascal, Perl, Prolog, JavaScript, and Java) over a good chunk of the 1990s, I have to say that JavaScript worked out pretty well, possibly even the best of the bunch depending on what your goals were.
Prolog had some real strengths. The combination of accessibility for some simple applications with the power and conceptual depth of the logic paradigm tended to put more or less smart people with no previous experience on the same footing with those with some experience.
But JavaScript seemed to be the best of the bunch in terms of balancing accessibility, speed from which students could get to doing something with everyday practical use, and available depth.
And despite the fact that those of us who were teaching were just learning and coming to grips with some of the "bad parts" ourselves (because how many people really knew JavaScript in the 1990s, right?), everybody got stuff done anyway.
I'm a little down on things like Logo and Karel, partly because I prefer things that are more general, and partly because even in the niche I might use them, my judgment is that Scratch is better. :)
I didn't try Python myself until late 2001, and by then, I was already more or less out of education, so I didn't get to try it out on anybody else.
But if I were guessing, I think the language itself would probably work pretty well for most students, possibly somewhat better than JS. Not sure how big the advantage would be, though (like I said, we really didn't struggle with JS's warts much), and I think the broad target for JavaScript's use gives it something of an advantage too.
This would vary by domain, though. If I were trying to teach a number-crunching focused class, for example, Python would probably move way up my list.
Whatever, Javascript is the easiest language to get started on. No installing a complicated IDE. No configuring a server. Nothing is required except for a browser!
If anything, it's the best language for such purposes: getting started.
I don't think Javascript's availability in the browser should be a major driving concern. Browser-based REPLs and editors are getting better and better. Two examples: http://repl.it/ and http://cloud9ide.com/.
Every language has its gotchas. Even assignment operators are kind of a gotcha.
If you're coming from a basic math background, and have no programming experience, then the expression "a = 15; a = a + 1;" would be confusing in any mutable language, because assigning a value to a variable is not equivalent to stating the equality (identity) of both sides.
You're right in that even Scheme, which evolved with teaching in mind, has gotchas. However, the Racket folks made "student languages" that specifically exclude gotchas, so DrRacket is as close as a student can get to a gotcha-free environment.
Also, as stated by John, JSLint will be used. That by itself avoids the vast majority of JS's gotchas. And IMO, C++ has way more gotchas than JavaScript, and that doesn't stop universities from teaching it as a first language.
In fact, I think the first language is not just about giving a strong foundation but also about giving students the chance to find a good internship. So, with that logic, even though I enjoy hacking with Scheme, I'd feel bad teaching it to students who will try to find an internship and write "Scheme" on their resumes while other universities' students would have "C++/Java". And, please, before you argue that a language can be learned or something else, keep in mind how companies recruit and parse resumes: they often just search for a keyword. Now, one might say that you don't want to work for a company that does that, but then, you have to start somewhere in your first internship.
Thus, I like JavaScript as a first language; not because it's the easiest to learn but because I believe it gives a strong foundation but also gives students the ability to find a good internship.
Well, I wanted to know what IDE JSLint was an extension for, so I did some poking around... it's an online tool. Meh #1.
Next up I thought I would take a snippet from the linked stackoverflow discussion and see what JSLint made of it:
127.0.0.1.toInt();
And... it complained
So I did this:
var = function() {
127.0.0.1.toInt();
}
And it complained (yes, there's a typo)
var foo = function() {
127.0.0.1.toInt();
}
And... it 'compiles' fine, no complaints. So I thought I'd push the syntax tree button to see what was going on and it tells me:
Tree:
""
Which is less than spectacularly useful.
Oh, I went back and hit the JSLint button again and now it gives me lots of errors for the above code.
Including, but not limited to complaining that 127 occurs in column 1 instead of column 9. But then I try to enter a tab at the start of the line to achieve proper indentation... and of course it doesn't work because it is in the browser and so tab just goes and selects the next control.
Sensational.
If I had to enter all my indentation as spaces, by hand, I'd conclude that the Professor teaching this was a colossal idiot.
By "readily available at the command line" - have you forgotten that the context of this discussion is for a language to use for a first year programming course?
A not insignificant portion of the people taking those courses will not be command line savvy.
Also, I think that the fascination some programmers have for tools that are cumbersome and difficult to use is unhealthy.
The tool should not itself be the problem.
The tool should be a force multiplier that lets you solve other problems more easily.
Well, whatever language you want to use, you'll have some minor installation/configuration to do. For real beginners, the teachers can do it for them. So if they use Eclipse, for instance, the IDE can be configured to automatically use JSLint.
By "command line", the OP meant that it's easy to plug it in wherever you want; i.e., it is not strictly an online tool.
About:
"fascination some programmers have for tools that are cumbersome and difficult to use is unhealthy"
Not sure what you're thinking of. That's more an opinion than a fact: some programmers like Emacs better than Vim, some prefer Eclipse, some prefer Notepad++. "Cumbersome and difficult" mostly depends on your taste and your background.
Is 'grep' hard to use? What about ctrl+shift+f in Visual Studio? Which one is the most cumbersome and difficult to use? Highly subjective imo.
Common web IDEs all embed JSLint error reporting in the editor window while you're typing. I've used Aptana, Zend Studio, and JetBrains PhpStorm, and they all integrate JSLint (though it's usually disabled by default). I've found PhpStorm's check-while-you-type functionality is the most comprehensive, better than JSLint.
I have to agree. BASIC is a far better option. It will obviously never be used for anything, but it will teach the student basic concepts like output, input, variables, etc.
These are not simple concepts for somebody starting from zero!
In all honestly, I started out learning BASIC on Apple IIe's, and have managed to become an okay programmer. I would support it as a learning language.
That said, I don't know that BASIC's syntax is that much easier for beginners than javascript. I mean, there's no braces or semicolons, but a lot of the control structures are basically the same.
Dijkstra could be an asshat sometimes. BASIC is great because it teaches you that a computer can be made to do neat things by following a series of tiny instructions. It gives you that without you having to understand lexical scope, which is, I think, pretty hard to grasp for new users.
Sure, every user will outgrow it, but that's fine. Tricycles are a shitty way to drive to work, but that doesn't mean you should just put your toddler behind the wheel of your car and let him go.
The problem is that deeply technical people will put forth the effort to learn deeper languages, which leaves the majority of candidates as people who want to learn programming because they want to scratch an itch. BASIC is usually not the language or platform they want to scratch that itch on; more and more, the web and mobile are where they want to deliver. That makes JavaScript the better investment of time, since it directly parallels their goals in learning programming in the first place.
More and more I am starting to think Objective-C and iOS may be a contender. While Objective-C is technically more difficult to get started in, the technology stack is a lot simpler: to be effective on the web, one needs to know HTML, CSS, and JavaScript, and then really and truly a back-end language or Node.js to deliver a back end, whereas Objective-C and iOS is a visual editor and code. The feedback loop is faster for iOS development than JavaScript development, and I believe the feedback loop is very important to a certain population of would-be developers. I have never tried to teach a non-programmer Objective-C and iOS development, so the above statement is loaded with assumptions. Please read it as such, but I am leaning towards trying it next time someone approaches me and wants to learn.
Whenever I see a post about what to use as a first language I always think about how I learned, and am learning, how to program. Warning: long-ish story/rant ahead.
My first class in high school was the basic Java 101. The teacher did not know anything more than how to draw UML diagrams and tell us to use for loops. He had been teaching the class for years and had no industry experience. I imagine 10 years ago he figured out all the questions he was going to ask and learned just enough to explain the really difficult ones. He was not a zen master LISP programmer who had glided down from the plane of Forms to enlighten us.
But I started to learn programming because of him. He knew what he knew and he had his presentation down. I didn't know how to program so anything was better than what I was going with. He explained terms and ideas in the most stereotypical of ways. But they made sense. In a way, he passed off his framework of knowledge to those who cared to take it. It was incredibly mind expanding at the time.
And so I had this little ball of specific domain knowledge. I wanted to be able to do cool stuff on my website, though. Java wasn't quite there yet, couldn't do all the things I wanted it to do. So I picked up Javascript from W3 schools online. It was terrible, awful Javascript that I have purposefully forgotten. But I could make my pages interactive, I could impress my friends. My little ball of know-how was growing and expanding. I kept at it and started seeing how Java "sucked" and why Javascript was the One True Language. New ideas like functions as variables, easy-to-run scripts, interactive webpages, they all got added on and replaced old, less useful ideas from Java.
Pretty soon after that I started actively learning Mathematica in college. The whole IDE is just lisp with some mathematical lipstick, but I didn't know that at the time. I started thinking of how to solve problems with lists, how to use IDE's, and reading and writing to files so I wouldn't have to redo everything each time. I had programs that wouldn't run in less than a second which made me really aware of what I was writing and how to optimize things.
And the story just keeps going and going on like that. Common lisp with Emacs last winter because of PG. More Mathematica and jQuery during last spring and summer because of an Internship. Node, Haskell, Clojure, Python during this past fall for enlightenment. And now R and, maybe, Ruby/Rails during the winter because I have ideas I want to make happen. I'm by no means a professional, but I know enough to shoot the shit with the CS majors and randomly drop into tech meetups (What up Cinci.rb!)
All that sprung from a high school class taught by a standard teacher on a shitty language that I don't ever want to touch again. And so when people talk about first languages and why X will be an absolutely horrible language to use to learn, I zone out. Every language is going to be a horrible first language. All of them. I don't care what arguments you make about js gotchas or Python's batteries included or why scheme is more beautifuller than common. If you don't know how to program, learning how to program is going to suck. Period. Maybe learning C++ first isn't the hottest idea, but at some point you have to grit your teeth and go "Programming is hard."
So I applaud Resig's and Khan Academy's use of javascript. Not because I love javascript, but because I know they are going to do an amazing job presenting the tiny balls of programming knowledge to kids across the world. They could have picked most any mainstream language to present these ideas with and I would still be excited by their ideas. All that matters is how they plant the seeds. If they can stir up a kid's desire enough to get over the "ARRAYS SUCK. FOR LOOPS SUCK. WHY WON'T MY PROGRAM COMPILEEEE." and learn the next language or build the next project, then they have done an amazing job and good for the world.
tl;dr Programming is hard, every language has its own use, waffling about the choice of languages isn't going to teach you what you need to know any faster.
Yes, yes, a hundred times yes. My first language was Apple BASIC. Looking back now it is a horrible language (no while loops, only 2 character variable names, line numbers with GOTO as the main way of controlling program flow), but it was easy enough to get my hands wet and get the feel for programming. And more importantly, I got stuff done.
I certainly didn't learn any best practices or good programming style from BASIC, but I learned something far more important: That I loved programming.
Ditto - except it was extended basic on a TI 99/4a. Once I wrote a program of my own that actually worked I was hooked. PDP 11 Basic -> Pascal -> C -> Visual Basic -> C# and now Javascript. There was no going back.
Exactly the situation my daughter is experiencing now. Her Java I instructor cannot program her way out of a paper sack and is going at a snail's pace (an entire semester and they have yet to learn decision making or looping). She's so bored she's decided to drop Java II and switch to an online C++ course so she can go at her own pace. She doesn't want to be a programmer, her love is for digital imaging and animation. But she loves mathematics and wants to understand everything related to her art. /proud father is proud/
That's just logical and obvious, since "+" is both numerical addition, and string concatenation, but "-" is only numerical subtraction, and the '5' starts out as a string.
Every language has 'gotchas'. It doesn't really matter which you pick to learn first. The more important thing is that you don't give up. Possibly there are languages that just make people want to give up, but I'd say perhaps they're not motivated enough to learn if that's the case.
I started out on BASIC, and after a while I decided it was a piece of shit language and learnt assembly. But it taught me programming which is what I wanted to learn. I'm really glad I learnt BASIC first... essentially I learnt to swim really fast through syrup, and then switched to swimming in water.
The good thing about javascript as a first language is that people can be programming in it immediately, in their browser. They have a built in REPL to help them, as well as a debugger, profiler, etc. They have numerous docs to look at, and if they go to any website they can check the source to see how it works. That's a big win.
Calling your example logical and obvious is only logical and obvious if you're completely trapped within the JS mindset. It's the same as how people defend the absurd semantics of Visual Basic and PHP. It's fine if you like it, but to claim that it's objectively okay is just not supported by fact.
It's also worth considering that even if every language has 'gotchas', some of them have far worse gotchas than others. It is worthwhile to choose a starting language that teaches the fewest bad habits and the fewest bizarre rules so that people can easily learn new languages.
Your ideas about what is "logical" are completely arbitrary. Look, I can make up rules too:
* '5' - 3 should return '5', since it's the string '5' minus all the instances of the character '3' in it. There is no other logical outcome!
* '5' - 3 should return an empty string, since it's the string '5' with the last three characters removed. There is no other logical outcome!
* '5' - 3 should return '2' -- since we started with a string, the result should turn back into a string. There is no other logical outcome!
* '5' - 3 should return 50, since the only logical way to do math on a character is to take the UTF-8/ASCII value of it and then do the math. There is no other logical outcome!
* '5' - 3 should return undefined, since subtracting from a string typically doesn't produce a reasonable result. There is no other logical outcome!
You have provided absolutely no rational basis for discriminating between the merits of these choices, so your claim that Javascript's choice is one of exactly two "logical" ones is bizarre. If you think Javascript's choice is better, give a reason why it's better, don't just say that it's better.
The time wasted learning the quirky semantics and special rules covering each operator and data type in JavaScript could be better spent learning to reason about algorithms and data structures in a less confusing language. Not all learning is equal, and not all challenges are exactly the same.
The claim that hindering a newbie programmer's attempts to express themselves will make them more effective in a better language is ridiculous. Even if it's true, it's missing the point - your goal when teaching newbie programmers should be to teach them good habits and generally applicable skills, and most importantly, you want them to love programming.
Having to memorize arcane minutiae and spend tons of time debugging problems caused by stupid design decisions is not going to make people love programming. JavaScript is tremendously accessible by virtue of its ubiquity, but that does NOT implicitly make it a good language for learning to program. Its numerous flaws and divergent implementations will drive away beginning programmers that might otherwise learn to love programming if presented with a better environment.
It's logical and obvious if you already know the language; otherwise, it's perverse. I had to read your comment (the initial edit of it) twice to be sure you weren't being sarcastic.
You're looking at the wrong side of it. The subtraction does behave in a perfectly logical manner. The problem is that, given the behavior of the subtraction operator, the addition operator's actions are illogical. Specifically, I'd argue it's perverse because it breaks commutativity:
'5' + 3 - 3 != '5' - 3 + 3
The logical approach would be to only assume that '+' is a string concatenation if both operands are strings and otherwise type coerce into numbers. Then:
'5' + 3 = 8
'5' - 3 = 2
To someone who doesn't code Javascript for a living, the above seems like a far more consistent and useful behavior.
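The broken symmetry is easy to demonstrate (a quick sketch; paste it into any JavaScript console):

```javascript
// '+' concatenates when it sees a string; '-' always coerces to numbers.
console.log('5' + 3 - 3); // 50 — '5' + 3 gives '53', then '-' coerces '53' to 53
console.log('5' - 3 + 3); // 5  — '5' - 3 coerces to 2, then 2 + 3 adds numerically
```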
I'd expect "hello" - 1 to return NaN, since you can't perform numerical subtraction on something that isn't a number. That's exactly what Javascript does.
In the same way, I'd expect "hello" + 1 to return NaN, since you can't perform string concatenation on something that isn't a string, and you can't perform numerical addition on something that isn't a number.
I would expect 'hello' + 1 to throw an exception. I would similarly expect '5' - 3 to throw an exception. Why? Because you can't add a string to an integer, and nor can you subtract an integer from a string. Doing anything else is arbitrary and unpredictable, IMO, leading to subtle type errors. You want to find type errors early as possible, rather than letting bogus values flow through the program.
I think Javascript is broken here, and Python has it right.
That's definitely not obvious. Some people think in patterns or generalizations.
String [binary_operator] Number = String
String [binary_operator] Number = Number
That's just begging for further explanation.
Explaining JavaScript is probably more challenging than, say, Java. Java has many keywords and usages, but when it comes to explaining the concepts behind those keywords/semantics to students, it may be a little less confusing.
Left hand side is a string, and "+" is string concatenation as well as addition. So '5' + 3 = '53' (String concatenation). Just as "Hello" + 1 would equal "Hello1". The right hand side is converted to a string.
Left hand side is a string, but "-" is subtraction for numbers. JS converts the '5' to a number, then subtracts. So '5' - 3 = 2 (Numerical subtraction).
That's the explanation, and it's not terribly hard to get past.
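Concretely, the two rules play out like this (a quick console sketch):

```javascript
console.log('5' + 3);     // '53'    — '+' sees a string operand, so it concatenates
console.log('Hello' + 1); // 'Hello1'
console.log('5' - 3);     // 2       — '-' is numeric-only, so '5' is coerced to 5
console.log('hello' - 1); // NaN     — 'hello' has no numeric value
```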
You picked one of a dozen or more examples from one list of problems with JavaScript. Of course this particular example can be understood by those of us who already have programming experience. JavaScript's type coercion is likely to be a big stumbling block for many beginners and Resig specifically points this out.
I hate this behavior myself, though it's probably because I learned Perl first, where there's a separate string concatenation operator ("." in perl5, "~" in perl6). I do wish javascript had that same separation so that "+" was always addition and not sometimes string concatenation.
I'm a fairly experienced programmer and JS still drives me up the wall. The implicit casting and inconsistent operator overloads have led to many unhappy visits to Stack Overflow to see just WTF my browser is doing. I can't imagine the experience being any better for newbie programmers! Even simple things, like bitwise operators, are basically broken unless you take care to ensure that your variables are coerced to whole numbers, because all numerics are f%!#^&ing floating point numbers...
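For instance, here is a sketch of the coercion being complained about: bitwise operators first run their operands through a signed 32-bit integer conversion, and all numbers are IEEE-754 doubles underneath:

```javascript
console.log(1.7 | 0);        // 1 — the fraction is discarded by the 32-bit coercion
console.log(2147483648 | 0); // -2147483648 — 2^31 wraps around in 32 bits
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true — doubles lose integer precision past 2^53
```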
Given that the language is such a mess, I think it would induce a new programmer to compartmentalize their JS learnings as a bunch of special case hacks instead of discovering broad language principles that are applied consistently across the language.
Python seems to me like an infinitely better choice. Every language has its warts but bringing up a new generation on a language that even the author admits was a quick and dirty hack takes "worse is better" cynicism way too far.
The secret to enjoying Javascript is to stick to the non-brain-damaged subset identified by Crockford in his JavaScript: The Good Parts. For example, if you always use the triple-equals comparison operator, you don't have to worry (as much) about implicit type coercion.
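For example, a few of the `==` coercions that `===` sidesteps:

```javascript
console.log(0 == '');            // true  — '' is coerced to the number 0
console.log(0 === '');           // false — different types, no coercion
console.log(null == undefined);  // true  — the one arguably useful '==' case
console.log(null === undefined); // false
```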
I have argued JS as an intro language for years. My main argument has been not based on language features which I feel are unimportant in forming a young programmers mind but rather in the complete lack of barriers to getting started.
Take, for instance, Python (my personal language). At a minimum you need Python installed on the computer. Then you need to deal with issues such as PATH and PYTHONPATH. Also, you have to understand the package naming and import scheme.
With JS you only need a .html text file on the desktop.
Write code, save, double click, results. It is something that anyone who has even seen a computer can understand.
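A complete first program under this workflow could be nothing more than the following (the file name `hello.html` is just an example):

```html
<!-- hello.html — save to the desktop, double-click, see the result -->
<!DOCTYPE html>
<html>
  <body>
    <script>
      document.write("2 + 2 = " + (2 + 2));
      alert("Hello from JavaScript!");
    </script>
  </body>
</html>
```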
Particularly if the student has not had much experience in computers in general, things like paths, imports, file system knowledge and command line interfaces can be a barrier to learning code. You will eventually need to learn all of the things above to be a programmer but why bore a child or teenager with the details of a file system or command line when you could be showing them how to code animations on a web page.
I will admit, that struggling with run time environment, compilers and class path issues made me into better engineer sooner, but I already had a passion for what I was doing.
In the particular case of python, I see none of the problems you mention. In pretty much all Linux distros, it already comes installed and ready to use (i.e., just execute it, no need to know paths or anything).
The same happens on Windows, once you run the installer; there you can, too, just edit and double click it. All you need is a .py file on the desktop.
Yeah, JavaScript will break their impressionable young minds. But you know what? Programming in general will break their minds. Programming is fucked up from any perspective. If they learn a good language first, they will go crazy trying to figure out why nobody is using it. Might as well teach them a language that lets them get stuff done in the real world, while preparing them for the ugliness ahead. They can learn the good language later.
I support using JavaScript as a first language. If for nothing else there are millions of code snippets just a short "right click -> view source" away. Granted these may not be the most ideal examples, but it enables tinkering. One thing that always frustrated me about the "easy" languages to learn (Python, Ruby, etc) is you still have to figure out how to download something and then the first programs just write out text to a console window. It's hard to see at first how learning one of the "easy" languages translates into building cool stuff.
Having access to such a vast array of samples, plus something like Khan Academy teaching the "right way," is just awesome to me.
Reading this, I'm reminded of the saying "when you have a hammer, everything looks like a nail." It's not surprising to see John Resig leaning towards using Javascript. There is a lot of research into teaching introductory programming; it would be nice to see some of it referenced in making the decision. You know, base it at least in part on science rather than just opinion.
I am so looking forward to a huge army of newly-trained programmers who view prototype-based inheritance as the default and classical inheritance as weird.
Similarly, I look forward to a whole set of various modules and libraries to graft prototype-based inheritance onto existing languages like Ruby...
Unfortunately I think that class-based inheritance is more general in concept. That is, people who understand class-based inheritance have a very small leap to get prototypal inheritance. You can pretty much just say, "Inheritance works on the instances/objects, not on the class."
Explaining class-based inheritance to people who only know prototypal seems trickier. You have to explain the concept of classes and then build from there.
BTW, are there any prototypal languages that support multiple inheritance -- where you can create an object from an arbitrary number of other objects?
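For anyone following along, the "inheritance works on the instances" idea can be sketched with ES5's `Object.create` (the names here are purely illustrative):

```javascript
// An ordinary object, not a class:
var animal = { speak: function () { return this.sound; } };

// rabbit delegates directly to the animal *object* via the prototype chain.
var rabbit = Object.create(animal);
rabbit.sound = "squeak";

console.log(rabbit.speak()); // 'squeak' — found by walking up to animal
```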
That is, people who understand class-based inheritance have a very small leap to get prototypal inheritance.
I used to think that, and then I watched a lot of people try. It's harder than you think.
I used a networked server, used primarily for MUDs, called DGD (Dworkin's Generic Driver) for a while. The most common MUD libraries for it use prototype-based inheritance. I know mine (based on Skotos) supported multiple prototypes. I think Skotos did as well.
It was interesting - it was pretty easy to explain the prototype-based inheritance to non-programmers. It was much more difficult (usually) to explain it to programmers.
I don't think it's as much "class-based inheritance is good preparation and prototype-based inheritance isn't" as you think.
I think it'd be more apt to choose something like Racket. It has all the desirable characteristics the author finds attractive in Javascript. It is an integrated environment and includes libraries and extensions for teaching basic programming. Unlike Javascript it isn't married to the browser (and by extension, the DOM) and doesn't suffer from a variety of syntactical discrepancies. It even comes with a free book for teaching the fundamentals of computer programming and computation.
Update: all of the desirable characteristics except for being a resume-search keyword with a high hit frequency. IMO, learning how to program and getting a job are orthogonal.
tl;dr - they picked JS due to its "ubiquity, desirability in the larger workforce, lack of prior installation requirements, and ability to create something that's easy to share with friends"
I find that explanation disturbing. Why not start from a language that teaches the basics of the common programming paradigms, such as OOP (Java) or FP (Scheme)?
You may have a different perspective, but if I wanted to teach OOP to new programmers, Java would be the last language I would consider, not the first. It’s a fine language in a vocational setting where the objective is to graduate people for employment as Java programmers, of course.
Scheme is elegant and small, and a traditional choice for teaching programming where functions are first-class objects. If you consider Scheme appropriate for FP, my guess is that you’d also consider SmallTalk appropriate for teaching OOP.
Agree with your opinion on Java/OOP. There do seem to be better places to start IMHO.
If you had to pick one language to teach people who had never programmed before and you wanted to be able to demonstrate the basics of OOP, FP and other styles, then Python would be a good choice.
I think Python is actually a pretty bad choice for functional programming. It does ostensibly support it, but in practice I don't think anybody uses anything resembling a purely functional style. For example, I don't think I've ever seen a single fold in the wild in Python, where I see them in Haskell and Scheme all the time.
Could you imagine only introducing mutation halfway through the semester of a Python course? That's what my intro CS course did with Scheme, and it was perfectly natural and effective. Scheme also makes it very easy to see mutation as something different from definition--it has both define and set! which do different things. Python doesn't even differentiate between the two because it has a weird scoping model.
Of course, Scheme itself isn't a perfect functional language either. I found that pattern matching is extremely important--Haskell, mostly thanks to pattern matching, helped me internalize recursion much more than Scheme when I was learning the two concurrently.
Scheme is also a good language to teach about OOP because it divorces the OO part from the language itself. When I had learned about it in Java, I had assumed that an object system was inherent in a language. Scheme showed me that a very expressive OO system could be written on top of a language without the language supporting it in syntax.
Finally, it is very easy to adapt Scheme syntax to other paradigms. Could you reasonably write a logic programming language that looked like Python? I honestly don't know. On the other hand, making one like Scheme is trivial. Not only did we use a language like that in the same intro course, but we actually went over how to implement it.
Ultimately, I think Python is too complicated. Not in terms of being difficult to use, but in terms of just being a ton of different features put together. They may look similar, but they are still discrete ideas that do not naturally come from each other. Scheme, on the other hand, has a few core ideas and everything else is built from those. Just compare the grammars of the two languages to see what I mean.
I agree with a lot of what you say, and I'm not qualified to comment on some of the the other things you say.
My point was that if you were to teach people new to programming just one language then Python might be a good choice because it supports many paradigms.
I agree there are languages that are much more conducive to learning functional programming than Python.
> Why not start from a language that teaches the basics of the common programming paradigms, such as OOP (Java) or FP (Scheme)?
Because that stuff is boring and it drives away a bunch of people who might really get in to programming if there wasn't a huge gap between "just starting" and "doing anything remotely cool".
I for one would not have become a developer if my first experience with programming was a typical "CS101: Let's spend 3 months learning about loops" class, which unfortunately is many students' first introduction. This class alone probably drives away half of the people who were interested enough in programming to take the class in the first place, which I find disturbing.
I spent a bunch of my free time as a kid making (really bad) web pages, (really bad) Javascript, and (really bad) PHP. Did I understand OOP, or how/why it's different than FP? Heck no. It wasn't until years later that I even took a programming course. But by the time I was ready to learn solid programming fundamentals, I'd already been bitten by the programming bug.
If I had to guess, I'd say this class is more about getting people interested in programming than it is intended to teach everything you Should Know. IMO, that's what a beginner course on any heavy material should be anyway.
Then the problem is the course, not the language. Logo and Scheme are both lisp derivatives. Logo let people make things that were cool. There's no reason you couldn't do the same with scheme, but just giving them some higher order functions early on that display graphics. Later on people can decompose those functions.
The beauty of scheme is that it's turtles all the way down. If you want to see how something works, it's easier to explore.
As every PHP programmer knows today, and all of us Applesoft BASIC and Apple Pascal programmers knew when we were beginners, OOP and FP are unnecessary stumbling blocks between a new programmer and a very large class of fun computer programs.
for new programmers, the entry barrier for those languages is significantly greater than javascript. people don't quit learning programming because they're not learning OOP- they quit because it's inaccessible.
Agreed. I also think that the reason a lot of us stuck with programming was because we loved the feeling of making something fun that we could play with. This is arguably a lot easier to do from the start with JavaScript than with Java or Python because one of the main applications of JavaScript is visual manipulation. Generally the first Java or Python programs you make are used from the command line. I think that when someone's starting out with something like programming, you want to put fun before "educational value" (correct OOP design principles) to give them that itch.
Let them quit. Many of us learned to program before the commercial Internet existed and at a time when desktop computers were not found in every home. If someone finds learning to programming to be "inaccessible" now then I rather that they not join our industry.
The ability to create something you can share (or can otherwise readily "put to use") is important. If you lose your audience because they don't get satisfaction early enough, well, they're gone. While programmers certainly want to weigh in on the issue (obviously), the experience of teachers and psychologists suggests that keeping the carrot close is immensely valuable to getting someone to learn something.
I would prefer to err on the side of not using OOP at all for the first year or so. The problems it solves have better solutions if you're not using a terrible language. They also seem extremely contrived when you introduce them long before the students have any chance of producing a program with those problems, so the students frequently end up using OOP all the time when it's not really called for.
Java seems extremely mutilated compared to other languages, why would it seem like a good choice? There are only few concepts you could teach with Java.
I definitely agree with your point about stack traces - I've used var x = function(){} for some projects, but have found in practice that named functions are very much worthwhile when dealing with stack traces.
What's the benefit of var x = function x(){}? Is it just illustration that functions can be assigned to variables?
I have to disagree. Part of the reason is that it teaches the concept of anonymous functions.
The main reason, though, is the anonymous function case makes it more apparent that the function is an object.
Separating the RHS from the LHS is significant. The right side creates the object, the left side binds it to the reference. Much like `var a = 3`.
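The function-as-object point can be made concrete with a small sketch (the names here are illustrative, not from the thread): the RHS creates an object you can attach properties to, and the LHS merely binds a reference to it.

```javascript
// A function is an ordinary object: created on the RHS, bound on the
// LHS, just like `var a = 3`.
var greet = function (name) {
  return "Hello, " + name;
};

// Because the function is an object, we can attach data to it.
greet.callCount = 0;

var sayHello = greet;   // a second reference to the same object
sayHello.callCount += 1;

console.log(greet("world"));  // "Hello, world"
console.log(greet.callCount); // 1 -- both names point at one object
```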
It is the assignment that makes it interesting for the people I teach JavaScript to. Consider the following; can you do this as easily in a less function-oriented language like Java?
var getData = function () {};
var myGetData = getData;
getData("foo");
myGetData("foo");
There is actual research on how to teach kids programming and computer science concepts. http://csunplugged.org/ doesn't use any software at all to teach concepts like binary numbers, sorting, etc.
Any language that people unfamiliar with programming can get immediate (hopefully positive!) feedback from in under 5 minutes is ideal. Some languages do not pass this test :-)
Before I even read the article, I was pretty sure that with a title like that there is going to be lots of debate from the CS people. They do raise some good points, but here's my 2 cents.
Besides a little bit of VB in high school, JavaScript was my first language. I mainly learned through two books: "The Good Parts", which was a nice overview, and "Object-Oriented JavaScript", a really underrated book that covered every little piece of the language, even the weird parts like block scoping and falsy values.
Having never learned anything about classes or inheritance, prototypal inheritance was kinda hard to grasp, but I eventually found it pretty amazing. I think a loosely typed language is much better to learn on too. That way you can learn the big pieces of the language then later get into the little things like typing. That's just my experience, but I'm really glad JS was my first language.
I want to take this further still. Most JavaScript programmers already use a subset of the language, and I believe there is quite a broad union of those subsets that should resonate with the majority of us. Excluding certain parts of the language will lead to more robust code that is easier to reason about (and more fun to write), I claim. My attempt to formalize it is called "restrict mode for JavaScript" http://restrictmode.org and I laid out my case here: http://blog.lassus.se/2011/03/case-for-restrict-mode.html . I would be curious to hear other thoughts about it.
I think Resig means to use JSLint throughout, which does something similar. Additionally, browsers already support "use strict" which is also a restricted mode. In short, plenty of people share your views on the matter (me included). The whole thesis of JavaScript: The Good Parts was basically that, and the book is one of the best on the language.
The major difference is that JSLint as well as "the good parts" focus on a subset that can be verified statically (before running the program) while restrict mode sanitizes operator semantics that must be verified dynamically (when running the program). I like the few changes strict mode did except for one thing that breaks compatibility in an unfortunate way (this is bound to the primitive, not the boxed number). But we should take all this much further.
More important than anything, I'd like us to start talking about sane subsets of JS more, and stop limiting ourselves to what one author considers "good" because, quite frankly, _that_ subset still contains a whole lot of awful stuff.
One thing I have not seen covered that is worth mentioning: JavaScript for the most part embraces an event-based development model. It is not unique to JavaScript, but it is certainly heavily reinforced by JavaScript and JavaScript developers, while in other languages it can be fairly underrepresented. That said, JavaScript is worth learning because it helps developers think of execution in terms of events. One can go one's whole life in another language and never deal with events; with JavaScript you will be hard pressed to get through intermediate tutorials without fairly good coverage of events and event syndication.
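The event-based style the comment describes can be sketched with a tiny hand-rolled emitter (the `Emitter` type and its methods are illustrative, not a real library): code registers interest in named events, and runs later when those events fire.

```javascript
// A minimal event emitter: listeners are stored per event name and
// invoked when that event is emitted.
function Emitter() {
  this.listeners = {};
}
Emitter.prototype.on = function (event, fn) {
  (this.listeners[event] = this.listeners[event] || []).push(fn);
};
Emitter.prototype.emit = function (event, data) {
  (this.listeners[event] || []).forEach(function (fn) { fn(data); });
};

var button = new Emitter();
var log = [];

// Execution is driven by events: nothing runs until emit() fires.
button.on("click", function (data) { log.push("clicked: " + data); });
button.emit("click", "save");

console.log(log[0]); // "clicked: save"
```

This is the same shape as the browser's `addEventListener` or Node's `EventEmitter`, which is why the style is hard to avoid once you pass beginner JavaScript tutorials.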
If you want to teach computer science to people then a good first language for them to learn is C, because it will help them think about and understand what the computer is actually doing.
That's an argument for learning C, but not for learning C as a first language. Studying assembler or chip design or even particle physics will also help you understand what the computer is 'actually doing', but it doesn't mean that's the best place to start learning how to program.
Ok, I see your point. But perhaps one can learn Computer Science much more efficiently after already knowing how to program? It doesn't seem likely that you'll be very open to learning data structure design and asymptotic complexity when you're still wrestling with for loop syntax, the difference between assignment and reference, what the hell recursion is, etc. You have to learn to walk before you can learn to run, and while jumping in the deep end may work for some, I'd guess most would find a more programmer-friendly introductory language a quicker path to comprehending basic CS concepts and would subsequently be in a much stronger position to tackle C.
To quote Dijkstra: "Computer science is no more about computers than astronomy is about telescopes."
While learning how computers work has its place in computer science, I think computation and manipulating information are better taught in a language where you can concentrate on those concepts instead of secondary issues such as memory management.
On a similar note, I've been in the process of creating a JavaScript library[0] that is strongly inspired by _why's Shoes. My intent for doing this was to create a DSL where one can whip up webapps very quickly. But I'm also now looking into essentially recreating Hackety Hack on the web, and maybe having a simple way for very young people to get a taste of programming. Sadly JS isn't as DSL friendly as Ruby.
I know that, for me, the way I learnt programming made me quite flexible and happy.
Working the entire scale from functional to OO languages gave me a really good perspective for anything I face.
I've learnt all the web stuff I use today completely on my own, but it rests on the foundation I learnt below. The "classic" academic programming languages I learnt came in this order:
Basic -> VB -> Pascal (high school/first year of Uni) -> C -> C++ -> Java
Code out there rarely sits entirely at the OO or the functional end, though projects are often heading towards one or the other.
Once I had traversed this, I was easily able to pick up .NET, whether any given .NET language was OO or functionally based (FoxPro, or whatever). JavaScript was interesting because, for me, it extended from Java.
I really do feel that programming needs to be learnt at the mathematical/computational level of functions, for clear process and analysis, and then one can learn the benefits of using functions in an OO world.
I'm a full-time javascript developer and entrepreneur myself, but I'm wondering why you don't consider Scheme instead, basing the course on the How to Design Programs v2 curriculum?
The people behind that book have spent a lot of time thinking about how you teach programming to people.
More importantly, they focus on problem decomposition and concepts that provide a great foundation for growing.
The reasons I can see for using Javascript first is because everyone has a runtime available at their fingertips (M. Haverbeke's approach of including the console was great.) and because people can immediately see the utility of the language to real world needs.
But does Javascript provide the best foundation for future concepts? Does it teach good habits both mental and in practice?
I parted ways with CS to major in biochem, thinking it wasn't as fascinating (my first course was in C++). This is definitely not a bad idea, and it is also already implemented. Stanford has CS101 entirely in JavaScript: http://www.stanford.edu/class/cs101/
That's true - if your program works like a conversation. That's not always the case ;) You can give all the needed context by updating the page when needed.
Well, I would like to teach kids assembly language first. That's how my generation did it. Second language should be C. That's how you know how your program actually runs on a computer. You know what memory is, and how it is used.
It's pretty easy to criticise, so instead I will discuss the steps I would take to teach a student CS principles, from my own experience. I find graphs to be fundamental, so I would start the student out with them.
1. Teach the student basic concepts in graph theory such as nodes, edges, walks, paths, cycles, and structures such as trees, and linked lists.
2. Introduce the student to syntax trees as a means of representing mathematical equations, and the representation of them used in Lisp: (+ (* a x x) (* b x) c).
3. Describe dataflow graphs to the student. For example, loops are cycles in the dataflow graph, and infinite loops occur when the cycle is endless.
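The syntax tree from step 2 can be sketched directly in code (a minimal illustration; the nested-array encoding and variable bindings are my own, not from the comment): each interior node is an operator with children, and evaluation is a recursive walk of the tree.

```javascript
// (+ (* a x x) (* b x) c) as nested arrays: [operator, child, child, ...]
var tree = ["+", ["*", "a", "x", "x"], ["*", "b", "x"], "c"];

// Assumed example bindings: a=1, b=2, c=3, x=4.
var env = { a: 1, b: 2, c: 3, x: 4 };
var ops = {
  "+": function (l, r) { return l + r; },
  "*": function (l, r) { return l * r; }
};

function evaluate(node) {
  if (typeof node === "number") return node;       // leaf: a constant
  if (typeof node === "string") return env[node];  // leaf: a variable
  // interior node: evaluate the children, then fold with the operator
  return node.slice(1).map(evaluate).reduce(ops[node[0]]);
}

console.log(evaluate(tree)); // 1*4*4 + 2*4 + 3 = 27
```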
"The policy when I went to school was to start everyone with LISP. That way, if they never got past CompSci 101, they wouldn’t be capable of impersonating a real programmer.
" -- Dave Edelhart
I learned programming by creating small 2D games: Turtle at 8, TI-82 at 16, Pascal at 18, C with Allegro at 22, Android Java at 30. They were "Snake" or "Tron" or "Bomberman" kinds of games.
It seems obvious to me that creating games can motivate children.
So, in my opinion, an ideal first language would be a sandboxed language that has a C-style syntax without the pointer stuff and can easily:
- ReadKeyboard()
- DrawPixel()
- DrawImage()
- PlaySound()
Bonus if there is no install.
Bonus if created games can be sent by email to friends.
I have commented (at length) on this earlier, so I don't have much to add. But a further thought did occur to me: JavaScript has "loose typing"; that is, one is not taught the difference between, say, an int and a double, or even a char, and the fact that there is no such thing (really) as a string - "strings" are just a lazy shorthand for a char array. Loose typing is not just incorrect - on some level, it is immoral, IMO.
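For what it's worth, JavaScript's actual behavior here can be checked directly: it has a single number type (an IEEE-754 double, so no int/double distinction to teach), and strings are a primitive type of their own rather than a char array (the char-array picture describes C, not JavaScript).

```javascript
// One number type covers both "ints" and "doubles".
console.log(typeof 1);    // "number"
console.log(typeof 1.5);  // "number"
console.log(1 === 1.0);   // true -- no int/double distinction

// Strings are their own primitive type, not arrays of chars.
console.log(typeof "hi"); // "string"
```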
Someone is teaching JavaScript to other people? This is nothing less than /The Downfall of Civilisation As We Know It!/.
Quick everyone, get your pitchforks!
I do not think this would be optimal for newcomers, even though I'm a die-hard fan of JavaScript. You could teach the 'Io' language instead of JavaScript; it has all the advantages you mention, and its small set of parser rules would be easier for newcomers.
JavaScript was my first language. I made web applications using Netscape Enterprise Server, which had server-side JavaScript, and I worked on a signed JavaScript kiosk application that ran in Netscape and had menus that would slide off the screen.
To introduce kids to programming, there is nothing better than Javascript. Instant gratification & no installations. It'll be an easy entry into harder concepts. But the way you enter does matter.
My intro CS class in college started with a week of Karel. It was incredibly fun for me, but I can't vouch for it as a first language, since I had already learned Basic and Pascal in high school. (Yep, I'm old). Karel variants are actually very challenging in a way--the syntax is dead simple, but its lack of constructs forces you to be fairly clever.
The Strict Equals Operator (===) does exactly what I expect: no funny coercion or guessing. The two values I'm evaluating must have a one-for-one likeness, no more, no less. How is that confusing?
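The predictability the comment describes shows up in a few quick comparisons, runnable in any JS console: `===` never coerces, so mixed-type operands simply compare unequal.

```javascript
console.log(0 == "0");   // true  -- `==` coerces the string to a number
console.log(0 === "0");  // false -- different types, no coercion
console.log("" == 0);    // true
console.log("" === 0);   // false

// The one widely cited useful `==` case: null and undefined match
// each other (and nothing else).
console.log(null == undefined);  // true
console.log(null === undefined); // false
```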
That's not really language dependent--that's how floating point numbers always work. And with good reason--there are actually a ton of different NaN bit patterns in the floating point standard. I think anything where the exponent is all 1 is an NaN (except maybe for infinity--I don't remember the exact details). The point is that two completely different patterns of bits can be NaN.
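A quick illustration of the point: because many bit patterns are NaN, IEEE-754 defines NaN as unequal to everything, including itself, and JavaScript inherits that behavior.

```javascript
var n = 0 / 0;  // one way to produce NaN

console.log(n === n);        // false -- NaN is not equal to itself
console.log(isNaN(n));       // true  -- so a predicate is needed instead
console.log(Math.sqrt(-1) === Math.sqrt(-1)); // false -- also NaN
```

This is why testing `x === x` is a classic (if cryptic) NaN check, and why `isNaN` exists at all.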
Can you give examples of it doing unexpected things? I get the feeling the unexpected part about it is your operands are not of the types you thought they were?
Here is my blog article I wrote on this travesty. (Available here: http://blogkinnetic.blogspot.com/2011/12/on-civic-decay-of-u...) I am not a pugilist for the sake of it, but when I see abominations like this, well, somebody has to take a stand for whatever real programmers are actually left out there. :-)
===================================================
When I saw the above article I could only be reminded of H.P. Lovecraft's description of the Massachusetts seaport town of Innsmouth, a great fishing community before the American Revolution, but which by the early 1900's had become a classic case of civic decay, with bootleg liquor becoming its primary occupation, and the general cultural or educational status descending to the level of the primitive. This is because the approach the article advocates for teaching computer science can only end in one place: the primitive, and by that I do not mean the C primitive variable types of char, float, double, and int, but I mean primitive as in stone-age. :-)
I learned programming a bit ass-backwards. I got a "teach yourself Java" kind of book, went through that, and generally learned things "on the fly" as it were, and, in so doing, in time, eventually was comfortable with the "Java world" of Java SDK (basic Java), XML, XSLT, and some DOS / UNIX scripting skills as well. Still to this day, Java is a language I feel very "at home" in, and would choose if I had to build something up really fast. It is my "GOTO" or "default" language if you will.
Later, though, I started to study C++ and C, first for a job that required me to write unit tests using CPPUnit (the C++ port of Java's JUnit), and just by "learning on the fly" I began to be able to understand and edit C++ code, though I was not then (or now) as proficient there as in Java. Still later, I studied C proper and read Kernighan and Ritchie's classic book, "The C Programming Language". I think it was then, and only then, that I really understood the fundamentals of programming, by which I mean the principles of it: not just memorizing syntax to get stuff done, but having a deep understanding of things like memory allocation, processes, stacks, etc.
The thrill of creating dynamic (re-sizable) arrays using C-style pointers is something that still gives me a bit of a high, because there are many situations in which resizable arrays are needed or desirable, and doing this via C-style pointers is the most efficient way of doing it.
I can say that while today I would still choose Java for a personal project of some size or complexity, just for expediency's sake, I love C the most, for it is the most efficient (fast, using less memory, etc.).
To make a comparison to poetry: Java is Ginsberg and C is Eliot. I love both, but they are different styles. Ginsberg is the Jazz musician of poetry - creating crazy yet haunting melodies by going "off the map", if you will, in terms of traditional styles. Eliot is the baroque musician of poetry - using the fewest notes to create the greatest effect; precision is the key word here, with no room for an off-note and every note having a purpose. Both have their place. I love Jazz, but in terms of aesthetic efficiency, baroque has something to be said for it.
Well, Java is the Ginsberg / Jazz of this analogy. It is easier / quicker to mess around and improvise and come up with a Jazz tune. It takes longer and it is more painstaking to come up with a baroque melody. Both are great and have their place. However, while I would use the quicker thing to come up with something on a deadline (Java / Jazz) there is a certain satisfaction to be had with taking longer and having to put more effort in order to produce precise, efficient, parsimonious code, and by parsimonious I mean not wasting any memory or CPU cycles, but having each bit of memory serve a purpose, just like each note of a baroque piece or each line of an Eliot poem has a precise purpose and taking one line out or one note out would ruin the whole thing.
So, while I still would use Java probably the most, I find a certain nobility in C, much as while I might probably listen to Jazz (or its descendants) the most, I find a certain nobility in baroque, and whereas I love and relate very much to Ginsberg, there is a certain appeal in Eliot's ability to say so much with so little that will always hold an attraction for me.
This is why the above article I came across, in which a computer science professor is talking about using, not even Java, but JavaScript for goodness sake, as the first language to teach students, is so tragic. Like I intimated before, if I had to do it all over again, I would have studied C before I even got into Java. That would have taught me correct principles and just a better "philosophy" of programming. As it was though, I was lucky. I worked with a math PhD who was a C++ whiz, a guy named Dr. Mark T. Lane, Chief Scientist at what is now mobi (mobicorp.com) who imparted to me the basic concepts of efficiency and attention to detail that I could never have gotten from the Java world, so, although it was later that I seriously began to study C, even early on I had some of those benefits, for which I will always count myself lucky and grateful.
But not everyone is going to luck out like me and get to work with such brilliant folks. I can only feel sorry for those aspiring computer scientists who go to a computer science program and get freaking JavaScript as the opening salvo in their introduction to the world of programming, and I can only feel contempt for those professors who would advocate such a fool's errand.
When I was a kid, I loved this old 1950's teen sci-fi novel called "The Forgotten Star" featuring a character named Digby Allen who travels to the 50's version of a moon base and a Mars base, and eventually lands on Eros, an asteroid. Turns out in the book the asteroid is a space ship and inside are people from another planet (from a "forgotten star") who long ago have forgotten the knowledge that propelled them into space in the first place. The interior of the ship has a simulated earth-like environment, with a sky, fields, etc., and these people live like primitive savages, in huts, etc. not knowing there is a world outside the interior of their space ship, not even knowing, for that matter, what a space ship even is. They have a cool contraption which can convert atoms into anything asked for, so they get their food from that. The contraption (as near as I can recall) would basically take atoms from space and convert them into the molecules for whatever the user requested, so I could say ask it for bread and it would give me bread. To these inhabitants it was like a magic thing, for they had lost the knowledge that went into producing that contraption to begin with. And I suppose the young space adventurer Digby Allen saves the day and brings them into the modern age, though now I forget just how that ended. But I will never forget the impression which the book had upon me - the concept, the very sad concept, of a people once-advanced who through laziness had allowed themselves to descend into ignorance and dependency upon technologies they could no longer understand.
I was reminded of this tragedy when I saw the above article. I had already read essays by computer science professors lamenting that C / C++ is no longer taught at some schools, Java being the preferred language. And now, it seems, we are descending yet another rung, with JavaScript becoming the preferred language. What is next? HTML? How about we just forget about teaching kids how to write code and teach them how to use point-and-click tools like, say, WordPress, which requires no code skills at all to use at a basic level.
If we go down this road enough, we will be in the "Forgotten Star" situation - able to use tools built in the past but not having the knowledge anymore to build those tools again. Because you can only create great Jazz if you also know how to play baroque. You can create mediocre jazz I am sure - hell, a chimpanzee, given enough time, also could. But you cannot play great jazz without the underlying principles that led to it. Neither can we expect great code to be developed without the understanding of the underlying principles which led to our current languages (like JavaScript) in the first place.
Oh, and one more thing, subverting a function into an "object" has its purposes in terms of being able to code things up faster, more easily understanding the architecture, etc., but here is a dirty secret that apparently contemporary self-styled computer science professors won't tell you: a mathematical function is not an "object", sweetheart. Because "objects" belong to "sets" which may describe computational functions, but are not the functions themselves.
Your comment is heavy with comparisons that don't really apply, while still not making much of a point. JavaScript isn't as low-level as other languages, but HTML is no more the next step than XML is the next rung down from Java.
If you hadn't linked the blog post in your comment I would have thought tl;dr, but since you did, I tried to wade through all the flowery language. Your whole post can be summed up as "People who learn JavaScript first will never learn CS fundamentals", which is unfair and untrue.
This was downvoted. Let me make the analogy clear.
Javascript is sent by a server to a rendering engine (your web browser), usually a part of displaying a page, though of course also as part of interacting with it.
Postscript is sent by a (print) server to a rendering engine (in a printer), as part of typesetting a page. It is a Turing-complete programming language, but that's no guarantee it won't make your eyes bleed.
The existence of things like jQuery "on top of" JavaScript is really no different from having Microsoft Word or Open Office / Libre Office there to 'print' your Postscript file.
Like it or not, neither javascript nor postscript are:
- Systems languages.
- Desktop application languages. (As Java can be).
- Low-level language suitable for writing drivers.
- Low-level languages suitable for writing network protocols.
- High-level languages suitable for prototyping very large, complex data structures and relationships
- High-level scripting languages suitable for abstracting away incredible sophistication and power and letting the user program in a very high level mode.
- Robust and scalable application languages, suitable for putting in a version control system and having a team of fifty iterate on.
They're niche, domain-specific languages, like a Perl regex. Would you suggest anyone's first language be the regex Perl uses? Obviously not.
Appropriate first languages are anything from:
* A fake language like Turtle Logo
* An almost-fake academic language like Pascal
* An "electrical engineer" nuts-and-bolts introduction: assembly.
* A low-level systems language, C, or C++
* A high-level scripting language: Ruby, Python, Perl
* An interpreted systems language: Java.
* A markup language: HTML and CSS
* No language. Configuration files for nginx and other things.
etc etc.
Very, very low on the list would be something like Javascript. This would be akin to editing Adobe Illustrator files in a hex editor. Sure you could end up with an image and an understanding of vector art, but, why in the name of God would you try to learn those concepts in that way?
Because none of those languages can be compiled and run instantly with any web browser, anywhere on the planet. You can write some JavaScript and run it right now, without even opening a text editor: javascript:alert('hello world') in your address bar. It's that simple. Alerting hello world is not much of a first program, but I'm just illustrating how dead simple it is to get someone to write a line of JavaScript and see results.
Most beginners these days aren't really interested in your 'robust and scalable applications'. That is not where beginners need to start. At least if they start with javascript, they can see some value in continuing to learn programming. If that brings them to 'robust and scalable applications' then they'll migrate away from javascript if need be. But to say that javascript is far down on the list of first languages is just an asinine reaction that shows you don't really comprehend the problem of getting people to learn to program.