This is a slightly older article, and Apple has since been working on MacRuby (http://www.macruby.org/trac/wiki/MacRuby), which is poised to replace RubyCocoa. MacRuby adds a special syntax to Ruby to better deal with keyword arguments.
But the important point seems to be that they have found a way to automatically map this syntax to Objective-C method names.
EDIT: Or, as the MacRuby docs put it:
"MacRuby has a modified version of the parser which detects calls or method definitions using one regular argument, and other arguments using the key/value pair syntax. It will reconstruct the Objective-C full selector if needed. The method name that will be used in the Ruby VM is the same as the Objective-C selector."
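The reconstruction the parser does is mechanical. Here's a rough sketch of the idea in plain Ruby — the function name and logic are illustrative only, not MacRuby internals:

```ruby
# Sketch of how a call like dict.setObject(value, forKey: key) could be
# mapped to the Objective-C selector "setObject:forKey:". This illustrates
# the idea; it is not MacRuby's actual parser.
def objc_selector(method_name, keyword_args)
  # The first piece comes from the Ruby method name, the rest from the
  # keyword labels, each terminated by a colon as in Objective-C.
  ([method_name] + keyword_args.keys.map(&:to_s)).map { |part| "#{part}:" }.join
end

objc_selector("setObject", forKey: "name")
# => "setObject:forKey:"
```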
I think this may be validation of Microsoft's approach with the .NET platform, since one of its core conclusions seems to be that Cocoa is too closely coupled to Objective-C to make alternative dynamic languages attractive, as they don't "fit together nicely". The other line of discussion it goes down is whether or not Objective-C is the best option, but that's always going to end up being a matter of taste.
I don't think he's passing judgment by saying that Cocoa is too closely coupled; I think he's just explaining why it works better with Objective-C than with Ruby. Among other interesting projects are Nu, a Lisp designed for Cocoa and Objective-C, and MacRuby, a fork of Ruby that uses Objective-C's runtime and garbage collector and adds language tweaks for named arguments.
I'm doing a bunch of work in Ruby/Cocoa right now and I am having no problems. Highly recommended. Thoughts:
(1)
ObjC syntax is clearly clunkier than Ruby syntax; you can't honestly compare the two languages without noting that Ruby has arrays and tables as first-class citizens, and ObjC has them only as libraries. You could stop right there; your ObjC code is littered with "dict.objectForKey" and "dict.setObjectForKey" calls, and your Ruby code isn't.
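The difference shows up in even trivial code. Here's Ruby's literal syntax next to the equivalent Cocoa calls (the Objective-C lines are comments for comparison — this is circa-2009 Objective-C, before the @{} literal syntax existed):

```ruby
# Ruby: hashes and arrays are first-class, with literal and operator syntax.
prefs = { "theme" => "dark" }
prefs["font"] = "Menlo"
name = prefs["theme"]

# Objective-C (pre-2012, no collection literals):
#   NSMutableDictionary *prefs = [NSMutableDictionary dictionary];
#   [prefs setObject:@"dark" forKey:@"theme"];
#   [prefs setObject:@"Menlo" forKey:@"font"];
#   NSString *name = [prefs objectForKey:@"theme"];
```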
The keyword-argument bridge syntax turned me off too, but the reality is that 80% of the pain is due to Cocoa's incredibly verbose method names.
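To make that concrete, here's a sketch of RubyCocoa's naming convention as I understand it — colons in the Objective-C selector become underscores in the Ruby method name (the helper below is my own illustration, not part of the bridge):

```ruby
# ObjC:      [comboBox insertItemWithObjectValue:item atIndex:0]
# RubyCocoa: comboBox.insertItemWithObjectValue_atIndex(item, 0)
# A sketch of that selector-to-method-name mapping:
def rubycocoa_name(selector)
  selector.chomp(":").tr(":", "_")  # drop trailing colon, swap the rest
end

rubycocoa_name("insertItemWithObjectValue:atIndex:")
# => "insertItemWithObjectValue_atIndex"
```

The verbosity is all Cocoa's; the bridge just transliterates it.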
(2)
You can turn garbage collection on in 10.5 ObjC projects, but it isn't recommended; you lose 10.4 compatibility and allegedly take a performance hit. The Ruby/Cocoa bridge handles retain/release counts automatically.
(3)
Ruby can call into C/C++ code too; there are multiple FFIs (I've done a lot of work with Ruby/DL) and if you're going to suck it up and write C code wrappers anyways, like you would be in ObjC, you can just write a Ruby extension.
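As a minimal FFI example — this uses Fiddle, the stdlib successor to the Ruby/DL mentioned above, to call libc's strlen from symbols already loaded into the process:

```ruby
require 'fiddle'

# Look up strlen among the symbols already visible to the process
# (libc is linked into the Ruby interpreter, so no dlopen path needed).
strlen = Fiddle::Function.new(
  Fiddle::Handle::DEFAULT['strlen'],
  [Fiddle::TYPE_VOIDP],  # const char *
  Fiddle::TYPE_LONG      # size_t (long is wide enough for a demo)
)

strlen.call("hello")  # => 5
```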
(4)
This argument about "OSX revolves around Cocoa and Cocoa revolves around ObjC" is just emotional. What's the thing you can do in ObjC that you can't do in a Ruby/Cocoa project? Ruby/Cocoa can access arbitrary ObjC objects, open them up and redefine methods, or subclass them. In the unlikely event that I find something that doesn't work in Ruby, then I'll go write the 20 lines of ObjC required to make it work, and call into it from Ruby.
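The mechanism for "open them up and redefine methods" is just Ruby's open classes; the bridge exposes Cocoa classes the same way. Demonstrated here on plain String so the snippet runs without the bridge (with RubyCocoa you'd reopen OSX::NSString instead):

```ruby
# Reopening an existing class and adding a method — the same technique
# works on bridged Cocoa classes in a Ruby/Cocoa project.
class String
  def shouted
    upcase + "!"
  end
end

"hello".shouted  # => "HELLO!"
```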
It's 2009. We shouldn't be writing application code directly in C anymore.
To be fair, it depends on the application. You might not have to write them, but there are still plenty of applications where memory and speed are issues, and C gives you some really good tools to deal with them, but you're right -- C is not a high level language and shouldn't be used as one. I would go a step further and say that Objective-C also is not a high level language. Most people here would agree, but lots of Cocoa developers see it as the top of their language stack.
When you get serious with it, the Ruby/Objective-C combination has a lot of problems, mainly because the two languages and cultures simply weren't designed to go together. I have some notes on that here http://programming.nu/rubycocoa-and-rubyobjc and if you have the patience, a talk online that I presented at Jonathan Rentzsch's C4[1] conference: http://www.viddler.com/explore/rentzsch/videos/13 MacRuby is a step in the right direction, but in my opinion it's better to use a glue language that's specifically designed for the task (and if you don't like mine, write your own :-) ).
I'd like to know what argument you would make to claim Objective-C isn't a "high level language". Despite the fuzziness of the term, I think Objective-C would meet almost any bar that didn't specifically require the language to be interpreted. The only thing I can imagine being an issue is the lack of support for closures, but a) it's coming to the language, and b) it's not radically more expressive than what Objective-C already has to offer.
It's relative, and it's a distinction that changes as we make progress. Objective-C is a higher-level language than C, but as you noted, it's not interpreted, and I don't think it would be very pleasant to use as an interpreted language. It is verbose and repetitive (for example, to add a property to an Objective-C class requires you to add three lines of code in three separate places in your source code). And apart from the C preprocessor, it doesn't give us tools for building layers of abstractions that would hide these problems -- except for its handy ability to be used to write the implementation of something that future programmers might continue to see as "high level".
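For comparison, the same property in Ruby is one line; the three Objective-C pieces are shown as comments (the class and property names are made up for illustration):

```ruby
# Objective-C (pre-auto-synthesis) needs three lines in three places:
#   // @interface (header):   NSString *name;           (ivar declaration)
#   // @interface (header):   @property (copy) NSString *name;
#   // @implementation:       @synthesize name;
# Ruby collapses all of that to one declaration:
class Person
  attr_accessor :name
end

p = Person.new
p.name = "Ada"
p.name  # => "Ada"
```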
I don't think being interpreted matters. It may make your immediate development cycle faster, but that says nothing about long term development. I imagine the Scala guys would argue heavily that being compiled is a huge advantage to the power of that language, and would also strongly disagree that it isn't a high level language.
Objective-C properties are poorly done in my opinion, though you can add one in two lines with a given set of assumptions. But I don't think 1 vs. 3 lines is a meaningful indicator of very much either way; it's a small constant penalty, not some lack of expression at the core of the language.
Being verbose isn't a bad thing, and Objective-C isn't verbose in any meaningfully different way than most other languages. Header files may be a slight inconvenience, but they aren't even strictly necessary. Cocoa, the framework most commonly associated with Objective-C, is verbose, and I think that's a good thing. 90% of what you do as a programmer is read code, not write it. I gladly pay the penalty of 2x the keystrokes in return for half the mental overhead in reading the code. In reality, the penalty is less than 2x, and the benefit may be more than 2x, but it's a matter of opinion.
You say you wouldn't like Obj-C as an interpreted language; I wonder if you're aware of Objective-J (http://cappuccino.org). It's essentially an interpreted version of Objective-C. It doesn't have header files, and it doesn't do properties the same way (it only does code generation for accessor methods, nothing else, but that generation is one extra keyword). It's interpreted, it has closures, and it generally has every feature JavaScript already has, plus dynamic message sending and classical inheritance courtesy of the Obj-J runtime (which is functionally equivalent to the Obj-C runtime). I'm biased, but I think it's a great language.
Hi Ross, little things add up, and they are easy to fix in higher level languages (with macros). I consider programming to be a process of building abstractions, and I think the best high-level abstractions are concise.
I like to quote Peter Norvig on the subject of interpreted languages. But in his quote, he uses "interactive." Maybe that's a better word:
"Which way would you rather learn to play the piano: the normal, interactive way, in which you hear each note as soon as you hit a key, or "batch" mode, in which you only hear the notes after you finish a whole song? Clearly, interactive mode makes learning easier for the piano, and also for programming. Insist on a language with an interactive mode and use it." - Peter Norvig, Teach Yourself Programming in Ten Years.
I've seen Objective-J and generally have good feelings about JavaScript. But it's not particularly strong on one other criterion that I like to apply, which is that it should be easy to mix code written at different language levels.
That's easy. High level languages have data types other than "things that fit inside registers", structures, and objects. Objective-C doesn't. High level languages do automatic memory management. Objective-C didn't until 10.5.
Lots of high level languages don't have closures. Most of them have first-class string types. NSMutableString is a library.
Obviously, you know this stuff better than I do (I've been following you for awhile), but for what it's worth, here are my initial reactions to your post:
(1) Ruby and ObjC have inconsistent syntax, which among other things is the reason we have that crappy snake-case encoding. This is true, but like I said, Cocoa's nomenclature is already so bad that wrapping everything in Ruby idiom was a win for me.
(2) Ruby/Cocoa has to bridge Ruby types (Hash) to ObjC types (NSMutableDictionary). It's true, but this is only really painful if you're using the bridge "bidirectionally"; the real win is to stay in Ruby, and use Cocoa like a library, not a second programming environment.
(3) Ruby and ObjC have overlapping libraries, like ActiveRecord and Core Data (or a million smaller examples). With the exception of Core Data --- and in retrospect I wish I'd stuck with ActiveRecord --- I just stick with the Ruby side and use Cocoa only for appkit. Just like above.
(4) Ruby and ObjC are storing redundant objects. The bridge makes this transparent. If I cared about performance, I wouldn't be in Ruby at all.
(5) Ruby and ObjC have totally different memory management schemes. Part of the point of writing Ruby is not to have ObjC's memory management headache, which the bridge takes care of, but I admit that I could find out 6 months from now that that was a bad assumption.
(6) You can't call Ruby from multiple ObjC threads, or really using ObjC threading at all. That would be a problem if I was writing substantial amounts of ObjC code, but I'm not. (I'm also an async bigot, so I avoid threads anyways).
(7) If a Ruby name clashes with an ObjC name, you have to preface the ObjC name with oc_ to call it from Ruby. This does not bother me at all; isn't it exactly what you'd expect when merging two independent environments?
I'm sure this stuff kills you if you have a complicated ObjC/Cocoa application and you want to extend it with Ruby, or if you want to do a Tcl-style integration with Ruby glue code and ObjC engine code. Also, I am totally biased, because in my consulting life I've spent years in Ruby/DL and Win32 doing my own manual bridging and runtime code generation; Ruby/Cocoa is a dream for me. =)
Nu is really cool. But none of my code is written in Nu.
Yes, I agree with you on a lot of that, especially if you can write your entire app in Ruby. That hasn't been true for me though. Either I have to go to C for performance or there's some library that I want to use that's easier to use directly from C.
You'll probably want to switch to MacRuby. By this summer there should be lots of sample code.
My biggest problem with Ruby for Cocoa work is that I enjoy the Smalltalk message passing syntax (keyword) more than the C++ syntax (object.name). The hacks to make it work right in Ruby do not look quite right.
Objective-C fills the place that C++ or C# fills in the .NET platform world. It would be interesting to see a language built for Cocoa that fills the VB role.
I had the same reaction, but ObjC's method names are so bad that I just wound up writing my own Ruby wrappers anyhow, so most of my code is written in Ruby idiom.
ObjC does not fill the role C# fills in .NET. ObjC is still C code. C# doesn't have pointers. It doesn't segfault. Every object has reflection. There aren't aliasing problems.
ObjC fills the place C++ fills in Win32; we still need a C# replacement.
For example, setting a dictionary entry in each:

Objective-C:
    [dict setObject:value forKey:key];
RubyCocoa:
    dict.setObject_forKey(value, key)
MacRuby:
    dict.setObject(value, forKey:key)