People fret over language syntax too much. In most sane languages including Objective-C you simply forget about the syntax after a few months. What’s much more important is the conceptual complexity: the number of language features and the way they fit together. A second important thing is the standard library. Objective-C is a pleasant language by both these metrics, since there are just a few language constructs above the C level, they fit together well without creating dark corners, and Cocoa is a very mature and well thought-out framework. The iOS Objective-C ecosystem does take some time to master, but so does every modern SDK, since the libraries are always huge. Programming is hard and if somebody is scared by minus signs in front of method names, evil spirits will suck his soul out by the time he gets to thread synchronization.
It's good to remember Jeff Raskin's theorem "intuitive = familiar". If you come from the C++ world, Objective-C is going to look weird. If you come from the Smalltalk world, not all that weird.
Also, syntax is the worst place to start learning Objective-C imho 'cos there can be a lot of it to learn if you come from a non-Smalltalk world. The best place I've found is to dive straight into the core runtime function objc_msgSend. Once you grok that, and see that you could write down the core runtime in a handful of C functions in an hour or so, everything else -- classes, categories, protocols, delayed invocation, remote invocation, proxies, posing, key-value coding -- finds a "natural" slot in your brain. As a bonus, as you get to the more "advanced" features unique to the system (relative to, say, C++) such as key-value coding, you see how the dynamism of the language plays to support all of that. (Disclaimer: Yes, this is how I got it, but I don't know whether it is a generally good way to approach it, though I'd recommend it. Maybe I should write a tutorial on it.)
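To make that concrete, here's a rough sketch in plain C of the kind of lookup objc_msgSend performs (toy names throughout, nothing like the real runtime's API surface): walk the receiver's class, then its superclasses, looking for a function pointer registered under the selector, and call it.

#include <stdio.h>
#include <string.h>

/* A toy "runtime": invented types and names, just to show the shape of the idea. */
typedef struct ToyMethod { const char *selector; void (*imp)(void *self); } ToyMethod;
typedef struct ToyClass  { struct ToyClass *superclass; ToyMethod *methods; int count; } ToyClass;
typedef struct ToyObject { ToyClass *isa; } ToyObject;

/* The heart of it: walk the class chain, find the implementation, call it. */
static void toy_msgSend(ToyObject *receiver, const char *selector) {
    for (ToyClass *cls = receiver->isa; cls; cls = cls->superclass) {
        for (int i = 0; i < cls->count; i++) {
            if (strcmp(cls->methods[i].selector, selector) == 0) {
                cls->methods[i].imp(receiver);
                return;
            }
        }
    }
    printf("%s? doesNotRecognizeSelector territory\n", selector);
}

static void greet(void *self) { (void)self; printf("hello from a toy object\n"); }

int main(void) {
    ToyMethod methods[] = { { "greet", greet } };
    ToyClass base = { NULL, methods, 1 };
    ToyObject obj = { &base };
    toy_msgSend(&obj, "greet");   /* dispatches to greet() */
    toy_msgSend(&obj, "vanish");  /* falls off the end of the lookup */
    return 0;
}

Categories, protocols, proxies and the rest are mostly variations on who owns the method table and what happens when the lookup misses.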
If you start by looking at the syntax and going "ugh", you'll be missing all the neat ideas in the system ... including, imo, the older memory management system that many complain about. I've, for example, used the "auto release pool" idea in C++ to relieve colleagues of the need to think about ownership and lifetime in relatively isolated corners of a system, while considerably simplifying API design and staying performant. If you're looking for "predictable performant garbage collection", this is a reasonable design candidate.
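For anyone who hasn't met it, this is the mechanism in its native Objective-C form (a minimal sketch; urls, parser and the process: method are made up): everything autoreleased inside the pool is released at a known point, when the pool drains, rather than at some arbitrary collector pause.

for (NSURL *url in urls) {
    @autoreleasepool {
        NSData *data = [NSData dataWithContentsOfURL:url]; // autoreleased temporary
        [parser process:data];                             // hypothetical consumer
    } // everything autoreleased during this iteration is released right here
}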
I couldn't agree more.
Everything is "hard" on the first couple of times, but the only recommendation I say to people is to stop worrying about the language, and go create stuff, get into that mindset of creating something, even if it's just a simple app. that will allow us to research and ask things around, and that's when you learn and progress. Nobody achieves anything by bitching around how hard this and that is.
You're right - I should have called it "Why Objective-C is Hard to Learn"; all of the issues I enumerate are surmountable with enough experience and experimentation.
I couldn't disagree more. Objective-C is a product of the 1980s, when it kind of made sense that your program would crash if you did this:
[NSArray arrayWithObjects:@"Hello", @"World"];
Of course it crashes! You have to add a nil sentinel value to the end of your list of objects, silly. And of course it compiles with only a warning, just like when you leave out the @ that makes the difference between a C string literal and an NSString. Those errors will crash your program as soon as the code is run, but they compile as valid Objective-C. Things like that are just a nuisance if you tell the compiler to treat warnings as errors, though. If you really want to know why Objective-C is hard, why not trust the authorities on Objective-C, namely Apple?
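A quick sketch of both pitfalls, for the curious (all of this compiles, at most with warnings):

// Missing nil sentinel: reads past the arguments at runtime and usually crashes.
NSArray *broken = [NSArray arrayWithObjects:@"Hello", @"World"];

// Correct: the variadic list must be terminated with nil.
NSArray *fine = [NSArray arrayWithObjects:@"Hello", @"World", nil];

// Missing @: a C string literal is not an NSString, so this warns at compile
// time and crashes as soon as oops is sent a message.
NSString *oops = "World";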
Where Apple tells you you will screw this up is memory management. To start with, there are four different memory management schemes: C memory management for C objects, manual reference counting, automatic reference counting, and garbage collection. You get to choose two, of which C will be one. Objective-C originated as enhancements on top of C, and Objective-C programmers writing Cocoa apps still have to rely on C APIs for some functionality, so you'd think by now they would have provided a better way of managing, say, the arrays of structs you sometimes have to pass to the low-level drawing functions. Nope; everyone still uses malloc and free. Failure to make malloc and free obsolete is hard to forgive.
From the other three memory management methods, pick one. (Different OS versions support different ones.) Automatic reference counting (ARC) is the latest and apparently the new standard, though manual reference counting is still supported, and GC is still supported on Mac OS. Reference counting requires a little bit more thinking than garbage collection. For example, since the Cocoa APIs were written with reference counting in mind, some objects, notably UI delegates, are held as weak references to avoid reference cycles. You basically have to manage those objects manually: create a strong reference to keep the object alive and then delete the strong reference when you decide it's okay for the object to be collected. (I'm not sure, but I think this is true even if you turn on GC, because delegate references remain weak.)
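A minimal ARC-era sketch of that dance (class names are invented; the point is that UIScrollView does not retain its delegate, so something else has to):

#import <UIKit/UIKit.h>

@interface ScrollHandler : NSObject <UIScrollViewDelegate>
@end
@implementation ScrollHandler
@end

@interface PhotoScreen : NSObject
@property (nonatomic, strong) UIScrollView  *scrollView;
@property (nonatomic, strong) ScrollHandler *handler;   // our own strong reference
@end

@implementation PhotoScreen
- (void)setUp {
    self.handler = [[ScrollHandler alloc] init];
    self.scrollView.delegate = self.handler;   // held without retaining on the scroll view's side
}
- (void)tearDown {
    self.scrollView.delegate = nil;
    self.handler = nil;                        // only now can the delegate be deallocated
}
@end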
All reference-counting systems have that problem, but at least they have the benefit of determinism, right? When you pay that much attention to object lifetimes, you get to piggyback other resource management on top of memory management and kill two birds with one stone. (In C++ it's called RAII, and it's the saving grace of C++ that almost completely makes up for C++'s other warts.) However, according to Apple, this technique should not be used with Objective-C:
You should typically not manage scarce resources such as file descriptors, network connections, and buffers or caches in a dealloc method. In particular, you should not design classes so that dealloc will be invoked when you think it will be invoked.
Why not? Application tear-down is one issue, but that doesn't matter for resources that are recovered by the OS when a process terminates. "Bugs" are given as a reason, but I think they mean bugs in application code, not in the Objective-C runtime. The main reason, then, is that if your Objective-C programs leaked file descriptors and network connections as often as they leaked memory, the world would be in a sorry state:
Memory leaks are bugs that should be fixed, but....
Remember the "I don't mean to be a ___, but..." discussion?
Memory leaks are bugs that should be fixed, but they are generally not immediately fatal. If scarce resources are not released when you expect them to be released, however, you may run into more serious problems.
In other words, if you really need something to work reliably, you had better use a different mechanism, because you don't want your management of other resources to be as unreliable as your management of memory. That's a pretty strong statement that you will screw up memory management whatever your best efforts.
So apparently Objective-C memory management is hard. That's what Apple thinks, anyway.
> You should typically not manage scarce resources such as file descriptors, network connections, and buffers or caches in a dealloc method. In particular, you should not design classes so that dealloc will be invoked when you think it will be invoked.
Do they propose an alternative mechanism for handling resources other than reference counting? As you state, RAII breathes life into C++, given how well it works for all types of resources.
It sounds like the above statement is possibly being made in anticipation of the introduction of garbage collection, which would make piggybacking resource destruction non-deterministic. Whereas, it could also be interpreted as a very strong reason to favor manual (maybe automatic) reference counting, and eschew GC entirely. I don't know objective-c very well, but I wonder if the use of GC has generated these arguments against it from within the OSX developer community.
That's an interesting hypothesis, but I can't find any source to confirm or contradict it offhand. The part I took the quotes from only mentions that the order of dealloc'ing objects in a collectable object tree is undefined, as is the thread on which dealloc is called. Both of those are easy to keep in mind while implementing dealloc, though. If a resource has to be freed from a particular thread, then dealloc can schedule it to be released on the right thread using GCD. The non-deterministic order of dealloc'ing would rarely be a problem for releasing resources. After all, if a resource is only used via a particular object, and that object is dealloc'ed, then clearly it's okay to release that resource! Perhaps there are complicated cases where resources have to be released in a particular order, but that's no reason to give up RAII for simple cases.
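Something like this, say (a sketch assuming a plain POSIX file-descriptor ivar, <unistd.h>, and manual reference counting, hence the [super dealloc]):

- (void)dealloc {
    // Copy the ivar into a local first; referring to the ivar inside the block
    // would capture self, which is exactly what you don't want in dealloc.
    int fd = _fileDescriptor;
    dispatch_async(dispatch_get_main_queue(), ^{
        close(fd);   // released on the thread that is allowed to touch it
    });
    [super dealloc];
}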
Apparently it's a feature in Xcode 4.4 in the beta release of the Mountain Lion SDK. There's no developer preview for Lion, though. Fingers crossed that Xcode 4.4 will be released for Lion and not just for Mountain Lion....
Best I can tell, all of the problems you cite are fixed by MacRuby. It shows how surprisingly well Ruby semantics map onto the message passing semantics of Objective-C. They also found ways to wrap up the C stuff without making you manage your own memory.
Not sure why Apple hasn't been more aggressive in pushing it for Cocoa development. Maybe because they don't trust it to perform well, yet, on iOS devices and don't want to promote it until it can be used anywhere as a replacement for Objective C.
What in the parent post do you disagree with? It's probably obvious to you, but it's not obvious to me.
I understood his point to be mostly that syntax melts away after time, and you will just see the concepts. It seems that you are objecting to the notion that "Programming in Objective-C is easy," but I don't see that in his post.
" And of course it compiles with only a warning, just like when you leave out the @ that makes the difference between a C string literal and an NSString."
How is the compiler supposed to know you meant NSString or C-String?
I've put together a few things with Objective-C over the years, dating from OS X 10.1 (yuck, PB sucked then) to iOS. Most ended up being ported to Java or C#.
The syntax IS absolutely horrible if you ask me as it results in crazily verbose ways of expressing stuff. Everything is "too meta" and there are very few first class parts of the language. It still FEELS like it's hacked together with C macros (which was what it originally was).
Add to that the reference counting implementation (when GC is not enabled which you can't do on iOS) and it's just painful. Also the lack of any decent threading abstraction - ick.
I think there is a lot of hype around it. It's not where we should be in 2012. Android does better with bastardised Java if you ask me.
Also the lack of any decent threading abstraction - ick.
What? Grand Central Dispatch is a lot easier to work with than most explicit threading mechanisms and with the new block support is a lot less verbose than the typical Java thread-based approach.
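For comparison, the usual GCD-plus-blocks version of "do the slow work in the background, then touch the UI on the main thread" looks roughly like this (url and updateUIWithData: are made-up placeholders):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:url];   // slow work off the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUIWithData:data];                     // back on the main thread for UI
    });
});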
It's just a fancy thread pool/task queue with a fugly syntax extension not some magic unicorn that poops rainbows.
Java/C# don't need a language extension - the functionality exists outside the semantic boundary of the language. Another kludge in Objective-C.
C# (ThreadPool/async framework/Windows workflow) and Java (ExecutorService/lots of 3rd party frameworks) have had them for years with well-known communication, thread safe data structures, concurrency and locking semantics.
Most of the verbose mess you see in Java threads is because the person writing it doesn't know much.
Your "fugly syntax extension" is your old friend the closure. The syntax is as good as its going to get in an Algol derivative. I'll take it over plain java any day of the week. If Kotlin takes off on Android then we'll have a real contest.
I spend half my day in iOS development and the other half on a Java web stack. I love the RESULT of Obj-C + Cocoa Touch: you can achieve an amazing user experience. But I'm reaching the point of thinking: it's 2012, I'm an application developer, why am I spending half my time debugging memory leaks and concurrency issues? Java isn't much better either: why all this boilerplate, and still concurrency nightmares? I've done a handful of side projects with Django and that's better, but I still think if I showed my teenage self what I'm programming in, he'd wonder if there was ever any real progress.
I guess what I'm saying is after all these years I want to work on a higher level, as a result I've started to play with Clojure and functional languages. Whether I'm idealizing functional/clojure life, I'll soon find out, but the appeal is very high to spend my time dealing with problem complexity, not language/framework ones.
A word of caution with ARC -- you still have to release certain things manually, for example CGImageRefs with CFRelease if you're doing any sort of image manipulation.
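A small sketch of what that looks like in practice (original and cropRect are assumed to exist): ARC tracks the Objective-C objects, but the Core Graphics object on the C side still needs an explicit release.

CGImageRef cropped = CGImageCreateWithImageInRect(original.CGImage, cropRect);
UIImage *thumbnail = [UIImage imageWithCGImage:cropped];
CGImageRelease(cropped);   // ARC won't do this for you; CFRelease(cropped) also works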
ARC to me is scary... I got over the huge learning curve, and the nuances of autoreleasing, and retaining, and it seems like now I gotta unlearn all of that??? And not to mention some third party libraries/source code don't support ARC. To me, I'm gonna hold off using ARC as long as possible.
Well, you don't have to "unlearn" it. It's actually a good thing you went through the "pain" of learning it pre-ARC because if you understand how reference counting actually works, you will be able to make better and more informed decisions on the management of your objects in 5.0+ with ARC (e.g. when to use strong vs. weak properties). There's nothing magical about it, and Ray Wanderlich has a fantastic ARC tutorial that helped me greatly: http://www.raywenderlich.com/5677/beginning-arc-in-ios-5-par...
Holding off on ARC is only advantageous if you need to support iOS versions before 5.0, but ARC is the future of iOS.
Indeed, as Aaron above mentioned, ARC doesn't absolve the programmer of the responsibility of proper memory management, it's just less overhead to have to worry about.
To me, this is almost like asking, "Why, since it is 2012, is the Halting Problem, such a problem?" I don't know what 2012 has to do with functional programming languages, though ... seems like there was this language a long, long time ago, in a far away land ...
What made Clojure stick out for me was its easy access to the vast java libs, and its philosophy on concurrency. Both (I could be very wrong) seemed novel in the functional world.
But I'm reaching the point thinking: it's 2012, I'm an application developer, why am I spending half my time debugging memory leaks and concurrency issues?
Because Objective-C targets everyday desktop apps and mobile apps. In that space, manual memory management still wins the day.
For example, in Windows and Linux DESKTOP those kind of apps are ALSO made in C++ or C.
Java and C# are for the web server and the CORPORATE desktop (in-house apps).
Not many major end user apps outside the enterprise are made with either. Not any famous, widely used ones, anyway. Azureus, maybe, and a few dozen more.
Avoiding GC doesn't equate to "manual memory management." Objective-C uses reference counting, which is only manual in Objective-C for historical reasons, and they're trying to overcome that with ARC. I'm pretty sure most popular scripting languages use reference counting instead of garbage collection. C++ with pervasive use of shared pointers shouldn't be characterized as "manual" either.
Also, as of a few years ago, the only performance-related reason why the JVM wasn't a popular language for desktop GUI apps was startup time. (In the mobile space, it might be true that Java isn't fast enough on current hardware. My experience with Android hasn't been very inspiring, for sure.)
Keep in mind that desktop GUI frameworks take a HUGE amount of time and labor to create, and almost all of the excitement has been in web apps for the last decade. The status quo in GUI frameworks is heavily colored by history. All of the major GUI application frameworks are ancient and reflect the linguistic realities of the year 2000 much more than they reflect current technology.
Also, as of a few years ago, the only performance-related reason why the JVM wasn't a popular language for desktop GUI apps was startup time.
I don't think so. Besides startup time, Swing was always slow --an over-engineered mess. For some Java people it was always "fast enough in the latest version" (like for some Linux people it was always "the year Linux wins over the Desktop"), but even the best Swing UI had perceptible lags over a bog standard native. Heck, even SWT that's half-native has huge GC related lags in Eclipse.
Swing also had the uncanny valley effect, trying to mimic native UIs. And even when they tried to bypass the issue with custom l&f like Alloy et al, they couldn't, because the uncanny valley is mostly down to how the controls BEHAVE and not to how they're styled (that's why in, say, OS X, you can use apps styled like Aqua and others styled like Metal at the same time and you don't get the "uncanny valley" effect).
If we judge Java by Eclipse, can we judge C by iTunes? :-) They both tend to become unresponsive at odd times, but it's caused by clumsy background processing, not language performance.
The Eclipse framework itself is plenty fast, and UIs based on Eclipse RCP can be quite snappy. (Except for that damned startup time.) Swing's a mess, but if you're looking for the technical limitations of a language platform, it's the best performers that are relevant, not the worst performers. Otherwise, iTunes is evidence that even C is just too slow.
That's true, but I'd guess that's true of much desktop software. People don't use Microsoft Word because of its efficient C++ code; they use it because it's semi-standard, has lots of features, and overall is good enough. I would bet giant piles of legacy code are a bigger reason for not moving to C# than anything language-specific is.
Don't look to Minecraft for an example of well written code, there are open source alternatives (Minetest in C++ springs to mind) that run rings around it. And Notch himself is well known for his inefficient magic-number and circular-reference ridden Java code. Although I can't attest to Jeb (who is now the lead dev)'s coding skill.
"Then they are converted from Java Virtual Machine-compatible .class files to Dalvik-compatible .dex (Dalvik Executable) files before installation on a device. The compact Dalvik Executable format is designed to be suitable for systems that are constrained in terms of memory and processor speed."
I'm kind of fond of WPF. Lots of aspects of it (data binding, styles, templates, the layout system etc) seem very elegant to me. It is not without issues. How do you justify the 'total pile of shit' call?
Doesn't scale up as well as win32/GDI. Requires much faster kit with graphics hardware acceleration to run (we had to bin about 200 Matrox Parhelia cards and replace them with hefty NVidia cards to make use of hardware acceleration where GDI was fine on Matrox). Can't ILMerge thanks to XAML loader problems. Editor sucks. 5 million casts required in your code. BUGS! Hard to do trivial things. Virtually impossible to produce a scalable composite application. Grinds a 16-core Xeon to a halt inside VS2010. Learning curve from hell (this hurts on a 20-man team).
It's not good progress - it's just a deeper abstraction.
I can't argue with most of those. The designer sucks, and I blame that for lots of the VS slowness. I never open XAML files in the designer. Not sure about 'scaling up' relative to GDI - I guess if you're a gun GDI programmer you can probably make it do pretty much anything, but I felt more productive doing graphics stuff in WPF - seemed to let you do quite a few cool things pretty easily. ILMerge thing is a pain, but not a major one (unless you've gone out and built thousands of assemblies and are getting slammed by load times, in which case you kind of painted yourself into a corner there). When you say "Virtually impossible to produce a scalable composite application." do you mean scaling development, or run-time scalability?
I can't help but feel if MS had paid more attention to perf (maybe re-platform it on top of Direct2D in the .NET 4 timeframe, instead of going all in on WinRT) things would be a lot better.
WRT scalability - it's scaling UI components over time. I build large complicated metadata driven applications and it's quite hard to compose an application on the fly.
Agree with performance. I hope WinRT is better. I have little faith based on my experience with Win8 so far but it's not RTM so I shouldn't comment on it yet.
Unless you've played with other languages that support these features, like Ruby or Lisp, then this feels really weird. Don't worry! Lots of great things feel really weird the first time you try them, like broccoli or sexual intercourse.
I find it really weird that there's no mention of Smalltalk, which is exactly where the weird syntax comes from. It's also where the notion of IntentionRevealingNames comes from, which the author wonders about.
Pretty great article. Though I wish someone could point to the paper or whatever that explains the philosophy of Objective-C having insanely verbose method and parameter names. Like, readable is one thing, but they always end up like stringFromAppendingThingToThingToNumberYieldingThingThx and it becomes unimaginable to use Objective-C without XCode to autocomplete the other 40 characters.
It's because Objective-C methods are named according to the nouns that they return rather than the verbs they perform. For example, "[input stringByAppendingThing:thing]" rather than "input.append(thing)".
Methods are actions, not objects, so the most concise description for a method is usually a verb. Describing it as a noun instead requires adding prepositions and turning verbs to the gerund '-ing' form.
I realized this because I write both Ruby and Objective-C, and sometimes write basically the same thing in idiomatic forms of both languages. In idiomatic Ruby method-chaining, your code is a series of verbs with the relations between the actions defined by the '.' or the '()' signs, rather than by words such as 'to', 'from', 'with', or 'by' present in the name of the method.
Eh, I think that's a little overbroad. The way it works is, methods whose purpose is returning something are named for what they return, while methods whose purpose is creating a side effect are named for what they do. So NSString has `stringByAppendingString:` because you're asking for a new string, while NSMutableString has `appendString:`, which is essentially the same as it would be in Ruby except with an allowance for Objective-C's static type system.
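Concretely, the convention looks like this:

NSString *greeting = @"Hello";
NSString *longer   = [greeting stringByAppendingString:@", world"];   // asks for a new string

NSMutableString *buffer = [NSMutableString stringWithString:@"Hello"];
[buffer appendString:@", world"];                                      // mutates the receiver, returns nothing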
What really creates the impression that Objective-C speaks in terms of nouns is that Cocoa tends to promote immutable objects more than Ruby does (e.g. the only way to get an immutable string or array in Ruby is to freeze a mutable one, while you'll almost never get a mutable array in Cocoa unless you create one yourself), so you probably do spend more time asking your objects for other objects than you do in Ruby.
Although the verbosity can get overwhelming, I actually like this about Objective-C. In terser dynamic languages, I'm constantly having to confirm (either mentally or in the docs) which methods mutate and which return a new object. Cocoa's naming conventions mean I pretty much never have to do that.
The guideline for Ruby is to add a bang (!) to any method that mutates the object rather than returning a new one. That's not strictly followed, but for most of the commonly used standard library bits, you can be fairly certain that that is the case.
In practice, even in the standard library, this isn't followed often enough to rely on. Here's a (possibly incomplete, since I'm writing this on the fly) list of bangless mutating methods just from Array:
As an even more extreme example, IO contains precisely one bang-method, despite the fact that probably 75% of IO's instance methods are destructive.
The general rule seems to be that if there's a mutating and non-mutating version of the same method, the mutating one will get a bang, but when there's a mutating method with no counterpart, it might get a bang but probably won't.
The guideline is: if your method does something that the programmer should think twice about or shouldn't use without proper knowledge (e.g. hasn't read the docs), use !. An incomplete list of usages:
- There is a safer alternative (e.g. mutating vs. non-mutating or skipped validation)
- It should only be called once in a process (e.g. Padrino.start!)
- It is non-reversible (many statemachine libraries use action! as the way to invoke state transitions, which might not be reversible)
This doesn't mean that every method needs to be suffixed by ! if it does something destructive. `delete` in the context of an ORM is standard, so it doesn't have a bang. The whole point of `pop` is to manipulate the receiver: no point in warning about it. IO is always destructive, so ! doesn't make sense either.
Very well said. I think this also promotes a mindset of making methods that either mutate state or build and return an object. It's often very difficult to follow code that has lots of methods that do both.
That's a hilarious post, but it sounds like Steve would love PHP. He could use nouns (objects) when he wanted yet add global functions (verbs) with no attachment to objects. I personally think that's a great way to make a terrible mess, but perhaps he knows something that I don't.
Selectors always describe exactly what the method does and what it needs. For example, stringByAppendingFormat: says "You will get a new string, by appending a format string to the receiver." There is also the mutable counterpart, appendFormat:, which is shorter because it doesn't return a new string, instead appending directly to the receiver.
Verbose selectors make Objective-C self-documenting at the cost of extra typing. But autocomplete solves this problem because you only ever have to type 2-5 characters to insert the method you want (once you are experienced enough to predict what autocomplete will spit out).
Objective-C naming conventions form predictable patterns. Inexperienced programmers gripe because they have yet to figure out these patterns; good programmers love Objective-C because they understand these patterns and can therefore predict the name of a method and its arguments; and great programmers write their own classes that use these patterns.
Seconded. The first time you write a fully working method using a library you've never used in one shot without looking anything up will make you love ObjC.
I've done exactly as you say, and I still find Objective-C Perlesque in its obnoxiousness. I also manage to achieve that not-terribly-hard feat on a pretty regular basis with Java: intellisense (hi, IntelliJ!) and IDE-provided Javadocs do the same thing, too, and someone who isn't going to write good Javadocs isn't going to name things well.
The idea is that code is written once but read many times. Objective-C's verbose naming makes you work a little more when writing it (though a good programmer's editor or IDE like Xcode or AppCode greatly eases this), but it pays off each time you need to read the code, especially code you're not familiar with.
With its C-based syntax, Objective-C isn't as clean as Python or Ruby, but due to the explicit naming conventions, I think it's more readable than Java or JavaScript.
I actually find it much harder to read as a result of its verbosity. For example, just yesterday I ran into a bug with these 2 lines:
if ([[data objectForKey:@"released"] isKindOfClass:[NSNull class]]) {
if ([[data objectForKey:@"posterUrl"] isKindOfClass:[NSString class]]) {
They weren't right next to each other, and at a glance, I misread to assume they were doing the same thing. There's too much shit in the way of the actual differences (NSNull vs. NSString in this case) that I have a bad tendency to gloss over the details. Coming from ruby, the closest syntactical equivalent:
if (data['released'].class == NilClass) {
if (data['posterUrl'].class == String) {
Is so much clearer to me when glancing through code. That doesn't even touch on how you'd actually write that sort of thing (data['released'].nil?), which is infinitely more concise than either example. I know this is a bit of a contrived example, and I certainly could find better ones. I just find the 120-character-long method invocations consistently blur the details for me, and this just happens to be the freshest instance.
To be fair, I've only been doing iOS stuff for about a month. Does this trend reverse after you've been writing obj-c for a while?
Is it really easier than `data['poster_url'].is_a? String`? I wouldn't call the Objective-C example unreadable, but it's certainly a lot more line noise for not a lot more meaning.
You do get used to it. Because it's a superset of C, Objective-C has all the syntactic noise of C and some of its own. Ruby is certainly more compact, though with its Perl influence, you can write very cryptic Ruby code. You do pay a price in Objective-C in order to have C directly and immediately available.
FWIW, extracting nested expressions into local variables helps a lot with nested method calls.
Key-value accessors aren't exactly the shining moment of verbose method names (It's even been leaked that 10.8 has dict[key] sugar), but for methods with more parameters, it's much nicer than positional. e.g.:
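Something like this reads tolerably well even if you've never seen the API before (reload: is a made-up selector on self):

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(reload:)
                                             name:UIApplicationDidBecomeActiveNotification
                                           object:nil];

With a positional call you'd be staring at addObserver(self, reload, UIApplicationDidBecomeActiveNotification, nil) and guessing which argument is which.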
I've found it pays off at write time too. Having to say something about each parameter when naming a method really helps me to stop and think about what I'm doing every time I add to an object's interface. I'd like to think it ultimately leads to less bloat.
I don't agree with your reasoning, but part of that is my operating definition of "verbose" is "more words than needed." That is, if you're being verbose, then by definition you're using too many words, which is a stance I find difficult to defend.
Personally, once there are more than two humps in the camel case, my eyes have trouble scanning.
True, "verbose" isn't really the correct term, "explicit" better describes Objective-C. Here's an example where I think Objective-C's explicitness is helpful. The Windows API CreateWindow() function call in C:
Admittedly not equivalent examples, but I couldn't quickly find a Cocoa method call with eleven parameters. My argument is that if you're not intimately familiar with these two calls, CreateWindow() is pretty cryptic. You only get the most general sense of what it's doing without consulting a reference. Objective-C's strange method naming scheme (taken from Smalltalk) makes complex method calls much easier to understand in situ.
> I couldn't quickly find a Cocoa method call with eleven parameters
Allow me to introduce you to NSBitmapImageRep, which holds the dubious honor of being initialized by the longest public selector in all of Cocoa.
Here's a contrived example:
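(From memory -- the argument values are arbitrary, the eleven-part selector is the point:)

NSBitmapImageRep *rep =
    [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                             pixelsWide:640
                                             pixelsHigh:480
                                          bitsPerSample:8
                                        samplesPerPixel:4
                                               hasAlpha:YES
                                               isPlanar:NO
                                         colorSpaceName:NSCalibratedRGBColorSpace
                                           bitmapFormat:0
                                            bytesPerRow:0
                                           bitsPerPixel:0];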
I agree the second example is more understandable, but I think most of that benefit comes from named parameters (a language feature), not the naming convention itself. That is, were the naming up to me, I would prefer:
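Something along these lines (one keyword per line):

NSString *fixed = [text replace:@"cat"
                           with:@"dog"
                        options:CaseInsensitiveSearch
                          range:MakeRange(0, [text length])];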
Of course, I'm making assumptions about what those parameters mean. But I find this much more clear, assuming the intentions are what I think they are.
In Objective-C, those aren't named parameters, like you might find in Python, or faked in Ruby or Groovy.
They are called 'Keyword Messages'. In your example above, the method signature is replace:with:options:range:, compared to say a C++ style replaceWithOptionsRange.
It's simply a way to interleave arguments in the message send itself, inherited from Smalltalk.
When you use the word "replace", to most people, that signals that you would be changing the current object in place, not returning a new object based on the given one.
On the second line, you didn't mention at all what is passed in.
For the last two, removing the NS prefix would work if Objective-C had some kind of namespacing support. Currently it doesn't, so the NS prefix is kinda needed.
New object versus mutating the current object depends on convention - in Python, there is a "replace" function on the native string type that returns a copy (http://docs.python.org/library/stdtypes.html#string-methods); in C++, std::string::replace mutates the string in place.
I'm not sure what you mean by not mentioning what is passed in. Keep in mind: I have never programmed in Objective-C. I am going on intuition alone.
No namespace support is a bummer - it means you're going to end up with long identifiers all over the place.
That's one of the things I'm finding most annoying about learning Obj-C. The weird thing is that these extra long names make the language less readable to me, because I have to mentally diff very similar looking long strings.
The design philosophy behind much of Objective-C is described in Brad Cox's book "Object-oriented programming: an evolutionary approach" [1]. Unfortunately, I don't have a copy handy to give you a quote.
It's a technical requirement. Parameter types are not part of the method signature, so you CAN NOT define both [string append:string] and [string append:int] methods. You must give them different names, like [string appendString:string] and [string appendInt:int], in order to give them different signatures.
Really? If you compare Apple's documentation to Android's...they're worlds apart. Here's a good example: I'm an iOS developer, and have been since pretty much day 1 of the platform. I recently actually started playing around with the Android SDK, and environment.
I followed Google's supplied tutorial for creating a tabbar app, and asked one of the Android devs who works with me to come take a look at the finished result.
"Oh", he said, "that's deprecated now. We don't do tab bars like that any more. We use fragments instead". And sure enough, when I looked closely, Eclipse was telling me what I'd done was, in fact, deprecated and Google recommended a different approach.
Except I'd followed their main tutorial for Android, step by step. Compare this with Apple, who have consistently updated their documentation for each API release. How Google could possibly think it's a good idea to deprecate a major platform feature for fragments (a good thing) and not update one of the most popular tutorials on their site to reflect this (the 'Hello Views' tutorial) is beyond me.
This is hardly consistent even in Apple's world. I recently went through your exact exercise myself, except instead of tabbars in Android, it's Core Data storage in iOS.
The vaunted Core Data + iCloud integration is woefully underdocumented (actually, it's practically undocumented), with the main resource being a mega-thread on Apple's dev forums, where every few pages someone from Apple will chime in with ever more confusing suggestions, and yet never updated sample code nor docs.
I really don't think "shitty documentation" is a uniquely Android thing, nor is it something iOS has resolved.
Why does a comment about Apple's documentation need to turn into a holy war versus Android? The parent said that Apple "could improve" their documentation, not that it was worse than Google/Microsoft/IBM/whomever.
I never really worked with Android so I can't speak for it.
But I completely agree with you on the updated documentation; I just feel that sometimes I get a little lost when I'm trying to learn how to use a class or framework.
Comparing any other documentation to Google's is just too easy, since Google's is generally very poor. IMHO, the best documentation out there is still MSDN. Apple's documentation is fine, but still not up to MSDN level IMHO.
I'm going to disagree. Objective-C is an ANSI C derivative language. Programming for me at least isn't knowing the syntactical elements of a language but leveraging paradigms I know to exist from one language to the next. Objective-C, while it looks different, is really no different from most languages. It definitely shouldn't be your first language, maybe not even your second choice, but if you have a conceptual knowledge of programming languages and you're keen on diving into the deep end, there are enough resources out there that you're not going to drown. I love Objective-C for many reasons, but then again I equate programmatic choice to personalities, if that makes sense. Point being, don't be deterred. The iOS SDK is something else altogether, but like anything worth learning, learn by doing.
What you should probably decide for yourself is if this article makes it seem harder than it is. His conclusion about Automatic Reference Counting is on the money, but that's about it.
"When learning Objective-C, it's not just a language or a framework or a runtime or a compiler, it's all of these things". No is not. These are different.
If anyone is considering learning this language, there's a bunch of unsolved problems that frequently include writing new libraries.
It might be just me, but I've always found Objective-C's syntax very nice (compared to, say, C++).
It might be verbose, but you have a clear separation between C and Objective-C. If it doesn't contain "[]" or "@", my brain can just parse it as C code.
It's best to think of Obj-C method call syntax as a sentence, written in english, which happens to also be computer code. If you name your methods and variables succinctly and explicitly, the language is extremely readable and documents itself (assuming you know english).
Obj-C was the first language I learned after Python. I remember the 2nd month in, it went from being difficult to read, to extremely easy.
Obj-C code can be written terribly, like this:
NSString *someString = @"hello what's up?";
NSMutableString *anotherString = [NSMutableString stringWithString:@"I have more to say, don't I?"];
NSArray *stringArray = [NSArray arrayWithObjects:someString, anotherString, nil];
NSUInteger stringArraySize = [stringArray count];
Messy Obj-C code! Human eyes like simplicity, like right angles, and columns. Same code, more readable:
NSString *someString = @"hello what's up?";
NSMutableString *anotherString = [NSMutableString stringWithString:@"I have more to say, don't I?"];
NSArray *stringArray = [NSArray arrayWithObjects:someString, anotherString, nil];
NSUInteger stringArraySize = [stringArray count];
Takes an extra few seconds of typing, but goes miles.
Syntactic sugar (dynamic getters and setters using @synthesize, @property, allowing for dot-syntax accessors) is not new. Nor is garbage collection. Garbage collection is not available on iOS, but has been available for a long time on OS X [edit: and as noted below, is actually being deprecated in favor of ARC]. Objective-C 2.0 came out in 2006. Blocks, at this point, are not really new either. So I think it's incorrect to say that Apple is 'adding' these things.
If a newcomer checked out the online documentation for contentStretch they would find:
"Defines portions of the view as being stretchable. This behavior is typically used to implement buttons and other resizable views with sophisticated layout needs where redrawing the view every time would affect performance."
There are also a lot of good arguments as to why dot syntax is often NOT what you want.
For instance, someCALayer.frame will give you the frame of that layer based on its anchor point, position and bounds. However, you can't do myLayer.frame = someRect [edit: as pointed out below, you can do this -- but the results may not be what you expect].
The introduction of the 'simpler' dot syntax, in that example actually makes things harder for a new programmer.
So, I don't agree that syntax is why Objective-C is hard. Intimidating because of syntax, perhaps. But, once someone begins coding (IMHO) it can be one of the easiest languages.
My school taught Pascal in the intro to comp sci class. I found it incredibly difficult (well, maybe dull is a better word). I then taught myself ActionScript (late 1990s). I then taught myself Objective-C, and I have to say it really just took a Big Nerd Ranch guide and I was off and running. It takes years to become fluent, but I really think that once someone grasps the basics of Objective-C, over time it is one of the most intuitive languages.
I'm not saying that the information isn't available - I'm only saying that to someone new to the framework, it's hard to know what you don't know yet.
As a concrete example, you actually can do myLayer.frame = someRect. The results might not be what you expect, especially if the CALayer exists in a hierarchy already and if the anchor point isn't the centre of the layer, but how would you know that if you hadn't experimented already?
Right, it won't cause a crash -- just unpredictable results. My point was more that dot syntax doesn't always equate to easier coding;
In this particular case though, who would be playing with the CALayer class without ever having touched the documentation? The overview of view geometry (frame, anchor point, bounds, position) is second only after the Core Animation introduction in the docs. And it's pretty clear: when you get a frame from a layer, it is an implicit function of the anchor point, position and bounds -- but the frame itself is not stored (when you set it).
So, if you're using dot syntax to store properties throughout your code, and then you use it on the frame property, a casual reading of the code might lead someone to think that you could retrieve that value later and have it be the same.
>> who would be playing with the CALayer class without ever having touched the documentation?
How many hackers out there always read the entire manual before playing around? I'd wager most don't.
I agree that Objective-C is slightly schizophrenic about the dot syntax, but I would argue that dot syntax for getters and setters is easier for newcomers to learn.
I'm not saying one has to read the manual before playing around, but some things are simply not explorable without at least a little bit of introduction. You had to read somewhere that [[NSObject alloc] init] was how to create a new object; you didn't just guess at it. Similarly, how far would someone get creating a CALayer and adding it to a view without ever looking at the docs?
If someone is having a hard time learning Objective-C, maybe the real suggestion is to start by reading at least a bit of the manual.
Just to be clear, I was referencing the point in the original article where it said that apple was 'adding Garbage Collection' to make the 'code expressed in Objective-C simpler'.
The entire article is about how Smalltalk syntax is different from C syntax, something that any reasonably competent programmer gets over very quickly. And he gets it wrong.
I think it's very largely a question of what you're used to. I don't know much about Objective C, but given my knowledge of Smalltalk, the use of keywords to identify arguments seems entirely natural.
But then I've never really understood why people think it's acceptable for languages to insist that you do this:
myfunction("First argument","Does this one really go second?","Is there even a third?")
Anything you do not understand is inherently hard.
The only thing i would say is uniquely hard about Objective C is getting your head around some of the APIs, but then again that can apply to any language.
To me the hardest thing is the seemingly arbitrary CG functions in Quartz2D and why they're so interspersed throughout the code. If I'm writing within the UIKit most of the time, and then have to make my own UIView for some custom drawing, I have a hell of a time remembering how, and always have to reference my earlier code, which I usually find on StackOverflow or somewhere else.
Example, let's draw a line and an ellipse in drawRect:
CGContextRef ctx = UIGraphicsGetCurrentContext();
//CGMove the point somewhere?
//CGLineTo something
//CGDrawEllipseInRect or something
//stroke or fill? CGFillSomething maybe?
//do i need to end the context?
Maybe it's a mental block on my part, but I can _never_ remember how to do this and always have to look it up, probably because it's not part of the standard UIKit and not used on daily basis. I guess I would just like to see the Quartz stuff conform more to UIKit naming conventions (but because it's based in C, I understand why it's not)
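For the record, a minimal version of that drawRect: looks something like this (and no, you don't need to end or release the context you get from UIGraphicsGetCurrentContext()):

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);

    // The line: move, add a segment, stroke the current path.
    CGContextMoveToPoint(ctx, 10.0, 10.0);
    CGContextAddLineToPoint(ctx, 150.0, 90.0);
    CGContextStrokePath(ctx);

    // The ellipse: one call builds and strokes it in one go.
    CGContextStrokeEllipseInRect(ctx, CGRectMake(20.0, 100.0, 120.0, 60.0));
}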
Well, from my limited experience with Objective-C a few things made it hard.
The first is the traditional Cocoa pattern of a method that does useful things, which looks like this:
- (void)beautifullyNamedMethodFor {
void* ugly_ptr_type; // and around 45 more
CFObscurePtrRef* .. = CFObscureObsoleteFunction(NULL, NULL,.....); // 56 arguments
// to the callback omitted for brevity
...*....(*foo)...->(*x++);
// and so on - with 45 lines of NULL ptrs passed as void* to CF calls
// juggled and incremented ad absurdum until your eyes bleed.
}
So on the surface it's a beautiful Smalltalkish thing, while down below it's usually all hairy C, pointers and null-terminated strings and Core Foundation callbacks right out of MacOS 7 (especially if you want anything useful to be done that is not in Cocoa by default). This always seemed to me to be a deception in a way.
Another pet peeve of mine is the same agony of choice that is object variables (pointers versus values). When I want to return something or declare a variable, even when I am in the rose-tinted-glasses Cocoa world of beautifully-named methods, classes and keyword arguments I still have to put the dreaded death star in front of just the right things (and to remember NOT to put it in front of exactly proper other things).
So I guess for me the most problematic Objective-C part is the one that has to do with C (because it adds a level of complexities on top of C). The "Objective" part is actually very nice, once you get used to the call syntax and the brackets.
Hard is OK. Over time, you become better at it, until it's no longer a problem -- it's a relative thing. Not saying easy is bad, but hard isn't _that_ big of a problem.
Example: I find Russian very hard to speak. That doesn't mean Russian IS hard, I just don't know Russian. And some languages are harder (more stuff to learn) than others.
Not looking for trouble here but writing an iPhone app should be no more difficult than creating a Keynote presentation, imho. If Apple is the leader in document development (Keynote, etc), why can't they do the same with writing apps. Look at what they've done with the complexities of video editing. Where is the consumer grade development app for iPhone?
I've only learnt 2 programming languages 'thoroughly'. Pascal and Objective-C. I find Objective-C a much simpler to understand language than Java for example. I have worked in Java (although not extensively) and it just seems messy to me. Objective-C is much more human readable and better structured in my opinion.
The only item I agree with is the last item: Objective-C is a nebulous term because it's so intrinsically tied to Apple and Cocoa/UIKit. The syntax? You pick it up within a week. Message passing versus method invocation? An important, but subtle distinction.
So what makes Obj-C hard? For me, it was Apple's gigantic MVC-style framework. Rewiring my brain to grok Obj-C was nothing compared to grokking Foundation Kit, UIKit and AppKit. Growing up with C++, C# and Java, you get used to a particular way of doing things. APIs are designed and interacted with in a certain way. Apple's APIs feel completely different. From building strings and making network connections to working with images and animations, Apple's version just feels different.
I know Objective C, C++, Java and have basic knowledge of C#. Of all these I'd rate Objective C as the easiest to learn and to handle. It's much more forgiving and easy on the programmer, and the syntax is trivial.
You seem to make a big case of the message passing syntax, but your example is very poorly chosen. Rather than 'performAction:withTwoParameters:' it should be 'performActionWithFirstParameter:andWithSecondParameter:' as are most Cocoa methods. Named parameters may seem verbose but they are much more readable than 'performAction(param1, param2)'.
If Objective C is a "large" language, I wonder what you'd call C++ or C#. Huge ? Humongous ? If you think Cocoa is large and complex, the C++ Standard Library or the Java library will make you weep.
The C++ standard library is actually quite small. Check Herb Sutter's keynote at Going Native 2012 for an entertaining visualization of its size compared to the Java or C# standard libraries:
http://channel9.msdn.com/Events/GoingNative/GoingNative-2012...
The IO libraries are kind of a mess but I don't find the algorithms and containers significantly more difficult to use than similar implementations in other languages.
I love the flexibility and breadth of things you can do with Objective-C/C++. Its downsides mostly come from syntax: container class getting/setting is way too verbose, same with string operations. If there were a special syntax for just the string and container classes, large swaths of my code would be smaller.
UIKit view controller classes are also not flexible enough, and crap out in a lot of custom multithreaded operations when they shouldn't. I could reproduce the same behavior with my own classes (animations, transitions, view control, etc) using just basic UIView classes and it would work significantly better.
I find Objective-C hard as well, but it has more to do with the nature of the message passing syntax that objective-c has.
For instance, for the life of me I CANNOT get calls to NSNotificationCenter right on the first try, and there are no compiler hints to help you... it just doesn't work for some reason.
There are other items like that. Not much help when the app crashes, lack of Namespaces (class name collisions), etc.
Then throw in all the fun of submitting an app to the store. Working with iAds and In App Purchases will make you want to hurt something.
Rather than performAction:firstParameter withTwoParameters:secondParameter, which is kind of confusing, you should name it performActionWithTwoParametersFirst:firstParameter andSecond:secondParameter if you insist on calling your method performAction.
Really the only thing that makes Obj-C the language hard is that the C layer pokes through often enough that you really do need to understand C as well and C presents a lot of pitfalls to a new programmer.
Nah. Not hard. Different. People have a natural inclination to resist the new. My guess is that a competent programmer with OO experience should be comfortable with Objective-C within a week or two of study.
I know this is a short article meant to explain the basics, but I don't like how under the History section it makes no mention of NeXT or Stepstone, which were the creators of Objective-C.
Before I took the jump to learn Objective C and iPhone development, I dreaded the huge learning curve needed. Now that I'm on the other side, I LOVE the fact it intimidates people :)
Is MacRuby truly abandoned? If so, then I'm saddened. I just started experimenting with it recently and found it to be the answer to everything that frustrated me about obj c.
That's a deficiency of the iOS platform, not the ObjC language. It's just that ObjC is brittle and tricky enough that there's no longer much reason to learn it unless you're working on iOS (or possibly MacOS, if you think that'll stay relevant), though GNUstep is portable if you really want to.
Forget the fact that we're not even talking about methods, really, we're talking about messages (a distinction I'm not going to make) and you refer to selectors like the one above as performAction:withTwoParameters:. Most people don't care anymore.
Well, those people have fairly low expectations of themselves then. People say this bullshit about the supposedly strange Obj-C syntax, whereas the part that it's not C is basically 99% "Smalltalk in square brackets". Nobody complains that Smalltalk has a strange syntax, even small kids seem to use Squeak just fine.
in Python, or any language with named parameters. The syntactic differences are superficial. You don't even have to know about selectors and messages to understand the gist of what that invocation will do. "OMG, method parameters have a name" --well, big effin' deal.
Ever seen C++ (especially the recent standard)? Or the beast with 1000 features C# has become? Obj-C compared is leanness personified.
The ACTUAL source of complexity in programming in Objective-C is the huge Cocoa (et al) API. But a huge API, especially nicely documented as Cocoa is, and with such breadth and nice MVC design, is a GOOD THING.
This tightly-coupled co-design is unique to Objective-C. There are other languages that run on .NET, such as IronPython. Lots of languages use the JVM besides Java, like Clojure. Even Ruby and Rails are two distinct entities and projects. The only significant effort to use a different language with Cocoa/Cocoa Touch and the Objective-C runtime, MacRuby, was a project largely backed by Apple before they abandoned it.
Actually, there are several efforts besides MacRuby to use another language with the Objective-C runtime: Nu (Lisp like), F-script (Smalltalk like). But the main difference here is that the Objective-C runtime is a very simple runtime, not a full VM, so the comparison to CLR and Java is not that apt. For one, CLR was designed from the start to support multiple languages, and Java didn't have any major language targeting the VM until like 2004-5.
It's many things to many people. To beginners, its a friendly cub to play with. To enterprise coders its the beast of burden to carry things. To newbie web developers/existing c++ programmers its familiar face on a new road. It won't devour you unless you have a death wish.
withTwoParameters is actually part of the method signature. It's not a keyword identifier.
Sure, but it's not like that distinction is really important for the programmer. For most intents and purposes, he can just think of those as keyword identifiers
--or, better, as a kind of keyword identifiers that also have to be present whenever referring to the method.
> Sure, but it's not like that distinction is really important for the programmer. For most intents and purposes, he can just think of those as keyword identifiers
(I'll use lisp conventions, it's what I'm used to)
What you're saying is for coders, saying
(doit x y :withFloat 5 :key "string")
has no important distinctions with
[foo doit:x y:y withFloat:5 key:@"string"]
But keywords have defaults if they're missing. And furthermore you can have them in any order. So to implement the equivalent of the Lisp method above, I'd have to implement all of the following methods in Objective-C
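Roughly (a sketch extrapolated from the toy doit example): every combination of "optional" arguments becomes its own selector, usually funnelled into the most complete one, and none of them can take the arguments in a different order.

- (void)doit:(id)x y:(id)y withFloat:(float)f key:(NSString *)key;  // the full version
- (void)doit:(id)x y:(id)y withFloat:(float)f;                      // default key
- (void)doit:(id)x y:(id)y key:(NSString *)key;                     // default float
- (void)doit:(id)x y:(id)y;                                         // both defaulted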
For one, CLR was designed from the start to support multiple languages, and Java didn't have any major language targeting the VM until like 2004-5.
I can't agree with this. Jython and Rhino started in 1997. JRuby started in 2001. Those are all ports of fairly major languages, and all saw significant use before 2004.
The fact that he chose the fake method name performAction:withTwoParameters: betrays that he doesn't know what he's talking about. Under the Apple style guide you would describe the parameters next to the arguments in the method signature itself, and in this case would use something more like performActionWithParameter1:parameter2:.
You do this so that you actually know what you're putting into the damn function. As soon as you've typed 'object per' Xcode's autocomplete will have filled out the rest of the entire function, complete with parameter descriptions and little empty bubbles with the expected argument class for you to tab over and type your arguments into. This is indispensable if you have a method with a large number of arguments.
You know what I really dislike about Objective-C is that it's really inconsistent with properties and messages; at one point I said out loud: JUST PICK ONE! Coming from Python this is a big thing for me, I like it when there's only one right way to do things.
P.S. My only experience with Objective-C is with the iOS SDK.
>However, they're also adding to the language in ways that makes the code expressed in Objective-C simpler:
>Synthesizing properties
>Dot-syntax for accessing getters/setters
>Garbage Collection
>Blocks (closures)
>Automatic Reference Counting
>Weak references
Sorry, but none of these things make the language any simpler — they all add yet another style of doing things that only raises the bar and the learning curve for new developers when reading existing code, in precisely the same way that C++ and Perl have done. And this is true even of garbage collection (which, by the way, is deprecated in 10.8), because it needs to coexist with other frameworks and code that might not be garbage-collected, and more importantly because all heap-allocated C buffers consequently require their own low-level wrappers (e.g., NSAllocateCollectable, objc_memmove_collectable, etc.).
Also be aware, there is something called "Objective-C++" which is a superset of C++.
The syntax is not "weird" unless you just don't know the language. Acclimation is part of the learning process.
In my opinion, the syntax/language is pretty great. I enjoy ObjC greatly. My main beef with it is that it's less than portable. ObjFW attempts to solve this, and is a phenomenal framework already.