"We often see questions from developers that are asking from the Android platform engineers about the kinds of design patterns and architectures they use in their apps. But the answer, maybe surprisingly, is we often don't have a strong opinion or really an opinion at all." (1)
While that may have been a lofty ideal, in practice Android has many strict requirements on how you partition your code between Activity, Fragment, ContentProvider, and Service classes. Never mind testability and all the new semi-opaque / intelligent battery optimizations Android applies to your app.
After all these years, I still find the most difficult and unnatural thing is mixing concurrency / background tasks that must outlive the UI with complex UI component lifecycles. This is a frequent and necessary thing to do, and also quite awkward. The result is unnecessary complexity that easily permeates the code. Dianne says they have no opinions on architecture, but where I disagree is that concurrency is an architectural concern, and there are definitely many corner cases and snafus when mixing it with Android APIs.
> After all these years, I still find the most difficult and unnatural thing is mixing concurrency / background tasks that must outlive the UI with complex UI component lifecycles.
Completely agree. I've been developing for Android since 1.0, and the complex interaction between background tasks and the activity lifecycle is the worst part of Android; a significant majority of devs get it wrong, introducing subtle bugs. (And the worst thing is that the documentation pretends this issue doesn't exist, last time I checked. Newbie developers have no chance of getting this right, and even experienced devs often ignore it.)
Overall, I wouldn't say Android is poorly designed; it's just mediocre, and I would expect more from Google. In general, I'd say the design lacks simplicity and elegance. Some examples:
- Fragments. Activities were already overly complicated, and for some mysterious reason they took it to the next level with fragments. (They should have gotten rid of or redesigned activities too.)
- Older versions of the Google Cloud Messaging library: hundreds of lines of source code to implement a basic hello-world example (wth?)
- Documentation is unclear in some cases, promoting cargo-cult patterns such as view holders (zero effect on performance these days). Also, services: if you don't need IPC, the only "feature" of a service is that it lowers the probability that the app gets killed. I usually just create a simple service and start / stop it as a way of telling the system "don't kill the app".
> Overall, I wouldn't say Android is poorly designed, it's just mediocre, I would expect more from Google.
I completely agree, and although I only do it as a hobby, there are lots of points that many in the community feel as pain:
- They never managed to write a proper working emulator, while other companies had no problem doing so;
- The whole debacle of C++ support: had JetBrains not decided to create CLion, to this day Eclipse CDT would be deprecated without any official path forward for NDK users
- Speaking of the NDK, the four parallel paths to build applications (old ndk-build, experimental Gradle plugin, stable plugin with ndk-build, stable plugin with CMake), each with its own set of issues
- Dalvik's JIT/GC were always worse than the average J2ME / J2Embedded commercial offerings, regardless of how they used to sell the story of having to fork Java for performance
- Choosing to AOT-compile on device instead of doing it at the store like everyone else, thus leading to the half-baked anything-goes solution in Android 7 (interpreter in assembly, JIT with PGO, AOT when docked)
- Android Studio puts my computer's fans at full throttle, bringing back memories of WebSphere RAD, something that Eclipse CDT never did. And I am not a big Eclipse fan.
- Gradle needs a background daemon with 2GB of allocated memory to match Ant or Maven performance
- The Support Library releases are so well tested that they always require a minor update, because they always break something
- I do approve of the NDK being constrained due to security concerns, but being forced to use JNI to call libraries that are implemented in C++, like Skia?
- Forking the Java community with cherry-picked features from Java 6, 7, and 8. This will only get worse now that Java 9 and 10 will bring features that drastically change how the language is used (modules, linker, Graal, value types, type inference, ...).
There are plenty of other issues to rant about, these are just some of them.
They seem to just keep Android at some kind of "just good enough" for the masses. Literally the MS-DOS of the mobile world.
At the end of the day, dear Google is (now) still just a marketing company, however well they may treat the intellect they are hoarding.
Yes, I'm ignoring that Google was founded on some really awesome technological ideas, or at least a very clever assemblage thereof, but monetizing those ideas required becoming a marketing company. Preserving their value means extending that core business throughout what they do.
That said, our personal phone gear at home is Android, cuz I'm cheap. BUT, I'm really getting fed up with all of the damn notices that literally cause my phone to beep several times an hour for nothing. I love my iPad mini - not only can it do web, email and MP3s, it also makes a pretty good (MIDI) synthesizer :-)
> Choosing to AOT on device instead of doing at the store like everyone else
There are a few major problems with doing it at the store.
1: Google can only AOT for their devices as the AOT depends upon the specific platform version that the app will be installed on (as symbols & offsets change, of course). It's a substantial scalability problem.
2: The compiled version is substantially larger than the dex code. That's a non-trivial cost (in terms of $$$) to put on people in the majority of the world that doesn't have high caps and/or unlimited bandwidth.
3: Apps are signed by the developer, so the store can't recompile the app without breaking the certificate chain. And then the app won't be able to update anymore, and the trust flow back to the original developer is lost.
>Overall, I wouldn't say Android is poorly designed, it's just mediocre
I think it mostly has to do with the company's talent pool and focus.
As far as I can see (I may be wrong; this is my estimate), Google tries hard to devote its best talent to 1) ads and Search and their maintenance, 2) the Chrome team, 3) Google apps and services online and on iPhones.
After these, they try to develop Android as kinda (I don't know the correct word, so I am going with kinda) a side project, because they know the world needs an open platform to compete with Apple's devices.
The reason Android feels unacceptable coming from Google is that we have all used Google's best products, Chrome and Search, and we kinda (by default) think of Google as infallible.
But that's nowhere near the reality: they have a limited number of super smart people, and they try their best to keep the core products (Search and the browser) much better than the competitors'.
Most of the time when Google releases a new app, the iOS version is better than the Android one.
We would be okay if Android came from a mediocre company. But it doesn't; it comes from Google, which is the best in some areas. Yet they don't have unlimited talent, and they are trying their best to balance.
I am not saying the people who worked on Android are not bright; of course there are excellent people working on Android. But I am not talking about one, two, or three individuals; I am talking about the management perspective in a broader sense.
This is my personal understanding, I may be completely wrong.
I think you're right. Or at least, they're not devoting a certain kind of talent to Android.
Look at Google's efforts on web frontend - GWT, Polymer, Angular 1/2, embracing TypeScript, RxJS, and so on. In other words, they've devoted significant resources to high-level application patterns, emerging paradigms, and in general - improving developer experience. The Chrome devtools are another example.
On Android, not only is their policy to "remain neutral" on architecture and paradigms, but they don't even bother to update the platform enough to let the community take on the architectural work. The tooling is 1 step forward 2 steps back, new language features (or languages!) do not seem to be a priority, fixing fundamental design errors (multidex) is also not a priority, things like databinding are trotted out and then stagnate, and so on.
The only conclusion I'm left to draw is that Android is the new Windows. Why should Google care about developer experience on Android when it owns 70% of the market?
I wonder how much that has to do with Android originating from outside Google. Never mind that Google may well be Apple's biggest hardware customer (the Chromebook Pixel seems to have been an attempt at getting employees off their Macbook addiction).
There seems to have been something of a civil war right around the launch of Android Honeycomb/3.0, as that was also when Chromebooks happened.
All of a sudden you had two platforms, both angling for "landscape" devices.
It's been over a decade and over 10 major releases since Google acquired Android. If they still haven't managed to completely absorb it, culturally and technically, then I've underestimated just how dysfunctional Google is.
I seem to recall that he was the one who insisted that Android was for phones, and thus devices without a mobile radio were unable to get CDD certification.
> As far as I can see (I may be wrong, this is my estimate) Google tries hard to devote its best talent to 1) ads and Search and their maintenance 2) the Chrome team 3) Google apps and services online and on iPhones.
I don't know where you would insert WebApps in this list, but it looks like they are taking bigger portions of the Google keynotes lately. It seems Google is pushing to reach the point where WebApps can substitute for native apps: you can skip app stores (manifest), run apps in the background, and synchronize and receive push notifications (service workers). They have dedicated talented people designing the specs (like Jake Archibald), and they are early adopters of them (Chrome and Android).
WebApps are probably the greatest threat to native apps. Apple resists them hard (imagine the profit loss if anyone could bypass the App Store and its commission, e.g. Amazon).
WebApps can't be used for high-performance applications like games, music, or VR. But they can definitely pull talent and attention away from the native ones.
There are so many articles and news stories about it. It is a completely rational decision: Google, like any other company, ultimately wants to maximize its profit, and the iPhone is where the money is (Google it; something like 84% of the profit of the whole market goes into Apple's pocket, I don't remember the exact number).
I used an HTC Desire around 2011 (I don't remember the exact model name) and my gf was using a Samsung Note around the same time; the experience was a disaster. No updates, bloated software, no bug fixes, no security fixes. Nothing; pure marketing and a pure rip-off. After a year I switched to the Nexus line and my gf to an iPhone. We haven't thought about buying anything else since then.
And right now, with this update problem Google is imposing on the Nexus line, my last hope for Android is dying and I am considering buying an iPhone. I switched from a Nexus 4 to a Nexus 5, and it is unacceptable to me that a device as capable as the Nexus 5 doesn't get the N update. Why would I use Android when I get better support and longer-term updates with an iPhone for almost the same price? (I could have bought an iPhone 5 for $100-150 more when I bought the Nexus 5. Which was a total mistake.)
And remember, developing for Android is what I do for a living. So that's how enthusiastic I am about Android's future.
I have really high hopes for the Windows 10 phone; their Continuum can be a game changer. And don't forget Google is not a software company, they are an IT company. But Microsoft is different: they are the biggest, most successful software company in human history. They know how to develop software. The problem behind their failures was a bad manager (Steve Ballmer), which is now solved. I hope they can compete with the iPhone with their Surface phone line. We will see how it pans out.
This is not a problem with Android or Google. The only way you can avoid this update problem is to convince ARM SoC manufacturers to standardise their SoCs instead of cooking their own soup every time. I think this is never going to happen.
Apple doesn't need to standardise. They can get away with just supporting their own devices.
Even if the SoCs are standardized and drivers are updated, do you think the manufacturers have any motive or incentive to provide updates for older models? Most of them sell phones at low prices with very thin margins, or at a loss, and rely on releasing newer models more often to keep up sales.
Even with standardization, hardware differences will necessitate testing on older devices. That does not come for free for the manufacturers, and it has been, and is, seen as an entirely avoidable and unnecessary expense (which is one big reason they don't provide updates for longer right now). Are the majority of customers even capable of dealing with rooting the phone or flashing it with some untested firmware? The phone makers and customers are used to not having updates for a long time, and neither of them really cares (if you look at the majority non-tech crowd). Why or how would all these other dynamics change just because SoCs get standardized (if that happens)?
If one gets the driver side sorted, it would be easier for Google to push for a scheme similar to Windows'.
On top of that, Google could implement a standardized UI theming system, allowing companies to give their products a distinct experience without having to muck with the code as much.
>By the way, I think Android UI works much better than iOS's.
You should be specific: what about Android's UI is better than iOS's?
If you are saying Android's UI design is better than iOS's, I agree, but remember that Material Design is not Android's design. Google wants to expand to every platform, and it would be quite childish to think they could expand to every platform without a professional UI design language of their own. They designed Material Design for all of their products, not just for Android. Yes, Android was the first to adopt it, and I really like Material Design. So if you are talking about Material Design, it was not only for Android; it was actually Google's design language, and I really like it.
But if you are talking about performance and how the UI fits together, I disagree 100%. For example, they didn't have splash screens for a long, long time (if I remember correctly they adopted them a year or two ago); every time you opened an app, you noticed a blank screen, sometimes for quite a long time (1-2 seconds), which was quite ridiculous. On the other hand, iOS has had splash screens for a long time. Some third-party apps tried to develop splash screens of their own, but almost all the time the result was not on par (not even close) with the iOS counterparts.
Right now their rotation animation has problems that can show up under load, and there are so many other things I don't even remember right now.
I am not saying Android is bad or anything; Android is a wonderful product. But let's be honest: it is not a core product for Google like Chrome is, and it has huge problems. Its main advantage is its openness.
One of Android's big advantages is activities and intents. They allow applications to cooperate in a way that's not possible on the iPhone. Even the default-app concept is based on intent handling, which is why I can use Firefox as my default browser, Nine as my default mail app, and Sygic as my default navigation, and send stuff from other apps over Threema. On the iPhone, I would be stuck with Safari / Apple Mail / Apple Maps / iMessage no matter what others offer.
This also affects integration from third parties. There's no reason for an app to support just Dropbox when there are intents that all the other services support, unless you want to artificially limit the integration, or you came from iOS and are not used to that.
I don't really think so.
I never saw micro stuttering/slowness on the iPhone.
On the Nexus 5 it was a shit show.
I can't really see how anyone who owns both an iPhone and an Android phone could tell an outright lie of this level.
I used to think Activity lifecycle management was complex. Then I learned about Fragments, and I started to loathe every moment I have to work with them. What a bloated mess.
I started making a semi-complex Android app as my first project. I saw the recommendation to use fragments, and I have never looked back because I am way, way too far down the rabbit hole. Their main advantage to me is perhaps nothing; I have no idea and no real way of knowing. But they have tripled my code base, so that is a good thing!
And then there are nested fragments and all the caveats associated with them, but it's all good; it just requires some simple fixes anyone could come up with off the top of their head, like http://stackoverflow.com/a/23276145/168719 (the top-voted answer).
It's good, because it doesn't require you to use the workaround of deluding the user with a SCREENSHOT of your layout (sic; see the accepted answer above).
I'm pragmatic about them and they work out. It takes some discipline, but in exchange you get to use a standard back-stack manager with more mindshare than any third-party replacement like Flow will ever have. My personal guidelines are:
- Only use them in a single form: FragmentTransaction add/remove/replace, no fragment tags in XML to cause inconsistent behavior
- Represent state with a single AutoValue + AutoParcel object and save it with Icepick so they work with process death automatically
- Implement all interactions with other threads via RxJava so I can unsubscribe and avoid "after onSaveInstanceState" exceptions
- Use custom view groups instead of nested fragments, since nested fragments tend not to use the backstack anyway
Makes sense, that's a solid approach. No nesting or any other nonsense. But clearly it takes quite some bootstrapping in order to make this functionality solid (as it should be out of the box)...
Fragments follow the general story of the Android framework: it tried to be too flexible, introduced unneeded complexity for everyone, and suddenly the flexibility is of questionable usefulness.
Some historical context: fragments came out with Honeycomb, the first version of Android specifically for tablets. Fragments were meant to be a way to write UI that both tablet and phone layouts could use; that's the mysterious reason for the extra layer of complexity. I think at this point many people don't even use them unless they specifically need to for shared tablet/phone/TV layouts. There are also alternative patterns that have emerged that are less complicated.
Also, they envisioned tablets to be used mostly in landscape. That's why we got everything on a single bar at the bottom.
The basic concept behind Fragments was pretty much the same as WebOS' Enyo framework. I recall one of the big Enyo demo moments was when a browser window was resized, and the email app ui went from multiple columns to a single column, and back, without missing a beat.
Then again, I wonder if Android was partially designed, before fragments, to rekindle the experience of using Apple's Newton tablet, in that one app would extend the functionality of another app in a virtually seamless manner via intents.
But as best I can tell, Android left too much of the plumbing in the hands of the app devs. Thus, for every app that handled cross-app intents properly, you got 100+ that would jump back through the in-app activity history rather than roll back the intent chain.
And with fragments you got an in-app way to do the activity-history backtracking, while still having the intent-focused back button of the Android UI.
> Overall, I wouldn't say Android is poorly designed, it's just mediocre, I would expect more from Google. In general, I'd say the design lacks simplicity and elegance. Some examples:
Android is just one of those Big Corp projects where you hastily throw enough shit at the wall and see what sticks. It wasn't "designed"; it's just the way it came out.
Are you talking about Services and Activities or something else? I never had any problems with Services and Activities, connecting to them, using them, etc., but I can see how an inexperienced dev could get this wrong.
The same problem goes for not understanding detached threads in other languages (e.g. keeping a reference/pointer to them and attempting to use them when you have no idea whether they've finished), but that's a newbie problem. I wouldn't say the language or the design of threads was at fault if I was using them wrong.
I've found that the best way to deal with this complexity is to ensure that any concurrent work is done in a fire-and-forget style. Concurrent work is submitted to an IntentService (possibly backed by a thread pool instead of a single worker thread to speed things up), and if it needs to talk back to the UI it does so via a local SQL DB or a similar construct. That way the UI can die and get restarted independently of the worker service, and the worker service can update UI state without having to care whether the UI exists.
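The same fire-and-forget shape can be sketched in plain Java (the class and method names here are made up for illustration; on Android the worker would be the IntentService and the store a local SQLite DB rather than an in-memory map):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FireAndForgetDemo {
    // Stand-in for the local DB that survives UI restarts.
    static final Map<String, String> store = new ConcurrentHashMap<>();
    // Stand-in for the IntentService's worker thread.
    static final ExecutorService worker = Executors.newSingleThreadExecutor();

    // The "UI" submits work and forgets about it.
    static void submitFetch(final String key) {
        worker.execute(() -> {
            String result = "payload-for-" + key; // pretend network call
            store.put(key, result);               // worker writes to the store, never to the UI
        });
    }

    public static void main(String[] args) {
        submitFetch("profile");
        worker.shutdown();
        try {
            worker.awaitTermination(1, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // A freshly (re)created "UI" just reads whatever state exists.
        System.out.println(store.get("profile")); // prints payload-for-profile
    }
}
```

The point is the direction of the data flow: the UI only ever submits work and reads state, so neither side needs to know whether the other is currently alive.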
This looks like absolutely the best approach, but there's one thing I don't like about it: communication between IntentServices and the UI thread feels so... wasteful. The sanest way to do it, I think, is keep communication to a minimum, relying solely on the local DB as a source of information. But then every operation you do to fetch remote data immediately results in at least a couple of DB operations (store it, read it) and a ton of serialization. As far as I know, there's no simple way to pass an object from the service to the UI.
> As far as I know, there's no simple way to pass an object from the service to the UI.
They are in the same process, so you can communicate between them through all the normal Java mechanisms. For example, your UI can just register and unregister a callback on the service directly in its start/stop methods (or in onVisibilityChanged if you'd rather do this in a View instead).
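Stripped of the Android classes, that register/unregister dance is just an observer pattern. A minimal plain-Java sketch with hypothetical names:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CallbackDemo {
    interface Listener { void onResult(String result); }

    // Stand-in for a bound, same-process service.
    static class Worker {
        private final List<Listener> listeners = new CopyOnWriteArrayList<>();
        void register(Listener l)   { listeners.add(l); }
        void unregister(Listener l) { listeners.remove(l); }
        void finish(String result) {
            // Nobody registered? The result is silently dropped.
            for (Listener l : listeners) l.onResult(result);
        }
    }

    public static void main(String[] args) {
        Worker worker = new Worker();
        StringBuilder ui = new StringBuilder();
        Listener screen = ui::append;  // the "UI" callback
        worker.register(screen);       // e.g. in onStart
        worker.finish("done");         // delivered while the UI is listening
        worker.unregister(screen);     // e.g. in onStop
        worker.finish("ignored");      // the UI is gone; safely dropped
        System.out.println(ui);        // prints done
    }
}
```

On Android the register would typically live in onStart/onResume and the unregister in onStop/onPause, so the callback can never fire into a destroyed UI.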
Oh, that's better then. The documentation always points you towards complicated communication mechanisms, but I suppose that only applies to services running in different processes. Thank you for the clarification.
I agree; that's why I don't use fragments at all and do all my multithreading in the style of communicating sequential processes: BlockingQueue communication with hand-written background threads / services.
The trick with Android development really is to acknowledge that large parts of the SDK are just crap and that you are better served writing custom code than trying to use them.
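A minimal sketch of that CSP style, with nothing Android-specific: a hand-written background thread blocks on a BlockingQueue inbox and answers on an outbox. (A real worker would loop over many messages; this illustration handles a single round trip, and the names are made up.)

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CspDemo {
    // Sends one job to a hand-written background thread over a BlockingQueue
    // and blocks on a reply queue until the result comes back.
    static String roundTrip(String job) {
        BlockingQueue<String> inbox = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> outbox = new ArrayBlockingQueue<>(16);

        Thread worker = new Thread(() -> {
            try {
                String msg = inbox.take();  // blocks until work arrives
                outbox.put("done: " + msg); // hand the result back
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        try {
            inbox.put(job);       // the "UI" side sends a message...
            return outbox.take(); // ...and waits for the reply
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("resize-image")); // prints done: resize-image
    }
}
```

The appeal of this style is that the only shared state is the queues themselves: the two threads never touch each other's objects, so there is no lifecycle for one side to observe on the other.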
I agree completely. After a while I abandoned fragments completely and started using custom views everywhere. Life got much easier and I kept waiting for the other shoe to drop - decreased maintainability, performance, robustness... something would bite me in the ass for "going my own way." But the shoe never dropped, life did get easier. Lesson learned.
Thanks for sharing your experience with using custom views only. These views would then simply act as the view controller, I guess, without much presentation logic. I always had this in my head, but I never tried it out so far. Do you go so far as to use only one Activity per app?
No I have several activities, but they're very coarse-grained. Similar to how a large SPA may consist of a grand total of 3 or 4 pages, and routing works within pages, not just between them.
Also, I'm currently using this technique with Xamarin, which opens up some techniques that would be hard to pull off with native. So in this case I use an Angular-style MVVM pattern, so the custom views are all bound to corresponding viewmodels. So the custom views all share one small piece of code that takes care of some binding-related plumbing, and otherwise they're nothing but AXML with binding statements in it. This adds up to a rather pleasing approximation of a web-frontend-style component-based architecture. Xamarin should really be more popular!
> After all these years, I still find the most difficult and unnatural thing is mixing concurrency / background tasks that must outlive the UI with complex UI component lifecycles. This is a frequent and necessary thing to do, and also quite awkward.
This is being fixed. The problem is that it is political as well as technical. The solution went from GCMNetworkManager to JobScheduler to the now-recommended Firebase JobDispatcher (https://developer.android.com/topic/performance/scheduling.h...).
P.S. Firebase was a newly acquired startup at Google.
Firebase JobDispatcher should be able to take care of your concurrency issues in a power-efficient way.
"After all these years, I still find the most difficult and unnatural thing is mixing concurrency / background tasks that must outlive the UI with complex UI component lifecycles."
This and the weaknesses of AsyncTask are the reason why RxJava has been adopted so quickly on the Android side. Event buses are dead or dying.
With that said, Rx is still a moving target, and it seems to attract a proliferation of redundant but slightly different operators that scare away newbies. Observables, schedulers, and map/flatMap/filter are immensely useful to any old-school Android developer.
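For readers who haven't seen the operator vocabulary: Java 8 streams share the same map/flatMap/filter ideas (this is only an illustration of the concepts, not Rx or Android code, since Java 8 streams weren't available on most Android devices at the time):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class OperatorDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("rx", "java", "on", "android");
        List<String> letters = words.stream()
                .filter(w -> w.length() > 2)              // keep "java" and "android"
                .flatMap(w -> Arrays.stream(w.split(""))) // expand each word into letters, flattened
                .map(String::toUpperCase)                 // transform each element
                .collect(Collectors.toList());
        System.out.println(letters); // prints [J, A, V, A, A, N, D, R, O, I, D]
    }
}
```

The Rx versions add the asynchronous dimension (results arrive over time instead of from a list), but the mental model of chaining small transformations is the same.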
Just like the web platform, Android is hugely improved when you leverage the libraries and tools the community provides. Gradle actually helps a lot with that, since its clear dependency management makes using external libraries a breeze.
If in 2016 you're still complaining about AsyncTask and its management, you certainly have missed a lot of progress in the last few years. It's not unlike people complaining about JavaScript issues without ever looking at jQuery, React, or newer tooling.
The most useful things you can do for yourself currently are:
* Adopt Kotlin as a language. It's not all that different from Swift, comes with IDE support and a tiny standard library, and really fixes the pain of Java 6.
* Use MVP patterns and friends. There are a few libraries that take most of the pain of lifecycles away.
* Use RxJava, Retrofit, Glide, and other libraries that make your life easier with concurrency. A lot of these tools are better and easier to use than what even exists on iOS. Using AsyncTask in 2016 is just silly; it was never a good API.
* Use Gradle! Driven by its scripting language, you can do a lot to script and automate your build.
Other than that, I agree, after years of development:
* Some Android APIs just aren't well thought out.
* Gradle badly needs performance improvement.
* State of NDK is just sad. Fixable, but sad.
* MultiDex is the result of a very, very dumb decision in Android 1.0, and it's going to hurt us for a long time :/
All in all, I honestly think the author didn't really look into Android enough to have the complaints he had. The blog post is somewhere on the level of complaining about IE6 JavaScript bugs when we've already moved on to React and are dealing with very different issues.
(And no I'm not saying Android is great or even a great platform. It's not. It's just that, just like on Web, you can do a lot to fix major pains.)
> It's not unlike having people complain about JavaScript issues without ever looking at jQuery, React or newer tooling.
But this is exactly what makes being a JavaScript developer a miserable experience.
And I can give Javascript some benefit of the doubt since it's a multi-vendor language with complex standardization processes and all the stakeholders have their own interests and limited financial backing. By contrast, Android is a single vendor platform with practically infinite financial and labor resources behind it.
It's just a mess. I'd expect a relatively painless experience: install the dev tools, start a new project, build, and go! But no, it's jumping through hoops; it requires understanding a myriad of different SDK/platform versions and constant churn keeping your apps up to date with new versions while trying not to break backwards compatibility.
And a big part of this mess comes from the fact that device manufacturers and mobile operators are unwilling to keep old devices up to date (and Google can't force them, though I think they should with some kind of licensing contract), leaving customers exposed to security flaws while keeping developer churn high with multiple versions to support.
You can open Android Studio, create a new project, and go! It'll work. Then you can add a single line to your dependencies and get even better APIs!
I also have no idea why you are mixing device updates into the developer-tooling argument. New apps are (should be, and the tutorials tell you so) developed with API 19+ in mind, which means you'll have to work for quite a while before you hit any problems with fragmentation and updates. You can even start development on API 21, ditch the support libraries altogether, and STILL not shed a significant amount of userbase.
I believe you, but Android Studio has come a long way since.
I agree it was abysmal at first: when it didn't crash, every new update would break compatibility and your projects wouldn't build anymore, etc.
Right now it's pretty decent, considering, although I'm not a fan of Gradle either.
By the way, even though it's a fork of IntelliJ IDEA, it's ultimately shaped by Google, which has a sort of tradition of releasing alpha-stage software into the world. Remember the first releases of Chrome? I do; they crashed like crazy and lacked basic functionality such as printing.
> It's just a mess. I'd expect a relatively painless experience: install dev tools, start a new project, build and go! But no, it's jumping through hoops, requires understanding a myriad of different SDK/platform versions and a constant churn in keeping your apps up to date with new versions while trying not to break backwards compatibility.
> And a big part of this mess comes from the fact that device manufacturers and mobile operators are unwilling to keep old devices up to date (and Google can't force them, while I think they should with some kind of licensing contracts), leaving customers exposed to security flaws while keeping the developers churn high with multiple versions to be supported.
Is it only me, or are there tons of similarities to C# and the .NET environment? More and more I feel Google just ends up with the same solutions MS came up with in order to handle the diversity of vendors using its platform and their conflicting interests.
ASP.NET used to be difficult to configure and deploy (basic defaults were pretty bad for real apps, among other things), but even then I'd say the alternatives at the time (EJB, Spring, etc.) were significantly worse. For all other uses, I've had zero tooling problems. Oh sure, there were times I painted myself into a corner with regards to my app design, but that will happen in any language. I've never found any platform as easy to get up and running as .NET.
Hey, I'm a JavaScript developer and my experience is far from 'miserable'. Anyway, I see your point, and it makes sense. I think Node and npm helped a lot to actually achieve the flow you mention ("install dev tools, start a new project, build and go!").
I think you just proved the point of the author. I haven't done any app development yet, but from how you describe it, it seems to be as hard to keep up with it as with the JS ecosystem. No wonder then that the author has trouble keeping up with two entirely distinct ecosystems, and chooses to focus on only one.
(That said, I'm very glad that this is the top-voted comment, since it looks like a very valuable resource for lurkers who want to get into the topic.)
IMO it's really not (but I might be biased by experience) - the library churn is nowhere near the speed of the JavaScript world. All the libraries I mentioned are a few years old, battle tested and any larger conference or a resource (Android Weekly, Fragmented podcasts, etc.) will have numerous articles and talks on how to use them.
The only exception is Kotlin as a language, which is pretty new and the tooling is still improving. You're not going to hurt TOO much if you stay with Java6 though.
Remember, at the end of the day we're all just old Java farts and we like our stability ;)
I'm not super familiar with the Java world. Will eventually adopting Java 8 improve the experience developing Android apps where you might not need Kotlin?
Uhh, is there a platform out there that isn't greatly improved by community libraries? At PSPDFKit we use a lot of additional third-party tools and libraries on iOS as well, C++ practically needs community libraries to be usable, Python's biggest strength is its excellent OSS library community, Ruby as well...
I mean, yeah, in a perfect world we'd all have everything neatly packaged by Google, we'd write 6 lines of code and have the greatest app ever. But software development has never really worked that way for me and I have a feeling it never will :/
I'm not sure there's a meaningful distinction here. Programming on any platform will be greatly improved by using good (preferably open-source) libraries. What difference does it make who made them and why? It's such a strange nitpick.
Difference is at the beginning. When I started using Python I absolutely loved that stdlib covered almost everything I thought I could need. These days packages I use mostly come from elsewhere, but at the beginning it was great.
I'm fine with Javascript development these days because I built my fundamentals years ago. I am not sure how comfortable I would be otherwise and when I meet people who are where I used to be, they often look lost.
I am also thinking about learning Android development and don't really know where to begin.
Because outside the startup world, in many enterprises developers are only allowed to use sanctioned libraries outside what the platform owner advises as best practices.
So everyone suffers when anything more complex than hello world already requires third-party libraries, especially ones that aren't even acknowledged by Google.
Regardless, it doesn't reflect on the language/os/ecosystem/platform/system/library/runtime
If a company wants to vet all of their software, good on them! But that doesn't mean that anything that's not vetted by them is somehow worse for the majority of users, just that they haven't vetted it yet. So any arguments about how "library x" isn't usable by them won't apply to the vast majority of people, because most don't have that issue, or have no problems vetting that library as well.
Somewhat off topic: is there a word that encompasses the language, platform, runtime, etc.? I hate the word "ecosystem", but it seems like the best bet. And using any one of the others brings out comments of "well it's not a language" or "it's not a framework it's a library" or something else.
If you're using N separate libraries, that's N times the likelihood of breaking changes that force you to migrate at inconvenient times, and O(N^2) opportunities for bugs caused by incompatibilities.
Two libraries can misbehave because they share a common dependency.
If you assume a roughly logarithmic number of shared dependencies per pair, that's O(N^2 log N) chances for a bug. But in some communities I'm sure even that is optimistic.
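The quadratic growth is easy to make concrete: with N libraries there are N*(N-1)/2 pairs that can potentially interact badly. A quick back-of-the-envelope sketch (class and method names are made up for illustration):

```java
// Back-of-the-envelope: with N libraries, every pair is a potential
// source of an incompatibility, so the number of interactions to worry
// about grows as N*(N-1)/2, i.e. O(N^2).
public class DependencyMath {
    static long pairwiseInteractions(int n) {
        return (long) n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        for (int n : new int[]{5, 10, 20, 40}) {
            System.out.println(n + " libraries -> "
                    + pairwiseInteractions(n) + " pairs");
        }
    }
}
```

Going from 10 to 40 libraries takes you from 45 to 780 pairs, which is why the "just add a library for it" habit eventually bites.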
Python's biggest use case is as a scripting language - you don't want to bother with dependency management and all that stuff when you're writing scripts.
When you have a more complicated project it's fine to rely on repo packages (Python does this as well; e.g. web frameworks like Django aren't in the core). In fact the Python std lib is hugely inconsistent stylistically as a result of needing to stay stable across releases: people expect it to be backwards compatible, and package versions are tied to language versions. If it had proper versioned dependencies, doing breaking refactorings over the years would be really easy and non-intrusive.
Apple didn't come up with dependency/package managers like Carthage or Cocoapods, the community did. There's just enormous amounts of OSS libraries provided by the community too, with ones like Alamofire smoothing out the rough edges of iOS's networking APIs.
The difference to me is that under iOS, you can go without these things (third party libs, Cocoapods/Carthage) and still be OK — you'll be at a relatively minor inconvenience, not ripping your hair out. Particularly with your example of Alamofire/AFNetworking, there's not a ton of functionality you're missing out on if you instead use NSURLSession, since they're both just light wrappers around it. This wasn't always true, but Apple recognized the gap in functionality and fixed it.
I have multiple personal Obj-C/Swift projects on iOS and macOS that don't use any kind of dependency management and use very few third party libraries and working on them has never been an issue or source of pain.
Actually, I make a point in my iOS projects to avoid third party libraries wherever possible. The first-party SDK is good enough that the dependency cost of libraries like Alamofire is usually not worth it.
I had to do some Android development not too long ago, and I felt like I had entered some crazy la-la land until I found a bunch of third party infrastructure like Kotlin, RXJava and a pile of libraries to make life bearable.
DRY can be taken to extremes. It is sometimes better to write code yourself than to tie yourself to and depend on some library by "some guy". The whole npm "left-pad" debacle is proof of that.
IMO, this is exactly right. Not only is the quality of APIs often a disaster, the recommendations/best practices that Google is broadcasting have been way off in the past.
Remember that Google has advocated AsyncTask in the past. You could use something else or roll your own, but it doesn't make sense to do that when AsyncTask is being communicated as the way forward. They should've embraced third-party libraries, but instead they still communicate how to use AsyncTask. Example: in March 2016 they published a video on how to use AsyncTask [0]. They do highlight the red flags around it, but do not mention any of the third-party libraries that are already considered standard by a lot of Android developers.
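For context, the pattern those third-party libraries push you toward is roughly "background work owned by something longer-lived than the UI". A minimal sketch of that shape in plain Java (not the Android APIs; all names here are made up):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: background work is owned by a long-lived executor, not the UI.
// The "UI" only holds a Future it can poll or cancel around lifecycle
// events, instead of the work dying (or leaking) with an Activity, which
// is the classic AsyncTask failure mode.
public class BackgroundWork {
    private static final ExecutorService pool = Executors.newFixedThreadPool(2);

    // pure formatting step, separated out so it is trivially testable
    static String formatUser(int id) {
        return "user-" + id;
    }

    // pretend this is a network call running off the UI thread
    static Future<String> fetchUserName(int id) {
        return pool.submit(() -> formatUser(id));
    }

    public static void main(String[] args) throws Exception {
        Future<String> result = fetchUserName(42);
        System.out.println(result.get()); // prints "user-42"
        pool.shutdown();
    }
}
```

Libraries like RxJava dress this up with composition and scheduling operators, but the underlying design choice is the same: decouple the work's lifetime from the UI's.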
The same goes for Fragments. They are still being advocated as the right way, but I have my doubts. I imagine they will be advocated against in a couple of years. They help at the time, but are far from logical/simple when you don't know all of the nooks and crannies (and there are quite a few).
Another API that needs serious work is the Storage Access Framework (SAF). Previously Android applications could use the Java File API to access files. With recent Android versions this has been closed off, with good reasons and intentions. Instead of using the File API directly, you now need to use SAF. SAF doesn't support all the operations you could do with the File API. I would say this is a regression, and existing applications that relied on these features are now broken and unfixable. In addition, applications now need to explicitly ask the user for permission to specific directories or files. This was so badly implemented that every file manager needed to instruct the user on how to use Android's directory picker using screenshots before showing the picker. This is still a problem, and even if it is fixed, it will remain a problem for years due to phones being unable to update.
To give you an impression of how those APIs are designed: many SAF calls will return null to indicate something went wrong. No exceptions or error codes. There is often no indication why a certain file cannot be retrieved. It could be non-existent, it could be denied permissions, it could be some strange behavior in the ROM. Sometimes the only way to find out why a call returned null is to look through Android's global log. I've implemented this in the beta of my app so that I could find out why files weren't accessible on some devices.
Also, because of the slow adoption of Android versions, your application needs to support both SAF and the Java File API. The support library has wrappers that make the Java File API work like SAF, but it's still a downgrade to use this API, as exceptions from the Java File API are just swallowed [1].
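In practice you end up writing a defensive wrapper that turns those nulls into exceptions that at least name what failed. A plain-Java sketch (the map stands in for the real, Android-only SAF lookup; everything here is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the defensive wrapper you end up writing around null-returning
// lookups: turn null into an exception that at least names what failed.
// The map stands in for the real SAF call, which cannot tell you *why*
// it failed -- missing file, denied permission, ROM quirk.
public class NullToException {
    static final Map<String, String> documents = new HashMap<>();

    static String openDocument(String uri) {
        String doc = documents.get(uri); // SAF-style: null on any failure
        if (doc == null) {
            throw new IllegalStateException("Could not open document: " + uri);
        }
        return doc;
    }

    public static void main(String[] args) {
        documents.put("content://docs/1", "hello");
        System.out.println(openDocument("content://docs/1")); // prints "hello"
        try {
            openDocument("content://docs/2");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

It doesn't recover the lost cause information, but at least the failure carries the URI instead of surfacing as a NullPointerException three calls later.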
For me Gradle is actually a step in the right direction. Builds seem to be more reproducible and do not fail randomly compared to the Eclipse/Ant days. The performance is, however, awful. Incremental builds take 30-60 seconds. Comparing this to building pure Java projects shows that it is an Android-only issue and not one with Gradle itself.
This is also very much the case with iOS. If you just stick to what Apple provides you're going to suffer a lot more than necessary. Libraries and tools like Snapkit, R.swift, Fastlane, and Alamofire sand off so many rough edges on the APIs and Xcode.
Trying not to add dependencies after being burned in the past (cocos2d-swift disappearing was a particularly harsh one), but they all look interesting. R.swift in particular I might have to add. Autocomplete and compile time checking of resources is something I wish was more common in other environments.
R.swift is brilliant. Apple should just write them a check and make it part of Xcode instead of playing cute games with rendering tiny images inline with code.
"All-in-all, I honestly think the author didn't really look into Android all that much to have complaints that he had."
The author states pretty plainly the biggest reason for leaving Android development was he didn't have time to keep up with all the developments in both Android and iOS communities.
Then he's not a very good programmer. This is technology: at the rate it's advancing, even tools from 5 years ago are out of date. You have to constantly keep up with the latest advances in programming if you want to stay relevant.
The gradle and multidex complaints are spot on. Even as an enthusiast developer making my own personal apps for myself, I see this as true. I'm ok with Fragments, Rx makes everything nice.
Agree.
I was expecting the author to lament about something like Java vs Swift.
Even that would not be the best argument, since you can write your Android apps entirely in Kotlin and it is quite similar to Swift.
A real downside of Kotlin is that it complicates your build (but only the first time you have to set up a complex project with Kotlin, Dagger, databinding, etc.) and that the tooling is not on par with the base Java one.
Multidex? It is far from perfect, but it is not a day-to-day issue. I work on a 200k-method app and multidex actually makes it almost entirely painless.
Gradle? Nobody can seriously say that builds have gotten worse with Gradle. Before it, we had no way to hook up libraries with resources (aar). Now we just declare 'libname:version' as a dependency.
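For reference, a minimal sketch of what that looks like in a module's build.gradle (the coordinates and version are illustrative; older versions of the Android Gradle plugin used `compile` instead of `implementation`):

```groovy
dependencies {
    // an .aar library ships code *and* resources, resolved from a Maven repo
    implementation 'com.squareup.picasso:picasso:2.71828'
}
```

One line, and the library's classes, resources, and manifest entries are merged into your build, which was simply not possible in the Eclipse/Ant days.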
The complaint about build times is very fair though. The solution is easy, though: just use a high-end computer.
I use a mid-2015 MBP; it can handle a huge Android app without any issue (except that build times are only acceptable, not great).
It really looks like he has been overwhelmed by having 2 platforms to learn at once and has not been able to tackle the basics.
For kotlin, no book needed.
Go to the language website, read the introduction, maybe a couple of pages on the language features and then complete the kotlin koans. It will give you an overview of most of the language features.
After that, find an android project written in kotlin.
Here is one : https://github.com/LostInContext/LostContext-App (there are many others) .
Configuring an Android project in Kotlin the first time can be painful, especially if you have some code generation (Dagger, databinding, ...), so it is way easier to have a working sample to work with.
My problem with Gradle is that the built-in Java dependency resolution is way too aggressive. If you change a class with a static constant it recompiles everything. Even if you don't, it will compile the whole dependency graph to handle some theoretical edge case when compiling just the modified file would be enough 99% of the time. Oh well.
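To be fair to the tool, the static-constant case genuinely requires recompiling dependents: javac inlines compile-time constants into the bytecode of the class that uses them. A minimal plain-Java illustration (class names are made up):

```java
// Why changing a `static final` constant forces recompiling dependents:
// javac inlines compile-time constants into the bytecode of the class
// that *uses* them, so Client below is stale even if only Config's
// source file changed.
public class Config {
    public static final int TIMEOUT_MS = 5000;
}

class Client {
    static int timeout() {
        // the literal 5000 is baked into Client's bytecode at compile time
        return Config.TIMEOUT_MS;
    }
}
```

If Config.TIMEOUT_MS is changed and only Config is recompiled, Client keeps returning the old value, so the build tool has no safe choice but to rebuild dependents. The "even if you don't change a constant" case is the part that's genuinely over-conservative.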
For me it depends. If I'm in a development session with lots of iterations where build speed is the bottleneck, I'd take my chances (I do anyway, have to resort to manually compiling individual files). For production/CI/other builds it's a different story, although those usually get built clean so it's not even a consideration.
Hey, what libraries for assisting with MVP/etc. patterns did you have in mind?
Incidentally, our Kotlin-based app actually has a completely custom one that an opinionated product guy wrote that works reasonably well, but it still needs a lot of work and probably to be fully extracted for an OSS release. We also have some Kotlin delegated properties and other magic for cutting down on lifecycle & state management boilerplate.
I don't think that the person is being lazy. I've tried my hand at Android. I will continue to do so. I don't like it for the reasons enumerated in the post. The whole damn thing is a hack at this point. No one has a good generalizable architectural model for laying out an Android project. Attempting to target multiple devices is truly a pain. You can do it, but you have to really, really think about it. You also end up making whole subsections of your product for a specific form factor.
I'm not saying that iOS is better. Don't really know. I've played with Swift and a few of the Apple tutorials.
In general, I'm against the current incarnation of mobile. After years of having an Android (and now having an iPhone due to work), I switched to a Samsung Juke. That's right, a feature phone! The Android just got slower and slower. No updates, no improvements. Finally it screeched in my ears while working. I picked it up and slammed it on the desk (felt great for a week). Both iOS and Android are slow, overly wrought operating systems.
That being said, unlike the author, I'll still use Android and iOS for my products. I have products that need to be mobile. They have to target the machines my customers have now. That also means that I'll have two native app code bases. I'll have to keep track of native app UX/UI standards. I'll have to keep such hardware around ('cause Android's emulator still sucks when I've got a VM running).
I was surprised to learn that Android in 2016 doesn't even natively come with an equivalent of JavascriptCore. You have the pleasure of somehow getting V8 to run and then serialize/deserialize all your native objects manually in order to talk to it if you want javascript outside the browser. Really? From the company that is the most web native around, and even developed the most common browserless JS runtime (V8)?
Google's user-facing architecture often just seems like an unmanaged, jumbled-up bunch of code that they throw at you; good luck, have fun.
I maintain my own fork of JSC that I use on Android. I've been focused on reducing its size and hope that it one day can be way smaller than it currently is. I managed to get it down to less than 800kB when compiled for ARM.
> I was surprised to learn that Android in 2016 doesn't even natively come with an equivalent of JavascriptCore. You have the pleasure of somehow getting V8 to run and then serialize/deserialize all your native objects manually in order to talk to it if you want javascript outside the browser.
That's a little bit like saying that Chez Panisse doesn't even deliver sewage to your table, and that if you want to pour it over your food you need to bring your own chamber pot with you.
Java is not a great language to program in, but if there's any language it is clearly better than, then that language is JavaScript. Why would one want to program an Android app in JavaScript rather than in Java?
There's only one reason to use JavaScript: because one is deploying in a browser, and JavaScript is the only language supported by the vast, vast majority of browsers. In every other way, it's a misbegotten mistake of a language.
The big reason: Cross platform compatibility. Not just between mobile platforms, but even between mobile devices and webservers. Our devices are so powerful now that you can run a full blown webserver on it if you choose your DB system wisely. That's why we use CouchDB - replace it with CBL on mobile and take your whole webapp with you for offline use. I'm Swiss - we have lots of tunnels ;-).
Take a look at https://github.com/appcelerator/tijscore it's a port of JavaScriptCore to android developed by Appcelerator for their Titanium product. It also includes a bridge between JS and native code.
Google doesn't have one single mind. There are lots of different people and groups within Google, and I don't think all of them are "web native". The Android team never seemed very close to the web at all.
Ever since I picked up a Nokia N800 back in the day, my go-to setup has been a featurephone with Bluetooth and a "smart device". That way, if I need a net connection right damn now, I can pair the smart device with the featurephone and get online.
Can you elaborate on how you do this? I wasn't aware that you could use a feature phone over bluetooth as a wifi hotspot. Are you doing something different?
Well, it is not as fast as a WiFi connection would be, but it gets the job done. That said, I do not require 24/7 social media and other such coverage.
Basically, pair the featurephone with the smartphone or tablet, and then tell the smartphone or tablet to use the featurephone as their internet connection.
Mind you this is all on Android, and I am not American, so I do not have to deal with carriers mucking up my featurephone firmware.
Edit: Just reminded myself that the term used is tethering. If I can't find a WiFi hotspot to use, I tether the smart device to the featurephone over Bluetooth and thus use the phone's mobile connection as the net connection for the smart device.
Double that complexity if you also want a web app that works across all modern browsers.
I wish the big 4 would get their act together and stop pissing on each others legs.
FB is the most recent entry into this clusterfuck. They should be ashamed of the Oculus/GearVR developer experience, it was the determining factor in my abandoning Oculus.
It is the developer that pays the cost in the end.
Aw, that makes me sad. I've been telling myself I'm going to get into the VR scene and dive in with Oculus. I was hoping it would be a dev-friendly environment.
I've heard (and experienced) great things about Unreal Engine 4 with HTC Vive. I have a couple friends using it for an independent study at Uni. It's great fun, and the Vive and its controllers are honestly excellent pieces of hardware.
I think there are no developers that are true experts in both Android and iOS platforms. Every developer I know either leans towards one platform or another. There are also developers who know both platforms pretty well, but I won't call them experts in either.
I consider myself an expert in Android development. The only point that I agree with is multidex, but there are historical reasons for the limitation and I think Google engineers are trying to fix this problem or make it as simple as possible to use multidex.
I occasionally do iOS development and some things in iOS don't make sense to me, but I'm sure they would be obvious for an iOS expert.
My advice for mobile developers or future mobile developers is to specialize in a single platform that you like more. For me it's Android. For the author of the article, it seems to be iOS.
I am an expert in iOS dev, but I use and want to like Android. It is quite annoying... I really do not like Apple's walled garden, and on Android devices, even cheap MTK ones, I am quite handy at installing what I want. But for some reason I find Android development a real pain compared to iOS, where I find most of the older things (stuff, as it goes in life, needs to mature) very obvious and easy.
I would say what most annoys me about iOS is the GPL thing, which is the reason, among others, that there are no emulators in the App Store, at least no relevant ones. As the owner of a museum and a fan of old machines I cannot do without emulators. On my Android tablet I have around 50. Emulators are a niche, but the lack of GPL software hampers a lot for everyone on iOS.
The GPL incompatibility issue has nothing to do with Apple not allowing emulators. (After all there are BSD/MIT emulators) They don't want (non-sandboxed) interpreters running unreviewed code. Right now the only interpreter allowed to be used with downloaded code is webkit's JS JIT.
For instance, [0], which I use by far the most, has both problems; however, with another license the runtime issues would be fixable. For instance, recompile the runtime with Emscripten. Luckily there is WebMSX now, which looks promising, but it is not there yet.
I develop iOS apps professionally. Sometimes upon completion, I am urged to develop an Android app that is similarly functional both in UI and code behind logic. I can get there, but it's never, ever fun.
Employers need to wise up and not expect mobile developers to be masters of both platforms. Plumbers don't do electrical work; paramedics don't chase criminals. I like to browse iOS/Android mobile dev job descriptions for fun, and I grit my teeth when it is expected that you be a master at both. Not a chance, everything moves too fast.
I develop for Android professionally and I am quite good at it (now senior dev in a team of 15 android devs).
I am growing curious about iOS though. I am currently writing an Android app in my spare time with a friend and our next objective is to learn swift in order to make its iOS counterpart.
I have also been offered a couple of iOS jobs (from companies wishing to hire me but not currently hiring for Android).
Finally out of our 15 android devs, 2 of them are also iOS devs (and work for us on both platforms).
They are quite proficient with both platforms (well, Android anyway, but I heard no complaints from the iOS team). Sure, they are not as proficient as our best Android engineers, but they are still pulling their weight.
Maybe the difference is that we are not an agency but a mobile team working on a single app.
We can take the time to learn the platform and teach it to our colleagues. Maybe a freelancer does not have this luxury or the adequate structure.
> Google’s adoption of gradle has been a disaster and proved to be a terrible decision. It did help out with some previous issues, namely multiple app targets, but it’s slowed down compilation severely. It also makes for masochistic configuration files with major redundancy and fragmented dependency hosts. Getting an app to compile shouldn’t be a challenge.
The only thing worse than gradle is ANT, which it replaced.
> Wait, what is picasso? Oh wow, I hadn’t heard of that one…I was busy learning Swift.
I just minimize the amount of platform specific tech in the codebase as much as possible. RxJava? That's going to be real fun to port...
Ahh, this didn't even touch on my least favorite part of Android - command line tools that tell you they've "successfully" deployed APKs. And returned successful return codes. That in reality silently failed due to ever so slightly loose USB connections.
Moving to Gradle was a terrible mess. Ant was simple, effective and fast. Even now, every time I fire up an Android build with Gradle, my laptop sounds like it is about to take off.
However Android has come a long way since the early days. The author focuses on the bad things we all know about. Android development is incrementally getting better every month.
On top of that Android Studio is a way better development tool than Xcode. To the point that I wish Apple drops Xcode completely and starts using AppCode.
- Android Studio has powerful refactoring tools. Xcode lets you do some basic refactoring only on Objective C
- Despite gradle being a pain, it gives you dependency management. To get that on iOS you have to resort to third-party tools like CocoaPods or Carthage (though Swift now has one too)
- Autocomplete works reliably, every single time
- Translations are way easier to do. Android studio comes with a translation editor and translations are in one place. I welcome you to try to translate a storyboard on Xcode and maintain that.
- Android Studio UI editor is now way better than what it used to be. Despite that, Interface Builder is still the king of UI editors.
- Editing the UI XML is trivial on Android. Try to merge a storyboard or a xib... and Android Studio does not automatically modify your UI xml files every time you open them.
- Signing on Android is a thing you configure once and that's it. I have wasted days dealing with signing problems on iOS.
- Etc...
However I enjoy doing iOS development more. I cannot explain why; it is a more pleasant experience. At least to me.
I wonder, what's wrong with Gradle? I haven't developed for Android, but I use Gradle for plain Java projects and it's the greatest build tool I've ever worked with. It's even better than Maven. Is it because of bad Android plugins?
It's slow, underdocumented, and unreliable. There are too many ways to do anything, and it's too easy to add a "1-line fix" that does something unmaintainable. It's better than ant, but a lot worse than maven.
Ant builds have always been unmaintainable messes IME, via the incredibly verbose syntax and difficulty of organising tasks effectively. I have many issues with gradle but it at least seems to result in reasonably concise well-factored builds most of the time.
And that's about all it does. Gradle has proper dependency management with Maven repository support (add one line and your library is in, no fsckery with JARs and whatnot), has scripting in an imperative language instead of the craziness of XML (we use it to automate releases, uploads, Git commits, etc.) and really good plugin support.
Ah I agree on the dependency part. I suppose it was easy enough for me to just pull in the jars but I suspect my projects had rather simple dependency needs. Sounds like what you were doing was much more involved!
Almost every ant file I've had the pleasure of working with in the wild is made badly (having to run things in specific order instead of depending on each other is a classic).
With Android, they seemed to change the build files every minor release (my info may be out of date here).
> Almost every ant file I've had the pleasure of working with in the wild is made badly (having to run things in specific order instead of depending on each other is a classic).
Dependencies were a mess. Depend on A and B with both depend on C? C gets pulled in twice, build fails. Workaround? Hack up A to depend on B instead, and have your project depend on A.
Surely adding a new dependency isn't supposed to involve mucking with the build files for half of your existing previously fine third-party libs just to get the bloody thing building again - such that all dependencies are only referenced once, yet such that every library has its dependencies indirectly satisfied by whatever it's been configured to depend on.
I assume I was doing something wrong. I probably sunk a good week into consuming docs and trying stuff out to figure out exactly what. I still have no clue. My coworkers couldn't figure it out either.
I wrote wrapper scripts - and later a full blown partial adb wrapper - to fix the silent deploy failures, after one too many hours of wasted debugging sessions (caused by debugging stale builds) made me crack. A few commands to check file sizes and timestamps... a few regular expressions to parse the results... a few debugging sessions when e.g. the installation defaults for "adb logcat" log output format changed between SDK versions...
After reaching a similar snapping point for ANT, I took the slightly less drastic option of switching to gradle for the next Android project I tackled. I remember gradle being merely 50% as terrible as ANT. I seem to have successfully repressed many of my more detailed memories of dealing with both. Huzzah!
If you haven't profiled, then you shouldn't assume it's Gradle's fault. A lot of Android Gradle builds are slow because they proguard lots of class files, merge large dex files, etc. Which is all optional functionality of the Android plugin, not inherent to Gradle.
> Do you realise how insane it sounds to tell someone to profile their build config?
This is absolutely not insane.
I've saved over a minute per project per build simply by switching linkers (bfd -> gold IIRC?) in a large set of C++ projects. There were several projects - this easily saved 10 minutes per build when touching core libraries.
Considering my experience was that gradle builds were faster than corresponding ant builds (when not much has changed, in codebases with relatively small amounts of Java - e.g. I'm doing small iterations, which I care to optimize for), profiling what exactly is to blame seems worthwhile. I do it for 200ms hitches, why shouldn't I do it for multi-minute builds?
You want a build pipeline so fast that you don't even have a build step? So do I, but devs will still figure out how to add one which may take several minutes (e.g. unit tests.) - better to aim for a practical compromise that lets you iterate fast (hot reloadable scripts and data for example.)
> The fact that people need to do this is the problem.
To move this towards the realm of tautologies - performance problems are problems, yes. So solve them. But profile to ensure you're actually solving the right problem first.
Yes, ideally our build systems already have perfect caching - and could rely on filesystem events instead of directory scans for cache invalidation - and have blazing fast parsing steps, all resulting in subsecond build times.
On the other hand, it's insanely complicated to reach perfect caching for all buildable things - and there's only so much you can do if one of the buildable outputs is a large compressed file, for example (which is a common feature in pretty much every project I see, containing all the assets bundled and compressed in some form for better runtime performance.)
Dumb and simple tends to be more reliable, more understandable, and usually not too much worse. Although sometimes you'll be left profiling the edge cases and adding or fixing the caching involved.
I'm just suggesting that people are assigning blame in the wrong place. It's typically not Gradle's fault, but just that Gradle is being asked to perform a bunch of expensive (and often unnecessary) operations, like
- The build might have dependencies with dynamic versions or changing modules (aka snapshots), meaning that Gradle periodically has to download the latest version.
- The project might have proguard enabled for dev builds, which isn't really necessary. (As long as you smoke test release builds, which is a good idea anyway.)
- The project might have dex merging enabled for dev builds, which isn't necessary unless you need to test something on a pre-ART device.
- Engineers might be running the clean task or passing --refresh-dependencies out of habit, when they're not needed.
You could criticize Android's architecture for not being conducive to incremental builds, or you could criticize Android Studio for generating build files with some expensive functionality enabled. But none of this has much to do with Gradle.
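To make the list concrete, here is a sketch of what keeping the expensive steps out of dev builds can look like in a module's build.gradle (the option names are the standard Android plugin ones, but treat the exact layout as illustrative for your setup):

```groovy
android {
    buildTypes {
        debug {
            // skip proguard for day-to-day dev builds
            minifyEnabled false
        }
        release {
            // keep it on here, and smoke test release builds before shipping
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                          'proguard-rules.pro'
        }
    }
}
```

None of this changes what ships; it only changes how much work the everyday incremental build is asked to do.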
I'm sure there's room for optimization in the Gradle build process, but if build times and fan noise annoy you, a few tips:
- in my experience, gradle builds can make 100% use of all the cores my CPU has. If you are working on Android projects day in and day out, it may be worth getting a CPU with more and faster cores; it will increase your productivity and pay for itself in a short time
- you can build a powerful, but virtually silent PC: big aftermarket CPU cooler, semi-passive GPU, semi-passive PSU, SSDs, no case fans.
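On the software side, a few gradle.properties flags help keep those cores busy without re-spawning JVMs on every build (values are illustrative; tune them for your machine):

```properties
# keep a warm JVM between builds
org.gradle.daemon=true
# build independent modules concurrently
org.gradle.parallel=true
# give the daemon enough heap to avoid GC thrash on large projects
org.gradle.jvmargs=-Xmx3g
```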
I used Maven from the start and it's wonderful. Actually declarative builds. A well-defined config format rather than the unspecified .gradle format. Plugins with reasonable defaults rather than Turing-complete code all over your build definition.
I have the opposite experience -- I tried Buck because I really wanted a Blaze/Bazel clone (it wasn't open source at the time). It works but I find it very slow.
My project involves lots of native code and genrules, though. If you're mostly building Java code I imagine Buck might work well. If starting from scratch I would try Bazel first.
I have 2300 lines in about 40 BUCK files, plus 500 lines in 2 DEFS files. Just parsing the BUCK files seems to take an inordinate amount of time (tens of seconds from cold).
I've found that, as I get older and still try to learn different languages and platforms, I have to work on my research skills as much as my coding skills.
As the OP pointed out - straddling several languages means that syntax and keywords are not always readily apparent when you sit down to code, but knowing where to look for them and refresh your memory quickly becomes an activity all on its own.
This from a guy who is a week away from being 50, and still occasionally puts a ';' at the end of his ruby code blocks.
Unless you are writing a Tier 1 application that needs every fancy doohickey, I don't see the point of 100% native anymore. Use React Native, then if necessary have a dedicated native developer for each platform to handle the parts that absolutely have to run natively and expose a JavaScript API. If you are a big company with tons of money, sure, create duplicate teams to build the same app for each platform. But if you are small or just starting out, React Native is the best choice.
I'm all for write-once-deploy-everywhere, but React Native is better suited for apps that could be easily written web-only. Maybe I don't get what's better about React Native over a native app that is just a web view.
React Native apps are written in JavaScript, but the components you use are wrappers for actual native view components. So you get native performance (the views can be GPU-accelerated), and native styles.
I've been working with Android for a couple of years, and (while I like it) it has its warts. The architectural patterns required to keep a medium-sized codebase understandable are simply not agreed upon. There are lots of ways to do things, and (as the article points out) even the basic activity + fragment APIs are very complex.
I just started playing with React Native, after having some React experience, and it's a breath of fresh air. It feels like cross-platform without obvious negatives, and with the rare bonus of being able to wrap and use native components or libraries whenever you need to. You can even share most of your code (even view code for basic views) across iOS and Android. React has a super simple API. The tooling is younger, but out of the box it's faster to develop with than Android.
That said, I haven't built a large app in React or React Native, and maybe it bites developers at some scale. No idea how it deals with long-running services, multiple threads, bluetooth or other hardware APIs. But for now, I'm very optimistic.
> Maybe I don't get what's better about React Native over a native app that is just a web view.
It's better because React Native is actually rendering native components. Stuff like PhoneGap never feels or works quite right as a side effect of being rendered in a web view.
Beyond the fact that features like push notifications require a native implementation, people simply like apps more.
I'm currently working on a way to make our mobile-tailored website into an app, just because customers keep requesting a _real_ app instead of a webpage.
I have an Ubuntu phone, where the majority of stuff is web based. One problem I noticed is that many apps won't load unless you are connected to the internet. Is it possible to cache the stuff that would get loaded for using apps offline?
Like the OP, I do both iOS and Android development work. However, as an indie, my platform preference is primarily market-driven, so most of my work has been on iOS.
Unlike the OP, I like and use Android AsyncTasks. I have no problems with Android fragments either.
>primarily the low quality of Google's SDK for Android.
This has always been my biggest frustration with Android. I've written a few applications for the platform, and while its architecture leaves a lot to be desired I've never had any real problems with it. However, the immense disappointment and anger that come from their SDK idiosyncrasies are astounding. Google really just hates stability and nice APIs, in my opinion. Everything from GAE to Android, it's always just so terribly frustrating to keep track of every one-off decision they make without any form of actual communication from the development team.
I do not think writing Android apps is straightforward, and I would not recommend it to new developers or to engineers like the author who want to do other things like iOS. People's expectations for apps are only getting higher, and there are many, many things you need to understand in order to produce great Android apps.
As an example, another commenter noted the Android platform engineers not being opinionated about application development as a "lofty ideal", but more likely this is a consequence of the framework team having enough on their plate. The vanilla Java AOSP API surface (not even considering the NDK, support libraries, Google Services, etc.) is enormous. The Activity lifecycle is complicated, and it's very easy to leak memory or write spaghetti code. There are many ways to do similar tasks, and these also change with time. Etc.
That said, I love being an Android developer, and it is improving at an accelerated rate. To start, Android Studio and Gradle are far superior to the pain of getting Eclipse, Ant and the SDK tools working together.
Yes, Swift is awesome and shiny, but there is so much more to a platform than the language. Java has excellent tooling - great support for debugging, monitoring, automated testing for CI/CD (automated UI testing still needs work, but also improving), static analysis, etc.
The support libraries are also a godsend, enabling you to make an app that looks modern while being frequently updated and handling a lot of the compatibility headache from the platform's diversity.
Finally, there's a lot of great content online. The conferences are great and it's expected that their content will show up on YouTube. Google's Android Developer YouTube channel is also fantastic (shout out to Jo and Ian!), and Google is slowly but surely improving the developer docs and integrating their sample code into Android Studio.
So yes, you do need to understand the new functional reactive approach. You need to know how to write a Gradle build. You have to understand the complexities of proguard rules. It's all pretty frustrating. But I also feel that many of the skills are more easily transferable - I can also write a Gradle build for a library, or I can use IntelliJ to better debug a servlet. With Swift and iOS, there's only one vendor you can build for.
> So yes, you do need to understand the new functional reactive approach. You need to know how to write a Gradle build. You have to understand the complexities of proguard rules. It's all pretty frustrating. But I also feel that many of the skills are more easily transferable - I can also write a Gradle build for a library, or I can use IntelliJ to better debug a servlet. With Swift and iOS, there's only one vendor you can build for.
I couldn't agree more. Android experience is hard-won. You have to 'discover' and develop an intuition for application architecture and how to use UI components effectively over time.
The upside is a well-architected Android application --using tools like MVP, dependency-injection, and functional-reactive programming-- will have the positive characteristics of a Service Oriented Architecture: encapsulation, statelessness, composability, loose-coupling. This may seem like overkill for an app, but it minimizes the effect of UI lifecycle issues novice and intermediate developers tend to complain about.
Architect a few Android applications in this way and you gain valuable experience composing abstract services together - a competency that is transferable to backend services and other platforms.
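As a rough illustration of the lifecycle-insulation point above, here is a minimal MVP sketch in plain Java (the names are made up, not from any particular library): the presenter holds no Android classes, so it can cache state across rotation and be unit-tested on an ordinary JVM.

```java
// View contract: the Activity or Fragment implements this.
interface UserView {
    void showName(String name);
}

// Presenter: no Android imports, so it survives configuration changes.
class UserPresenter {
    private UserView view;      // null while detached (e.g. mid-rotation)
    private String cachedName;  // state that outlives the view

    void attach(UserView view) {
        this.view = view;
        // re-deliver any result that arrived while detached
        if (cachedName != null) view.showName(cachedName);
    }

    void detach() {
        this.view = null;
    }

    // called from whatever background layer loads the data
    void onNameLoaded(String name) {
        cachedName = name;
        if (view != null) view.showName(name);  // held silently if detached
    }
}
```

The point is not MVP itself but that the presenter is a plain object: a rotation becomes detach() followed by attach(), and in-flight results are cached instead of being delivered to a dead Activity.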
Any resources you'd recommend in particular to learn FRP and how it applies to Android, specifically?
As a frequently frustrated intermediate developer I'd greatly appreciate it. I've just begun using RxJava and Robospice to tame my server endpoint call logic, but I know it's just the tip of the iceberg.
It's taken a few false starts and experiments for me to become comfortable. Erik Meijer is an excellent resource for understanding the theoretical underpinnings and the 'why' of it all.
A good bet is to search Github for projects where common Android APIs or libraries are Rx-ified. You'd be surprised at the economy and simplicity of code you can find in some of the best Rx implementations.
Without mentioning which platform(s) I prefer, I have to agree with the author. On one hand, it's nice to be able to work cross-platform, and architecting your app such that you maximize code sharing is a worthy and satisfying goal. But quite a bit of code (the UI mainly) cannot be shared and it's kind of tiring to finish your Platform A app, look at it with pride, and then face the need to essentially re-write the exact same thing for Platforms B and C, only in different languages and frameworks "because platform competition". You're splitting your attention, expertise, time, and learning N-ways.
As a former iOS and Android dev (before moving to game dev in UE4), I found the act of having to rewrite an app on another platform a great way to refactor the code and do things better the second time around.
If I had to go back though I'd probably try to find a way to do things once on the web, is React Native this?
I don't get it. How can there then be all of these weekend tossoff games in the Play Store that work just fine? Unless I'm unclear on what you mean by "platform."
Can you point me to a good resource for "a game which uses a graphics engine". Last week, I built both an android app and an iOS app, and would like to build something that compiles cross-platform to complete the set.
Unity is one example, another would be libGDX[0]. A game engine will give you a blank window, like an OpenGL context, which you draw raw onto. Any decent graphics library will provide useful functions for drawing 2D and 3D graphics, updating the screen, handling input, etc. to make using it easier.
Making a typical UI application means reusing lots of widgets provided by the OS, like buttons and sliders, to keep things consistent and make development much faster. If you tried to build a normal UI app inside a graphics engine, you could do it, but you would have to build all widgets from scratch and it wouldn't look like a native UI app.
Sure, but I think I'm still unclear on "platform" here. Does it mean "Android" and "iOS" (et al), or does each of those have multiple platforms within them?
I've been doing Android development since the G1 release. I agree with most of these points, but think they're also not a huge deal after you've been working with the platform for a while. The biggest issue is by far the iteration speed though. Build/install takes way too long--Android Studio for some reason takes 15 seconds to start my app even if no changes have occurred.
Instant Run is supposed to solve this, but it's too buggy to be usable right now. Sometimes your changes just don't apply, and you never know whether that happened or you messed up your fix.
I actually preferred ant since I better understood what it was doing. Gradle error messages are often very vague and confusing, and build times are strangely inconsistent.
What is surprising is that google with their much celebrated hiring practices for senior devs managed to hire no one who could say no to such insanities like Fragments and Async and the entire Android dev platform/toolkit as it stands today.
Apple, formerly famous for being design hippies who couldn't care less about performance, have topped the mobile performance charts for nearly a decade straight now with no end in sight. An entire decade! They merely had the wisdom not to base their platform on Java.
What do you base your claim on that Apple didn't care about performance? They have pushed the performance envelope ever since the first Apple computer, which Woz built with far fewer chips than their competitors used. Their UIs have always demanded extreme performance.
I don't think lifecycles, phone rotation etc. are much of a problem, because once you know them, you know them.
I think the problem is Android keeps changing the current best practices. I'm not even sure how to know what those are - if you enter code from the Android tutorial into Android Studio, much of it is deprecated.
In January 2011, the way to do tabs in Android was via LocalActivityManager. Then in February 2011, ActionBar.Tab was added. By July 2011, the LocalActivityManager way was deprecated, and ActionBar.Tab was pointed to as the way.
That's what is maddening with Android - they have a way to do tabs, add a new way, deprecate the old way within five months, then three years later change their mind and dump the new way - without changing the documentation and telling you the new way to do it, it's all just deprecated.
The tutorial is full of deprecated code. What's the new way to do it? Who knows?
A few years ago some corporate director at Google must have gotten a directive to push Google TV. So then they were pushing all apps to work with Google TV. I guess that fizzled out. The latest thing is making sure our legacy apps work with Chromebooks which allow multiple apps on the screen at the same time.
I don't mind Google and Android continually chasing the new shiny, but I wish they wouldn't have a tutorial full of deprecated code, I wish they didn't change how they do things such as tabs every three years (three new ways to do it in three years). I wish they fixed bugs instead of implementing new features. On code.google.com, developers post bug reports, then many other developers jump on saying they see the same thing, and...a few years later, it just closed for being obsolete.
Another example of continuous churn - Google Analytics looks like it's going by the wayside, to be replaced by Firebase. AdMob is now integrated into Firebase. So that's a whole other thing that needs to be redone in an app. You have to run to stay in place.
I've only dabbled in Android but this is what drives me mad. I spend a month digging in and learning "how to develop for Android" and then six months later when I go to use that knowledge for my real job, everything's changed. What was best practice last time I looked is now frowned upon, deprecated or flat-out broken, whole generations of new best practice have churned by, and current 'best practice' isn't compatible with any handset more than three months old.
If you're doing it 40+ hours a week then maybe the ongoing investment is worth it but to me it's a colossal waste of time.
Gradle has problems, but it doesn't seem worse to me than making builds in Eclipse was. I would love to hear why people think it is better or worse.
I agree multidex is terrible. Google please fix this.
I do not know when I should use a fragment instead of a view. I know the layout reasons google gives in their developer guide but I don't think I have seen a piece of code that really uses fragments that way. So why?
Gradle is just incredibly slow on Android compared to Eclipse, particularly with larger projects. Instant Run has helped alleviate this to some extent, but it's still not uncommon for a build to take 2+ minutes.
I guess this is mostly due to all the resource crunching that Android does, and the fact that Android Studio doesn't perform incremental compilation by default.
Another annoyance is that every time a minor change such as incrementing a version number is made, Android Studio grinds to a halt syncing Gradle files.
We are building a bit larger and more complex Android project, and every now and then just the Gradle part takes over 5 minutes for me. Just the Gradle syncing, before actually compiling anything.
I think it's Gradle getting somehow into knots and taking a while to resolve everything -- I've got a very beefy computer so it's not about processing power. Gradle is just ridiculous.
Incremental compilation seems to be on for newest release of Gradle, but Android is still on 2.x due to the plugin.
For our project, the most time consuming part is... Packaging. Making apk splits is slow, why does it zip the whole thing again if it is just replacing resources?
I've often been thwarted by Gradle's anything-goes syntax (everything is a Groovy program) and obfuscated error messages. Also the switch to Android Studio. Leave an app alone for a month and I can't compile it anymore.
I tried to use gradle when I moved one of our apps to Android Studio for a few tweaks and updates - ended up using Ant and Visual Studio Code instead, such was the slowness and pain of the whole thing.
I started to think that Google employees on Android Studio team are using something like hexacore Xeons with 64 GB and 1TB SSD as their development machine.
Kind of an OT whine relative to the contents, but I'm still extremely frustrated at how hard it is to get other languages working on Android due to the multidex-style issues.
Google should be spending millions being able to get app development up and running in something like Python. Have your app up and working in 2 minutes without having to learn Java. It boggles the mind that this isn't the case, and that the current tooling environment is soooo impossible to wrap ones mind around.
The same could be said about any platform - e.g. why can't I write my iOS apps in python?
Transpiling languages like that nearly always ends up messy, and adds another layer of potential bugs to the system. Taking the time to learn the platform will end up better in the long run (at least with the current state of transpilers).
And as it stands, you can write Android apps with a native UI in Java, Kotlin, JS (React Native) and C# (Xamarin). And if you want, you can write business logic using any language that will compile to a C binary (using the NDK).
So the multidex issue I was referring to makes it pretty hard to write things that will compile to C with the NDK.
In order to get any access to the Android APIs, you have to do C-JDK bridging through JNI. And in Android's VMs there's a hard limit on the number of method references a single dex file can contain (65,536 - which is why multidex exists).
It's the equivalent of if python was like "yeah you can use the standard library, but only up to 100 functions!"
iOS, since it goes through the LLVM framework stuff, doesn't have that many barriers to transpiling from other languages. You can just do raw function calls to the OS libs. But Android doesn't allow this, forces you to go through the JNI, and makes you have to work around this symbol limit.
I've found, from experience[1], that large code bases written in statically typed languages like Java tend to be much easier to read and understand -- which is extremely critical when working with a team. I use Python for a lot of my personal projects, but for a team project, I would rather use Java than Python.
In practical terms, you can only use frameworks like Kivy [1] to do UI on Android. But the result is completely non-native as Kivy renders its components on top of an openGL window.
If you want to use Python and native UI components at the same time, you must run Python as a kind-of backend layer, where you call JNI functions from Java which will be redirected to the Python interpreter through C/C++. I have done this with an app that I have on the Play Store [2] which shares all non-UI code with a web app [3]. Even the charts are similar as they are generated as SVGs on my Python code.
This architecture allows me to have native frontends on different platforms, and I have open-sourced PyBridge [4] to be used as starting point for everyone who wants to use something like this: basically, you send JSON messages from Java to Python with the name of a function and the arguments, and you get the response.
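A sketch of what such a call message might look like on the Java side (illustrative only; this is not PyBridge's actual wire format, and real code would use a JSON library):

```java
// Builds the JSON call message described above: a function name plus
// string arguments, to be handed across the JNI boundary to Python.
class BridgeMessage {
    static String call(String function, String... args) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"function\": \"").append(function).append("\", \"args\": [");
        for (int i = 0; i < args.length; i++) {
            if (i > 0) sb.append(", ");
            sb.append('"').append(args[i]).append('"');
        }
        return sb.append("]}").toString();
    }
}
```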
Nah, they should just embrace Kotlin, even if it wouldn't solve the platform API issues. Java is not for this day and age, but frankly neither would Python be.
iOS has plenty of developer pain points as well, although in their case a lot of them are as much a result of dumb Apple policies as they are technical mistakes. I'm going the other way. After six years of iOS work I've had enough. I still think the web is the smart long term bet so that's where I intend to focus my energies.
I agree. However, I would have never thought that developers would be so miserable in 2016. Say what you will about the desktop; developing for it was a pleasure (except for you, Win32 API) compared to what we have in 2016.
The sad part is that things were looking pretty hopeful in 2008-2010.
I particularly liked Cappuccino. It's basically a port of Cocoa to the web. You could design your interface in OSX's Interface Builder. The project is still around and is still being worked on. Sproutcore was pretty similar, and is also still around. Neither of them are being developed as heavily as React or Angular 2, but they have both been mature for a long time, and perhaps don't need a lot of work to be done on them.
Maybe it's just me, but I still find 280Atlas[1] and 280Slides[2] more impressive than many web apps that ship today, and they're nearly 10 years old and ran in browsers far slower and less capable than what we have now.
I actually think more developers would use frameworks like these if they didn't feel they had to stay on the latest-and-greatest JS treadmill to remain employable. And I write that as someone who likes React and Angular. They're both sane ways to develop complex apps, but they don't feel that much better than what was possible 8 years ago.
Very true. Having done both professionally however, I can say Android is on another level with frustration and tooling problems. Yes, I've become biased, and yes, I will remain much less enthusiastic to take on an Android project. I appreciate both platforms, but man, iOS allows me to sleep a bit better at night.
When I started development on Android platform I was told that we should not do IO on UI thread. Fine enough. What is the alternative? Every tutorial out there including Google's suggested AsyncTask.
Only after a few weeks did I learn that we should never use AsyncTask for IO, because internally it uses a thread pool of (hold your breath) 2 threads!
So how do I make all my REST requests ? Through something like Volley.
This library is even more disgusting, as it does not support some simple URLs such as http://example.org/?id=1&id=2.
So I turned my focus to Services, only to learn later that Services are meant for background tasks but run on the UI thread.
The best approach for doing your own IO is through IntentService, something that should have appeared in the first tutorial.
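The pattern IntentService implements (a single worker thread serially draining a queue of jobs off the UI thread) can be sketched framework-free in plain Java; the callback below stands in for Android's Handler, an assumption made so the example runs on any JVM:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A rough, framework-free sketch of what IntentService gives you:
// submitted jobs run one at a time on a background thread, never
// blocking the caller. On Android the result would be posted back
// to the main thread via a Handler; here a plain callback stands in.
class SerialWorker {
    interface Callback {
        void onResult(String result);
    }

    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    void submit(final String request, final Callback cb) {
        worker.execute(new Runnable() {
            public void run() {
                // blocking IO (network, disk) would happen here
                cb.onResult("handled:" + request);
            }
        });
    }

    void shutdown() {
        worker.shutdown();
    }
}
```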
Other than the slightly different standard libraries, what else is different? Android Java is effectively the same as java 7, and any difference is solely attributed to library/SDK/api.
I've never been an Android developer, but I am a happy Xamarin (Forms) developer. It is not without some hassle (you need to deal with a small amount of platform-specific code), but it's the way mobile development should be in 2017.
Code the experience and functionality; do not lose (too much) time on platforms.
It's still limited in that it's not a good choice if your app requires UI functionality that is very platform specific. I think that's intentional - it isn't intended to be the right solution for every application.
It's still a good choice if your app mainly involves displaying lists and tables of text and images, and data entry forms. The cross platform Map control works quite well too.
I don't remember exactly what is different now from a year ago, but if you list a few of the limitations you faced, I can tell you if they still exist. A quick look at the Pages, Layouts, and Controls section of the Xamarin Forms page might tell you if they've added things that were missing last time you tried it: https://www.xamarin.com/forms
Thanks for the info. From what I've seen things haven't changed much. The idea still seems to be that Forms is for simple apps that use a number of common components.
Yeah, it is similar to React Native in that way. There's a limited set of cross-platform components. In both Xamarin and RN, though, you can also include your own platform-specific components/plugins when necessary.
They recently added a way to use native components _directly in XAML_ inheriting all the binding goodness.
I agree that it is still difficult to get a 100% platform-specific look & feel, but it's an incredible time saver for the majority of apps, which deal with _functionality_ and good UX.
At least Android doesn't require me to buy a Windows license or Mac computer (not just the license: OS X requires Mac hardware which is notoriously expensive). I'll keep it at Android.
I was an Android developer, then flipped to being an iOS & Android developer. I chose to go back to solely Android for similar reasons to the OP. It felt like it was impossible to be excellent on both platforms. I was struggling to keep up with the pace of change.
I ended up choosing Android because I found more demand for Android developers in the market. I certainly understand some of the OP's frustration with Android development.
Oversaturation of iOS devs. Many people chasing the perceived glamour, diluting each other's pie.
There's a lot of silver lining to doing android development for other people. Inflated demand, and all the client side - server side fires have already been put out when the company did the iOS project. So I would say, easier or less stressful.
But I've been at this for a while, so there's the possibility I'm just good at it.
Given all the grief expressed in the post and the comments, it seems like if someone architected a really nice, developer-friendly native mobile API for any platform you would get droves of devs flocking to it.... Despite the two major players, the market seems wide open as long as they focused on the development experience like what Matz did for Ruby... Programmers are customers too! :)
I've heard a lot of praise for the Windows Phone developer experience, but what really matters is the audience. Devs flock to the audience. Even just a small pay difference will get devs to deal with the most terrible of developer experiences (e.g. SharePoint).
React Native is getting there. It's still early software with a lot of problems but it has a lot of promise. My anecdotal experience as someone learning mobile development was that React Native was a lot easier to pick up and learn than native Android development. Despite a large number of issues the developer experience has been really slick and I've been able to iterate much faster.
Give it a couple of years and I think it'll be exactly what you're asking for.
I like comparing iOS and Android development to playing Mario Kart versus Dark Souls... It truly is a mess, and I won't be surprised if/when someone releases an Android SDK SDK (SDK squared!) that generates all the horrible boilerplate so that you can start a network request, rotate your device, and get the result in your view with no additional headaches.
As an Android-only dev I can't disagree with any of his reasons. The tools especially are just terrible; it's not unusual to spend hours trying to figure out why my project will not build, and this is compounded if you live in a country with poor internet connectivity, where GB-sized updates to the tools are not fun.
I'm not an Android developer, but I've seen my Android phone and tablet greatly deteriorate in performance over the last couple of years, mostly due to Android updates which were forced on me. I rarely use my tablet anymore because of this.
My battery lasted from 100% in the morning down to about 15-20% just after lunch. That's crazy.
Then I removed all Google apps, Play services, everything to do with Google gone, wiped with a reinstall. Replaced with free open source alternatives, such as OsmAnd and K-9 Mail.
Now the battery lasts 2 days! Yes, 2 whole days! Yesterday morning I had a full charge, now more than 24 hours later it is still at 46%.
That is the same with all computer platforms, though. I think it is OK if companies have a limited time of support for their OS, but you can debate how long it should be. With Android, it is also the responsibility of the phone manufacturers, not just Android/Google alone.
This is certainly true of Windows (which I don't use anymore on the desktop). Until as late as Windows 7, I would do a wipe/install every couple of years, just to get my performance back to the way it was after a fresh install.
I haven't done this with macOS, but I've heard of people doing this.
Even with many Linux distributions you can't run the latest version on older computers anymore. But at least you can probably find a specialized Linux for older hardware.
I've heard this from a lot of family and friends, and they almost always get the performance back when they reset the device and all the apps are reinstalled. A friend just did it with his Note 4 (or 5, not sure which).
After writing a comment elsewhere, I find myself wondering if the reason iOS is preferred is that, once an app is launched, it can behave pretty much like apps could back in the DOS days. Each app is its own island, having the run of the device until the user hits the home button.
Android, on the other hand, is more about stitching apps together into a larger whole via intents. Thus you have to care not only about what the user will see and interact with when tapping the icon in the launcher directly, but also about what happens when an intent arrives from another app.
And most Android apps seem to fail on the latter. Various IM apps and the like have been notorious for not going back up the intent chain when hitting the back button.
The comparison with JS is apt.
Just as JS is the asm of the web ecosystem (no one wants to write their code directly in JS; we generate it instead), the default Android and iOS ways of programming ought to be generated too.
Try Xamarin [0] or the newcomer Flutter [1]. It's such a pain in the ass to develop in raw Android or raw iOS; use multi-platform SDKs.
And if you start with xamarin, you can target windows phone too, even if nobody cares ;-)
I've had some terrible experiences with Xamarin. The tooling is very rudimentary compared to Android Studio, performance is subpar, app size is huge, and development is so slow it's maddening. I wouldn't recommend it other than for some very specific use cases. I'd much rather deal with just plain old Android issues than wrapped-by-Xamarin Android issues and Xamarin-specific issues.
> Google’s adoption of gradle has been a disaster and proved to be a terrible decision. It did help out with some previous issues, namely multiple app targets, but it’s slowed down compilation severely. It also makes for masochistic configuration files with major redundancy and fragmented dependency hosts. Getting an app to compile shouldn’t be a challenge.
To be fair iOS dependency management is also a huge pain. CocoaPods is worse than working with Gradle in my opinion.
For me, I quit Android Studio to use AIDE to develop directly on the device. I got the Remix OS TV box for $50. I got fed up with needing an ultra-expensive machine to develop my free software. I don't profit from Android, but AIDE enables me to keep contributing free apps without bankrupting myself on an expensive development machine for hobby software.
We really need a native cross-platform solution that doesn't involve running a JavaScript VM (React Native, NativeScript), doesn't require writing UI code for each OS (Xamarin), and doesn't rely on a WebView.
Xamarin Forms could fit the bill, but last time I checked it wasn't there yet.
Has there ever been a cross platform solution in history that was superior to the native development platform unique to the OS? This seems to be the pipe dream that everyone repeatedly hopes for then gets burned.
I'd actually prefer that Apple and Google keep their own separate development platforms so there are competitive forces driving each platform to improve. It also accelerates experimentation with new tech, where one platform can validate an idea so the other can adopt it quicker.
Then if you have a simple lightweight app you can use some form of cross-platform browsery JS app (about the only point where this makes sense).
Working with game engines is a joy since you write "client" code, so to speak, and the engine is compiled for each platform.
Qt is also really good, but also really expensive, and you lose the native UI, which can be a good thing or not depending on your use case.
NativeScript is working on becoming a universal cross platform solution similar to ReactNative. It's already on mobile, and I've read there is a macOS version being worked on.
I worked on Android for 3 years, then did something else for 3 years, and now I'm back to Android. There are so many vendors selling Android phones, so I assume it works just fine? Or is it getting much worse for developers (system and app developers) these days?
I see two types of dev methods: 1) Write it all down, read through the code a few times, then compile and do a full test. 2) Write in steps, compiling and testing after each step. The latter is a PITA at a low level, and produces more bugs.
Regardless of the platform, for any application with a non-trivial amount of complexity, iterative development is the best approach. I don't understand how anyone could "write it all down, read through the code a few times and test" and not produce a buggier application that fails to meet most expectations. It does help to do some initial design work, create abstractions, loose coupling, etc. Iterative development does not mean jumping into coding without a high-level plan for how to create the application pieces and how things would work (what's called "cowboy coding").
With the amount of complexity involved in developing applications - the architecture, design, interactions between application components, error handling, different classes of platform APIs, UI, UX, state management, concurrency, hardware configurations to support, and many other things - I doubt any human could really keep all of that in their head and work through all the possible ways their coding assumptions could get beaten, compared to actually executing the application and using that practical feedback to change, fix and improve things.
I think everyone prefers iterative development. I love web dev because I get instant gratification (no compile time). I think it's mostly because I'm lazy, though.
I also manage a legacy system where I make hot fixes in production, and setting up a testing environment is not practical. While I do introduce bugs, they are far fewer because I'm forced to plan carefully and know how everything works.
One problem with development is that it's hard to predict what will happen, and thus it gets difficult to make estimates. But if you carefully analyze and plan, things will go much smoother.
You are confusing scopes here. The parent almost surely meant those approaches applied to a single task (meaning a few hours to a day of work), not to building an application.
Being pedantic - the parent was commenting on the article, not on a comment here dealing with a single task or one small piece worth a day's work. I see no indication on how you could make a statement saying "...almost surely meant..." Based on the information available, either of our premises could be true.
The author's main complaint was that it takes two minutes to start the emulator, and I agree that it's a PITA. But as a mental exercise or fun experiment, you should 1) Picture it in your head 2) Make a plan 3) Write it down 4) Read through it and fix errors 5) Test it on the emulator.
You could start with a small ten minute fix, then work your way up. The feeling when it compiles and just works on the first try is amazing.
If you need to build an iOS and Android app and you don't need to really push the envelope on design then React Native is a godsend. It's not the right solution for everything but for that vast array of apps that are mostly just list/tableviews talking to a REST backend it's a massive timesaver.
I've been doing native iOS work for six years now and I'd still recommend RN over straight native for a lot of projects.
Developers do need to pick a set of core technologies, and stick to it or a career can fall apart I bet. I don't know from experience. I've been a Java DEV ever since MSFT tried to hijack the Java language with a proprietary version back in 1998, and I abandoned MSFT and never went back. I can imagine trying to be both iOS and Android developer would be about as insane as trying to be both .NET and Java developer. Oil and water. Doesn't mix. For me there is one and only one word that i need to hear to know I want to avoid iOS: "proprietary". I wouldn't touch any Apple product with a 10ft pole. I like open systems, and Apple is all about maintaining their closed little private invite-only walled-garden of an ecosystem. No thanks. They may be successful a while longer but "open" ALWAYS wins in the end.
> Developers do need to pick a set of core technologies, and stick to it or a career can fall apart I bet
I hope not, for my sake :) I'm currently contracting writing C++, but my previous contract was with the ASP.NET stack. Before that I was working on a lot of Java services (and introduced Scala to good effect).
I've also started a side business helping small companies by making small apps for their everyday stuff - sometimes I'll make iPad apps with Swift, but more recently Qt/QML to make something that runs natively on both OSX and Windows desktops. I've also made (very small!) web apps for them hosted on both Azure (MS stack) and AWS (Linux stack).
After a while learning the core aspects of a new language doesn't take very long, and they generally have to address the same high-level concerns. That said, learning iOS development was a continual hassle but feels like it was worth it to have a native app that I can deploy easily using HockeyApp etc.
edit: I will undermine my whole point here by saying that I abandoned a personal Android project because the dev environment felt so ad-hoc and duct-taped. I spent a ridiculous amount of time trying to get the simulator to render GL properly without any luck (on Windows)
> They may be successful a while longer but "open" ALWAYS wins in the end.
That only works when there is an "open" that is in competition. iOS is certainly the most closed platform but Android isn't really open enough to be competition. Microsoft is making their platform more like iOS and Android every day. Linux on mobile is a non-starter as usual.
The fact that the past played out a certain way is no guarantee that it will play out the same in the future.
I kind of agree on being a mostly single-stack developer. I dabble in C, C++, C#, etc and I'm perfectly ok with using very different platforms for very different problems. But using very similar platforms to solve very similar problems feels like a huge waste of time and mental effort.
I started as a Java dev on JDK 1.1, moved to C# at 1.3, and now apparently I'm an Android and iOS dev too with Xamarin ;) I pretty much spent the last month creating a cross-platform mobile app; I share 90% of my code across the server/client and haven't really had to learn too much about the intricacies of each platform to replace the current version of our product, which was written in ObjC and Java.
I can program directly against ViewControllers or Activities, but I don't have to. I use Rx for all async stuff, and my views/models are event-sourced just like my aggregates and their views are on the server.
Not saying Xamarin is the bee's knees or anything - it has its own quirks and its own bugs - but in terms of Android vs iOS dev, I don't have to buy into either of the core OSes to get the job done.
> I can imagine trying to be both iOS and Android developer would be about as insane as trying to be both .NET and Java developer.
Why is that insane? Sure, your core APIs and tooling change, but software developers should be able to learn, and read documentation. Chances are, you're writing the same sorts of applications in both, and the same principles of good software development apply. Why would a Java developer be water in .NET's oil? Or vice versa? They're reasonably similar languages targeting similar niches. It's like saying that a Django web-dev could never move to a RoR job.
I'm curious because I spent a few years self-learning .NET (C# but I love F#) because I was trying to get into the industry in a town dominated by .NET, and ended up being hired by a Java shop.
To clarify my original post: I do pick up new technologies when they come along. For example, I'm using TypeScript in the two projects I'm working on, and I didn't even know it existed one year ago. My point about sticking with something for the long term to build up experience was more along the lines of: if you think you can be both a Java developer AND a .NET developer, that is very wrong thinking. Pick one. Focus on it. It's the "jack of all trades, master of none" effect I was getting at. Certain technologies are designed to compete with each other, and using both will only cut your "experience" level on each one in half. I appreciated (and agreed with) all the commenters saying how wrong it is to just lock onto something and try to make a career out of it; however, the following are the winners: Java, JavaScript, TypeScript, HTML+CSS, and anyone who has those skills as primary on their resume has a bright future. If you don't, you're taking a risk that your "flavor of the week" will become obsolete. 10 years from now those core technologies will ALL be in demand.
Android is not open at all. The "Russian Google" - Yandex - tried to build their own Android phones, but vendors dropped them because Google just came to them and said "hey guys, if you go with them, we won't allow you to use Google Play on your phones at all." How is that open? I have no idea why people call it open.
Every so often is OK, but a new language / set of technologies with every app means you will be writing junior- to intermediate-level code every time.
When we are interviewing and I see CVs with so many technologies listed, I am fairly sure the candidate has very little depth of knowledge in many of them.
In our case we are required to stay flexible and take whatever projects come when one is released from a project; it is not always possible to say "I don't do X".
So every few months there is a new stack to work on.
Coincidentally I am certified in .NET development (MCSD.NET) and Java development (SCJP and SCWCD), and now I do iOS and Android development. It's possible!
> Developers do need to pick a set of core technologies, and stick to it or a career can fall apart I bet.
Over time any set of "core technologies" will be legacy and your career will be over. In this business, you'll always need to absorb and learn new ideas, platforms, technologies, languages, etc. or eventually you'll have to do something else.
I did both iOS and Android for a few years and it was fine. The only issue was the lag time for our customers getting features across the platforms; with a small team it takes a lot of time to implement on both. We have since rewritten the app as a SPA and share 90% of the code across the web and Android/iOS using Cordova.
iOS and Android are actually very similar and become more and more similar with each major revision. Where the syntax diverges, the unique concepts remain the exact same.
Librarians often don't realize that they can't keep plugging in method bodies and get what they want forever. Engineering isn't f&@!ing Madlibs for Christ's sake. Learn to build things that will work 100% of the time and never compromise!
Employers are also skeptical that you are competent with both; I wouldn't consider it a value-add unless you can land a lead position for a mobile team.
aside from working on multiple platforms diluting your experience with the latest development patterns, it is just as easy to be stuck maintaining an old project or legacy code with an employer. This isn't unique to mobile of course, but multiple platforms really exacerbates how fast it happens.
What does it prove? Not everyone is an independent developer / single-person software house.
The way it goes, more often than not, is that people write apps for various companies and then these apps don't get published under their own name, but these companies'.
Something someone uploaded to Google Play individually could be just their hobby project created in their spare time.
I haven't got anything in Google Play in my own name, and I've been an Android dev for several years, writing serious software for big businesses.
I have to say, refusing, being unable, or being too lazy to learn is the most terrible thing that can happen to a developer. I'm always excited about new things, even when I have to work hard to learn them, even when they start out with poor quality; I believe these are what make developers happy.
I feel you are concentrating on a very small and negative part of the post in a very disingenuous manner.
I read the article more as: "I have found throughout the years I've become a decent programmer in iOS and Android, I'd rather be a good or excellent iOS developer."
The end just briefly describes why he decided to go with iOS instead of Android, but that's about it.
You're clearly still young and haven't yet burned out on learning new technologies that (a) are just poorly thought out rehashes of existing technologies, with everything renamed to sound new, and (b) go obsolete within a few months, flushing your precious time investment down the toilet with them as they go.
Eh, I think it is a natural process though in one's professional life. Especially with something like phones where the complexity has just exploded within each platform. The number of ways to deploy code on iOS alone (ObjC, Swift, Cordova, React native, etc.) has increased rapidly, and the size of the system apis alone has increased massively. Some form of specialization is likely inevitable. There was a time when the web only had webmasters, and they did everything from devops to backend to frontend. Gradually it became handy for a lot of people to focus their professional career on one area they were particularly good at, especially to move to the absolute peak of their potential.
By all means people should try out things, and keep up with things, but for a lot of professional reasons it makes sense to have a core competency. Of course it also makes sense to move that as needed.
I have little patience for learning the same thing over and over but only slightly different. It's just a big waste of time. There's no enjoyment or happiness down that path.
Learning totally different and interesting stuff, that's a whole different story.
OP is not too lazy to learn; it's just that he has found learning Android development to be a poor investment. He would rather learn Swift instead, which I think is correct.
Yeah well, Craproid APIs are awful. It's gotten marginally better in tooling (the new IDE framework shits itself only about half as often as the Shitclipse-based one),
but there are still just too many brainfarts in the APIs. And ofc the implementations are super fragmented and buggy.
Here's a case in point: Android's MediaPlayer and its state diagram. This is a pretty straightforward pipe to the underlying Khronos OpenMAX API (which is totally brain-damaged and horrible by itself).
https://developer.android.com/images/mediaplayer_state_diagr...
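To see why that diagram bites people, here's a minimal, self-contained sketch (plain Java, not the real MediaPlayer class) modeling a handful of its states. Calling methods out of order doesn't throw a nice compile-time error; the player just lands in an error state at runtime, which mirrors how MediaPlayer behaves:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Simplified, hypothetical model of a few MediaPlayer-style states.
// The real diagram has more states (e.g. PlaybackCompleted, End).
class PlayerStateMachine {
    enum State { IDLE, INITIALIZED, PREPARED, STARTED, PAUSED, STOPPED, ERROR }

    private static final Map<State, Set<State>> LEGAL = new EnumMap<>(State.class);
    static {
        LEGAL.put(State.IDLE,        EnumSet.of(State.INITIALIZED));
        LEGAL.put(State.INITIALIZED, EnumSet.of(State.PREPARED));
        LEGAL.put(State.PREPARED,    EnumSet.of(State.STARTED, State.STOPPED));
        LEGAL.put(State.STARTED,     EnumSet.of(State.PAUSED, State.STOPPED));
        LEGAL.put(State.PAUSED,      EnumSet.of(State.STARTED, State.STOPPED));
        LEGAL.put(State.STOPPED,     EnumSet.of(State.PREPARED));
        LEGAL.put(State.ERROR,       EnumSet.noneOf(State.class));
    }

    private State current = State.IDLE;

    State state() { return current; }

    // Attempt a transition; an illegal call drops the machine into ERROR,
    // mimicking how MediaPlayer reacts to out-of-order method calls.
    void transition(State next) {
        current = LEGAL.get(current).contains(next) ? next : State.ERROR;
    }

    public static void main(String[] args) {
        PlayerStateMachine p = new PlayerStateMachine();
        p.transition(State.INITIALIZED); // like setDataSource()
        p.transition(State.PREPARED);    // like prepare()
        p.transition(State.STARTED);     // like start()
        System.out.println(p.state());   // STARTED
        p.transition(State.INITIALIZED); // illegal after start()
        System.out.println(p.state());   // ERROR
    }
}
```

The point is that the API pushes this whole table into the caller's head: nothing in the type system stops you from calling `start()` before `prepare()`, so every caller re-implements (or forgets) the diagram.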
"We often see questions from developers that are asking from the Android platform engineers about the kinds of design patterns and architectures they use in their apps. But the answer, maybe surprisingly, is we often don't have a strong opinion or really an opinion at all." (1)
While that may have been a lofty ideal, in practice Android has many strict requirements on how you partition your code between Activity, Fragment, ContentProvider, and Service classes. Never mind testability and all the new semi-opaque / intelligent battery optimizations Android applies to your app.
After all these years, I still find the most difficult and un-natural thing is mixing concurrency / background tasks that must outlive the UI with complex UI component lifecycles. This is a frequent and necessary thing to do, and also quite awkward. The result is unnecessary complexity that often and easily permeates the code. Dianne says they have no opinions on architecture, but where I disagree is concurrency is an architectural concern and there are definitely many corner cases & snafus mixing that with Android APIs.
(1) https://plus.google.com/+DianneHackborn/posts/FXCCYxepsDU
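To make the lifecycle/concurrency pain discussed above concrete: the usual fix is to park the background task somewhere that outlives the UI object, and have the UI attach and detach a listener around its lifecycle. A minimal, framework-free sketch (plain Java, hypothetical class names, not an Android API):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Sketch: the task lives in a process-wide holder that survives the UI
// object (e.g. across an Activity recreation on rotation). The UI attaches
// a listener in onStart()-like code and detaches it in onDestroy()-like code.
class TaskHolder {
    private static final ExecutorService POOL = Executors.newSingleThreadExecutor();
    private volatile Consumer<String> listener; // null while no UI is attached
    private volatile String cachedResult;       // redelivered on re-attach

    void start(Callable<String> work) {
        POOL.submit(() -> {
            try {
                String result = work.call();
                cachedResult = result;
                Consumer<String> l = listener;
                if (l != null) l.accept(result); // a UI is still attached
            } catch (Exception e) {
                // real code would cache and deliver the error the same way
            }
        });
    }

    void attach(Consumer<String> l) {
        listener = l;
        if (cachedResult != null) l.accept(cachedResult); // missed delivery
    }

    void detach() { listener = null; } // e.g. from onDestroy()

    static void shutdown() { POOL.shutdown(); }

    public static void main(String[] args) throws Exception {
        TaskHolder h = new TaskHolder();
        h.start(() -> "payload loaded"); // UI already torn down (rotation)
        Thread.sleep(200);               // task finishes with nobody listening
        h.attach(System.out::println);   // new UI instance still gets the result
        shutdown();
    }
}
```

The awkward part the thread complains about is exactly this boilerplate: every result has to be cached and redelivered because the UI object that requested it may be gone by the time the work completes.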