What we wish we'd known before building the Prismatic Android app (github.com/nstevens)
171 points by nstevens on Dec 11, 2014 | 34 comments



I primarily program Android and have talked to many other Android programmers.

The primary blunder companies make when building an Android app is pretending they're making an iOS app. This is very widespread.

The company has a database. They have a website. They have an iOS app which connects to a web API connected to that database. They have a designer who makes his 2 (or 3?) designs for each iOS form factor.

Then they decide to make an Android app. The web API is usually OK. But then it gets decided that the company will ignore the fact that Androids come as tablets, very small low-resolution phones, very high-resolution phones, "phablets", etc. The goal becomes a pixel-perfect design (just like for the iPhone!) for the Android phone the designer has, and perhaps the one the boss has, and then to pretend that the other 99% of the Android market does not exist.
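
Android's actual answer to that spread of hardware is dp units and resource qualifiers (e.g. a separate res/layout-sw600dp/ directory for tablet-sized layouts). As a rough sketch of the runtime side, with made-up layout names:

    // Inside an Activity: pick a layout by smallest screen width. In practice
    // resource qualifiers do this declaratively; this just shows the idea.
    Configuration config = getResources().getConfiguration();
    if (config.smallestScreenWidthDp >= 600) {
        setContentView(R.layout.main_tablet);   // hypothetical tablet layout
    } else {
        setContentView(R.layout.main_phone);    // hypothetical phone layout
    }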

There is more Android work than experienced Android programmers, so some inexperienced kid right out of college ends up as the programmer on the project, and is easily intimidated by the designer. They proceed this way until weeks and months go by, the CEO realizes the app looks like junk on tablets and most phones, and then blames the programmer for the situation.

Anyone thinking of stepping into a role as an Android programmer should get everyone on the same page about the UI goals before agreeing to work on the project. Especially between you and the designer and whoever your common management is, perhaps all the way to the top. Because the easy path is for the designer to do what he did for iPhone: two pixel-perfect designs for the two phones he has access to, which is completely pointless if the app looks like crap on 99% of phones.

This situation arises again and again and again.

If there's any secondary thing, I guess it would be how far back you want to support Android in terms of versions. You really don't want to support something before v3.0 if you can avoid it.


Completely agree. Some of the unhappiest engineers & designers I know are on teams trying to copy the company's iOS app on Android.

We were admittedly in the position you describe: a company of mostly iPhone users who shipped the web & iOS apps, cleaned up the API, and were starting on Android.

I don't think there's a perfect solution but we found digging into some of the guides & sample apps listed here to be particularly useful: https://github.com/nstevens/androidguide/wiki/General-Androi...

We also ordered a range of devices and hooked them up to our Play Store alpha channel so every build had to go through the 3" phone and 10" tablet test.

Finally, it helped to have a couple of hardcore Android users on the team who could explain why simple features like the 'back' button are truly game changers. There's a good post on some of these differences here: http://paulstamatiou.com/android-is-better/


I do both iOS and Android, but am stronger on Android. In addition to the "pixel perfect" nonsense, there's the failure to actually write an Android app, instead limiting it to what is normal on iOS. A good Android app should be a mashup, and it should have pieces that cooperate, including outside the app where appropriate. A very quick telltale sign of an iOS port is that the list of other interactions is built in (e.g. which services it can "share" with).

https://developer.android.com/guide/components/fundamentals....
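
For example, sharing should go through the system rather than a hard-coded list of services. A minimal sketch (inside an Activity; the URL is made up):

    // Implicit share intent: any installed app that handles plain text shows up
    // in the chooser, so there's no built-in list of "supported" services.
    Intent share = new Intent(Intent.ACTION_SEND);
    share.setType("text/plain");
    share.putExtra(Intent.EXTRA_TEXT, "http://example.com/some-story");
    startActivity(Intent.createChooser(share, "Share via"));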


As someone who is rapidly becoming an old-timer, I have to say I miss the old days of Make. It was everywhere and had weird quirks, but you didn't need to spend too much thought on it. These days, I see projects using Gradle, Maven, Ant, Ivy, ... and it makes me feel like there's yet another tool I have to learn in order to use X. As a technophile, I'm not against the new new thing. Newness in languages is fantastic: it lets us express ideas or concepts that were previously hard to express. But the explosion in tools that do somewhat similar things makes me feel like we're doing the wrong thing as a community. I have an equal disdain for devops solutions like Chef, Puppet, CFEngine, Ansible, whatever. Is software not mature enough to just standardize on a few and get on with it? Surely there is not much in it for the winners (maybe I am deluded about this). Anyway, rant over.

P.S. I was being facetious about missing Make. Point stands.


It might be accurate to say that we're doing the wrong things but iterating fast. That's why we have this tumult of tools. Software is definitely not mature enough to just standardise on a few of them.

I feel confident in saying that because the thought of permanently standardising on any existing build or configuration management tool fills me with horror. In the JVM world, ant is inadequate, Maven is diabolical, Gradle is a huge step forward but has many warts and one or two fundamental mistakes, and sbt is just vile. Gradle is good enough to be getting on with, but i'm looking forward to the next step.

Provided, that is, that the next step is taken after learning from previous steps. Sometimes that happens - Ansible and Salt are clearly attempts to improve on the overcomplexity of Puppet and Chef. Sometimes it doesn't - i'm not sure that Gulp or Leiningen do anything to help.


> Gradle is good enough to be getting on with, but i'm looking forward to the next step.

Check out Buck. It's fast, the codebase and general complexity is a tiny fraction compared to Maven or Gradle, and it's more sound in at least one fundamental way. (It uses file hashes instead of modtimes to figure out if a task has to be rerun.)

I agree with the rest of your post. Most build systems suck. Here's an off-hand idea:

All build systems I know of share the same core concept: a directed acyclic graph of tasks. Each task has inputs (which might be the outputs of other tasks), and if the inputs change then the task is rerun. That same idea also covers a lot of systems for data processing.

Why can't we have a simple, minimal tool for doing only that? Then, we could plug in different task definitions for different uses (eg building a Go project, a Java project, or running a data pipeline).

This seems preferable to a bunch of different application-specific build systems that each roll their own DAG, and their own DSL for defining tasks.
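
A toy sketch of that shared core, not modelled on any real tool (and skipping the part where inputs are hashed to detect changes):

    import java.util.*;

    // Tasks form a DAG: each task lists the tasks whose outputs it consumes.
    class Task {
        final String name;
        final List<Task> inputs;
        final Runnable action;
        Task(String name, List<Task> inputs, Runnable action) {
            this.name = name;
            this.inputs = inputs;
            this.action = action;
        }
    }

    class TaskRunner {
        private final Set<Task> done = new HashSet<>();

        void run(Task task) {
            if (done.contains(task)) return;            // run each task at most once
            for (Task input : task.inputs) run(input);  // dependencies first
            task.action.run();                          // a real tool would skip this
            done.add(task);                             // if the input hashes matched
        }
    }

    // Usage: "compile" depends on "generate", so running compile runs both.
    // Task generate = new Task("generate", Collections.emptyList(), () -> ...);
    // Task compile  = new Task("compile", Collections.singletonList(generate), () -> ...);
    // new TaskRunner().run(compile);

The application-specific task definitions (a Go build, a Java build, a data pipeline) would then just be libraries of Task factories plugged into the same runner.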


We do have such a tool. It's called make.


And the “modern” reimplementation of make is ninja: http://martine.github.io/ninja/


> The explosion in tools that do somewhat similar things makes me feel like we're doing the wrong thing as a community.

Yes. And, that thing is forgetting to teach history to people. There was a story a few days ago about a father pushing his son to play video games in chronological order [1]. Perhaps we need to do something similar in software development.

[1] https://medium.com/message/playing-with-my-son-e5226ff0a7c3


In my experience, two of the most important attributes of a good developer are awareness of historical context (understanding important lessons that have been learned, and how those lessons influenced major decisions) and knowledge of common idioms of the technologies/tools/languages they use.

I really like your idea. Imagine a book, class or structured tutorial that sliced up the history of web development (or UI design, Windows app development, parallel programming, game development, network communication, systems administration, mobile development, or any other kind of technical topic) into a handful of important "eras" and spent a couple of hours or days on each one doing a technical deep dive. Boot up a VM, install the dev tools of the day, have your hand held through some characteristic tasks so you could see first-hand how people were thinking during that era, what kinds of things were easy to do, and what was hard. As you move to the next era, you get to see the results of the lessons that were learned (or not learned!) in the previous one.

The more I think about it, the more I like this idea.


Mixing a progression through programming language/machine/tooling history and nand2tetris, and perhaps a companion tetris2nand [1], would make for an amazing foundation for future programmers. You would just have to make sure the high school students still pass the AP exams, to justify the multiple courses.

[1] Tetris2nand does not exist. The idea would be to build the "ideal programming language" and then work your way down to the hardware, exploring things like GCs, parallelism, and more along the way.


I've been trying (and failing) to go through Peter Norton's Assembly book to start learning x86 assembler. The plan was to then do some DOS VGA programming and work my way through Turbo Pascal and C. I'd love to take a course that's historically focused as you described.

Sidenote: If anyone knows someone at Penguin, can you help me get permission to put Peter Norton's book on GitHub and update it?


i've done exactly that when i was in school. you can still do it. i believe turbo pascal/c is available even today and supports asm blocks directly, so you can do assembly programming in pascal/c.


I hear that. This is from the readme of a pretty big OSS project:

"a port from Ruby to JS using Opal transcompiler, a Rake build script, a Grunt build, using the Rake build underneath."

I suddenly felt as if I didn't understand computers anymore.


How can the people who maintain these projects be satisfied with this experience? I can't imagine ever starting to setup a project like this and saying to myself "this is a good idea, we should keep going this way, and not start over from scratch." Especially in more established projects, stuff that lots of people have already been working on for a while. It's been a serious impediment for my own attempts to get involved with open source software.


I just looked up this project Domenic_S mentioned. It says why it does what it does right in the first line of the README:

> This project uses Opal to transcompile Asciidoctor—a modern implementation of AsciiDoc—from Ruby to JavaScript to produce asciidoctor.js, bringing AsciiDoc to the browser!

The whole point is to not rewrite it in JavaScript but to piggyback on the original Ruby implementation and just compile it to JavaScript so it can run in a browser.



'Best practices' sounds a bit pretentious to me; 'how we dev at Futurice' would be less presumptuous. Some of these points are common sense, others are arbitrary library or architecture choices that are not going to fit every project, and some just need to be updated ASAP.


Regarding the use of Google Play alpha/beta testing, I created a Jenkins plugin to automate app uploads: https://wiki.jenkins-ci.org/display/JENKINS/Google+Play+Andr...

Though personally I use HockeyApp as you can upload iOS/Android apps and they become immediately available to testers — no waiting for several hours for your apps to appear, as occurs with Google Play. HockeyApp doesn't tie you to a particular form of user management either. Hopefully Microsoft doesn't change too much there.


great! added a link to the wiki


Every time I see a post like this I always learn a few new snippets for best practices. It almost seems like there's a need for a primer site for all sorts of languages and ecosystems.

I'm sure something like this exists, right?


As an Android developer building and maintaining a larger app, I'm curious: Are there any performance hits or worrying complexity issues caused by including multiple paradigm-shifting libraries like RxAndroid, Dagger, and Butterknife? I've always tried to keep my app slim by keeping out unnecessary libraries (particularly ones that require I follow non-standard platform development), but I'm open to change if it's worth it.


I'd say the two most important libraries for our app have been Retrofit & Picasso. Both integrate nicely with RxAndroid, so using it as well is a natural extension. I think most people would probably only need RxAndroid or Otto, but we've enjoyed using both. We mainly use the former for chaining network calls and the latter to maintain a bus of state updates.
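
For instance, with Retrofit 1.x a service interface can return an rx Observable directly, which is what makes chaining network calls feel natural. A sketch with made-up types:

    import retrofit.http.GET;
    import retrofit.http.Path;
    import rx.Observable;

    // Retrofit generates the implementation; the Observable return type plugs
    // straight into RxJava/RxAndroid chains. StoryService and Story are hypothetical.
    public interface StoryService {
        @GET("/stories/{id}")
        Observable<Story> story(@Path("id") String id);
    }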

Dagger 1.x (which we currently use) certainly helped slim down code size, but its mix of compile-time and run-time injection made using tools like ProGuard a bit messy. We haven't made the jump yet, but it seems like Dagger 2.x solves this: https://github.com/google/dagger

Butterknife is just plain useful to avoid a ton of view boilerplate code.
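
Roughly (this is the 2014-era Butter Knife API, and the layout/view ids are made up), it turns a pile of findViewById casts into:

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;
    import butterknife.ButterKnife;
    import butterknife.InjectView;

    public class StoryActivity extends Activity {
        @InjectView(R.id.story_title) TextView title;  // bound by id, no cast needed

        @Override protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_story);   // hypothetical layout
            ButterKnife.inject(this);                   // wires up all @InjectView fields
            title.setText("Hello");
        }
    }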

Our app is admittedly not too complicated but so far we haven't seen any performance issues.


RxAndroid does bring some overhead because you have to create a lot of anonymous classes (the lack of Java lambdas really hurts here).
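
E.g. every subscription ends up looking something like this (RxJava 1.x style; the service and view names are made up):

    // Without lambdas, each callback is an anonymous Action1 instance.
    storyService.story("42")
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(new Action1<Story>() {
            @Override
            public void call(Story story) {
                titleView.setText(story.title);
            }
        });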

Butterknife and Dagger both use compile-time generation, so they're basically free and help keep your app well structured. I really wouldn't avoid 3rd-party libs in your position; using good ones will make your apps less buggy and will let you easily create a way better UX for your users.

For example, EventBus (or Otto, both do the same thing) is pretty essential for keeping components loosely coupled while still talking to each other, and for keeping things working across orientation and other configuration changes. I've seen so many apps locked into portrait orientation just because the devs couldn't figure out how to decouple the model from the views and keep the app running across an orientation change.
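
A rough sketch with Otto (the event class and bus wiring are made up): the model posts an event, and whatever UI instance is currently registered receives it, so a recreated activity just re-registers and carries on.

    import com.squareup.otto.Bus;
    import com.squareup.otto.Subscribe;

    // Hypothetical event posted by the model layer when a story finishes loading:
    //     bus.post(new StoryLoadedEvent(title));
    class StoryLoadedEvent {
        final String title;
        StoryLoadedEvent(String title) { this.title = title; }
    }

    // Consumer side: register with the shared Bus in onResume, unregister in onPause.
    class StoryScreen {
        @Subscribe
        public void onStoryLoaded(StoryLoadedEvent event) {
            // Update the UI. After an orientation change the new instance simply
            // registers with the same bus and keeps receiving updates.
        }
    }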


When I think Prismatic, I think clojure(script). Did they write their app in Clojure?


Engineer from Prismatic here. The Android app was written in Java. We would have loved to build it with Clojure, but support/perf on Android isn't quite there yet (at least for us).



The lack of Google login in the app itself is kind of glaring for an Android app. Not everyone has Facebook or Twitter.


Anyone know of a similar doc for iOS?


This document has some good advice: https://github.com/futurice/ios-good-practices


I would ask how they dealt with the pain of nested fragments.


Don't nest fragments. Just don't. Fragments are half baked. Nested fragments are half (quarter?) baked.

In most cases the inner fragment could be made into a custom View, and those work a lot better. Most new Android developers do not make enough custom views. They are the fundamental unit of UI re-use. It's relatively simple to extend FrameLayout or RelativeLayout, encapsulate a few normal views, layer your logic on top, and you have a nice reusable component.
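
A bare-bones sketch of the pattern (the layout and ids are made up):

    import android.content.Context;
    import android.util.AttributeSet;
    import android.view.LayoutInflater;
    import android.widget.FrameLayout;
    import android.widget.TextView;

    // A reusable component as a custom view instead of a nested fragment.
    public class StoryCardView extends FrameLayout {
        private final TextView title;

        public StoryCardView(Context context, AttributeSet attrs) {
            super(context, attrs);
            // Inflate ordinary views into this container and keep references to them.
            LayoutInflater.from(context).inflate(R.layout.view_story_card, this, true);
            title = (TextView) findViewById(R.id.story_title);
        }

        // The component's logic lives here rather than in a fragment.
        public void bind(String storyTitle) {
            title.setText(storyTitle);
        }
    }

Then it drops into any XML layout like a normal view.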


As an experienced Android dev, I cannot agree more. I have never had a particularly fun experience with fragments, although they definitely have their place. Custom views though... I love them more every day.


We had high hopes for nested fragments, especially to support UI interactions like those in the Gmail app: http://stackoverflow.com/q/12253965/2561578

However, in our v1 the only fragments that share an activity are our main feed & story view. All others have their own activity. This made data flow a lot easier, our manifest file a lot cleaner when filtering for intents, etc.

Interested to hear any tips if you have them!



