There is no real-time compilation going on.
What they are doing is writing the model code once in Java, then running J2ObjC once to transpile it to Objective-C.
The appeal of this approach is obviously that you only write the model code once. Main benefits:
- Same implementation everywhere: you can't hit a roadblock because one platform implemented its model in a very different way and needs to refactor it entirely for new feature X. You also can't have a platform-specific model bug.
- Less time spent coding the model.
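To make the shared-model idea concrete, here is a hypothetical example (the class and fields are invented for illustration, not taken from Gmail's actual code) of the kind of plain-Java model class this approach shares. Because it has no Android or UI dependencies, J2ObjC can transpile it as-is, so the same business logic runs on both platforms:

```java
// Hypothetical model class: pure Java, no platform dependencies,
// which is what makes it a candidate for J2ObjC transpilation.
public class Message {
    private final String id;
    private final String subject;
    private boolean read;

    public Message(String id, String subject) {
        this.id = id;
        this.subject = subject;
        this.read = false;
    }

    // Shared business logic lives here once, instead of being
    // re-implemented separately on Android and iOS.
    public void markRead() { read = true; }

    public boolean isRead() { return read; }
    public String getSubject() { return subject; }
    public String getId() { return id; }
}
```

The point is that only code like this (state plus business rules, no UI) goes through the transpiler; each platform still writes its own native views on top.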
I work on a very large-scale mobile app where we have completely separate clients hitting the same REST API. We have already considered such a move (or something similar, like a common C++ module), but so far we keep totally separate web, Android, iOS and WP applications:
- The cost of migrating a big app to such a solution is huge.
- Any bug in the generated code is the promise of a nightmare to debug and fix.
- I am not sure it would save us that much time. The 70% figure does sound impressive, though, so I might be wrong.
- Our product managers are obsessed with features and don't care about the technical side of things. It is already very hard to rein in their shitty ideas, like iOS design everywhere, or never doing any maintenance because there are always urgent new features. So even if we wanted to, we would never get them on board for such a big refactoring.
Why did they have to create a data model in Java and then cross-compile it into other languages for their various platforms?
Why not just create a RESTful API on top of Gmail, and then just build clients to consume that API?
What's the difference between the two approaches? What is the benefit to doing it the way they did it?
It feels like sending JSON down the wire would be quicker than waiting for some real-time compilation to be done and sent to the client.
Am I missing something?
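One way to see the difference the question is asking about: a REST API only standardizes the wire format, not the client-side logic that sits on top of it. Here is a small sketch (all names hypothetical) of the kind of local model logic, such as computing an unread-thread count, that every client would still have to implement itself even if the server just sent JSON; this logic, not the transport, is what J2ObjC lets the platforms share:

```java
import java.util.List;

// Sketch with invented names: client-side model logic that operates on
// data already received from a REST API. Without code sharing, each
// platform (Java, Objective-C, JS, ...) re-implements this separately.
public class Inbox {
    public static class Msg {
        public final String thread;
        public final boolean read;
        public Msg(String thread, boolean read) {
            this.thread = thread;
            this.read = read;
        }
    }

    // Count distinct conversation threads that contain at least one
    // unread message -- typical logic a mail client runs locally.
    public static long unreadThreads(List<Msg> msgs) {
        return msgs.stream()
                   .filter(m -> !m.read)   // keep unread messages
                   .map(m -> m.thread)     // collapse to thread ids
                   .distinct()
                   .count();
    }
}
```

Sending JSON answers "how does data get to the client", while the shared-model approach answers "who writes the logic that interprets that data", so the two are complementary rather than competing.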