I wouldn't say that the sun is setting on Rails-style MVC frameworks, but I do think their role in the ecosystem is going to change. Before, most people could get away with writing the entirety of their app as a single Rails or Django app.
I think the shift first started with the ascendancy of native mobile apps. Now, developers had to seriously start considering their HTTP APIs as first-class citizens and not nice-to-haves. Once that happened, it's not a big leap to realize that treating your web application as somehow different from any of your native clients is a bit, well, insane. You can either choose to write a server that is a mix of JSON API and rendered HTML, with conditionals all about trying to figure out the right thing to render, or you can pull all of that logic out into a stand-alone JavaScript application, with better responsiveness to boot.
I think this approach is a winner. The server guys can focus on building a kick-ass server, and the front-end guys can build an awesome UI without having to muck about in the server bits.
One thing that still blows my mind is how hard it still is to get data from the server to the client. Everyone is writing custom code specific to their app. As the article says:
There's no reason for us to all separately think about these problems and solve them in a million different ways every time we're confronted with them. Aside from the years of wasted time this involves, we've also got a bunch of non-standard and sub-standard APIs to interact with, so all the client code needs to be custom as well and nothing is reusable.
I think this is a huge problem, and Yehuda and I are doing our part to try to solve it. Our Ember Data framework (http://github.com/emberjs/data), by default, assumes certain things, such as the names of your routes and how associations should be loaded. We want to enable people to start building apps right now instead of writing hundreds of lines of JavaScript that are custom to their application, and do it in a sufficiently comprehensive way that you very rarely need to drop down "to the metal." For example, we handle things like mutating associations on the client-side even before a guid has been assigned to a record.
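To make "assumes certain things" a bit more concrete, here is a minimal sketch of the convention-over-configuration idea, written from memory of the early Ember Data API -- treat the exact method names and conventions as assumptions rather than documentation:

    App.Comment = DS.Model.extend({
      body: DS.attr('string')
    });

    App.Post = DS.Model.extend({
      title:    DS.attr('string'),
      comments: DS.hasMany(App.Comment)  // association wired up by convention
    });

    // With the stock REST adapter, the store infers conventional routes
    // (GET /posts, GET /posts/1, POST /posts, ...) and loads associations
    // from ids in the payload -- no per-app sync code required.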
Personally, I'm excited for how this is going to all shake out. I think Rails will continue to be an important piece in the toolchain, but it will no longer be the primary one, and I can't wait to see how it evolves to fill that role.
"You can either choose to write a server that is a mix of JSON API and rendered HTML, with conditionals all about trying to figure out the right thing to render, or you can pull all of that logic out into a stand-alone JavaScript application, with better responsiveness to boot."
You get better responsiveness if you send down pre-rendered html for the first page load. This means you still need the conditionals, or suffer on responsiveness.
Not in the slightest. Speaking categorically, JSON + templates are smaller to transfer over the wire than fully-rendered HTML -- and can be cached in exactly the same way ... even bootstrapped into a single HTTP request.
If you care about optimizing it, you can get your JS UI to render faster than the equivalent pre-rendered HTML would have.
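A minimal sketch of the "bootstrapped into a single HTTP request" idea (BOOTSTRAP_DATA, PostCollection and renderPost are made-up names for illustration): the server embeds the JSON in the page it already has to send, and the client renders straight from it without a second round trip.

    // Somewhere in the server-rendered page:
    //   <script>window.BOOTSTRAP_DATA = { posts: [ /* serialized server-side */ ] };</script>
    //
    // The client app reads the embedded JSON instead of firing an XHR on load:
    var posts = new PostCollection(window.BOOTSTRAP_DATA.posts);  // hypothetical collection class
    posts.each(renderPost);                                       // renderPost() = your template call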
You can make your AJAX content crawler-friendly by following a specification Google currently supports. I'm not sure how much of this spec the other search engines support, but it's a good start: https://developers.google.com/webmasters/ajax-crawling/
That's correct. You basically end up serving your content to crawlers dressed up as HTML. It's not conceptually hard, but it does present some challenges. We already do this for blogs, where we serve alternate content for RSS readers; that's an apples-and-oranges comparison, but the idea is more or less the same. The takeaway is that there is a way to serve alternate content to search engines, and it involves following an AJAX crawling scheme.
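Roughly, the scheme maps example.com/#!/posts/1 to a crawler request for example.com/?_escaped_fragment_=/posts/1, and the server answers that with static HTML. An Express-style sketch of the server side (renderSnapshot is a hypothetical stand-in for however you produce the HTML snapshot -- a headless browser, server templates, etc.):

    app.get('/', function (req, res, next) {
      var fragment = req.query._escaped_fragment_;
      if (fragment === undefined) return next();       // normal visitors get the JS app
      renderSnapshot(fragment, function (err, html) {  // crawlers get plain HTML
        if (err) return next(err);
        res.send(html);
      });
    });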
Nope. Screen readers have been able to handle javascript-generated content for years. According to WebAIM's most recent screen-reader survey, 98.4% of screen reader users have javascript enabled: http://webaim.org/projects/screenreadersurvey3/#javascript
Crawlers, less so. The major search engines are getting smarter about it, but for now, you still need some kind of HTML output from the server if you really want to be indexed properly.
It doesn't rule out a pure client-side app at all, but you do have some extra work involved to output HTML from the server. Which is why NodeJS will ride on the coat-tails of this approach; less redundancy.
How effective is Javascript with screen readers, though? It's been a while since I played with a screen reader (late 2009 or so), but when I did even really simple things like a pop-up div confused it. It didn't inform the user that something new was being rendered or anything.
Surprisingly effective. A colleague of mine published some techniques to add consistency to Javascript events across screen readers: https://github.com/ryanfitzer/Accessibility
The approach in the Java world for many years has been to implement Services (or Repositories, if you don't need complex operations) and wrap them via the facade pattern by utilizing:
- JAX-RS (RESTful)
- JAX-WS (WebService)
- MVC (view-based web-app)
And contrary to what people believe, you don't need to write tons of code, and you still get full re-usability from the Services/Repositories layer.
On the front-end, you have a choice of:
- GWT for thick-client via RPC (return serialized Java object) or AJAX (return JSON)
Look at Spring Roo; it's a more modern approach and uses Active Record rather than service layers/repositories. It makes generating JSON APIs really simple.
I'd add that when you take this approach there is little reason your myapp.com endpoint needs to be anything more than a resource loader. Nginx + AppCache is all you really need to make your app load instantly with plenty of connections. Your API can, and probably should, sit on a completely different server.
This also requires a different way to work with designers. I would love to get parts and chunks designed and delivered ready to inject into the existing app (e.g. a JS template), rather than the whole-page/PSD approach.
7 looks a lot like 5 actually. You could say it's already happening in Cloud gaming (Gaikai, OnLive), but for general purpose application I think we need to have either:
- new types of (underpowered) devices: body augmentation?
- or much larger UI and processing needs: Photo filtering is more and more back in the client... video processing a-la Animoto? Siri? 3D UIs so favored in SF?
Where did AJAX apps go? If you ask me, these are thick clients. The funny thing is that HTTP explicitly omitted AJAX in order to achieve linkability and thin clients. With AJAX, linkability got thrown out the window and the clients just keep getting fatter. No wonder the server gets a less important role. I hope that people understand that they are breaking REST when they build these types of apps.
How are they breaking REST? What does the method of consumption (e.g. AJAX enhanced pages vs single page apps) have to do with restful API design... at all?
edit: also possible I misunderstand your statements
I was talking about REST websites, A.K.A web 1.0 websites.
What do I mean? Well, a 1.0 website generally follows REST. REST means representational state transfer, which means that each application state should have a representation on the server. Another way of expressing this is that each possible state on the website should have a URL. No more "go to example.com and click X, then Y, then scroll, then Z". Instead, every possible state has a link, so you just give the link. HTTP was designed to enforce linkability.
Enter AJAX. Suddenly the server is out of control. You can now deviate from the linkability principle, and a lot of apps do.
When the linkability constraint is lifted, the client state is allowed to deviate from the server state. This gives less responsibility to the server. No wonder that it gets less to do.
That is what I meant.
You can still use a REST API from a web 2.0 website, but that is another question. A web 2.0 webpage plus a REST API to fetch data makes the total app non-REST (at least if you don't actively try to make it so).
You missed the round of 'apps that are local on your machine and the data is local on your machine so you better have a backup'. That is the part we are escaping from now.
There is still the issue that once you grant access to a third party you can't revoke it, but that's unsolvable through technical means in somewhat the same way DRM is unsolvable.
> Front-end frameworks like backbone.js, as well as advances in web technologies like HTML5's history.pushState are now making server-free views a realistic quality of cutting-edge front-ends.
This is not exactly correct. Many web applications are now foregoing client-side templating and are back to doing it on the server. GitHub is a great example of this, and Twitter is going this route too.
> Many web applications are now foregoing client-side templating and are back to doing it on the server. GitHub is a great example of this, and Twitter is going this route too.
They had client-side templating, and they switched back to server-side? Got any links for that? I'd love to read them (seriously, not snarkily).
They were switching from hashbang routing to pushState, not from client-side to server-side rendering. Although they might do pre-rendering for IE, which will only get pushState support in version 10.
I think there are examples in both directions; Facebook and, more recently, LinkedIn seem to have embraced "let's build a page out of many different calls in JS", if I am not mistaken.
Those sites aren't using Rails (edit: too heavily) either, which was the basis of comparison in the article.
Github/Twitter are massive with millions of visitors... they need to minimize every millisecond. It's not just about choosing the best developer-friendly or cleanest option.
Performance is #1.
That's not the case for the average smaller apps.
Frameworks like Backbone.js and Ember are growing very quickly, and every open example app I've seen uses client-side templating.
The main Twitter UI is JS talking to api.twitter.com, which is written in Scala.
They use Rails for pre-rendering the first page and some non-JS pages like user settings and company info, from what I can tell looking at their server responses.
There are several issues that make moving to a simple json api with all templating and routing on the client still very difficult.
1. Models on the client just aren't there yet. No relations, hacky one-off routing for sync. This is a big deal on more complicated apps where you want to represent multiple collections at once on a page and have nested resources. There are lots of other issues: sync timeouts, server validation failures, authorization, etc. that you will have to write yourself. Also, you are then dealing with two different ORMs, which sucks.
2. i18n. This is basically unsupported on any of the client side frameworks I've seen. Even if it were, if you go partially client and part server rendering then where does your i18n live? You have to go all the way or not at all or deal with the mess.
3. SEO. If you care about SEO on the pages you are loading, you need to render on the server for the initial page load which is a PITA and redundant with client templates. Google is fixing this but their solution creates a different kind of mess.
Not such big issues:
1. Browser compatibility. This will fix itself in a year, or two?
2. Performance?? I guess it depends on how you measure performance and where. I think in general it is only the initial page load that is impacted and only the first time with caching. You still have to do a second request for the data itself I guess. Whatever. This is the least of my (and probably most people's) concerns.
The reality is, frameworks like Rails have built a lot of supporting features that are necessary and client frameworks like Backbone up to now just don't do everything they need to do. When they do, you'll still have SEO issues.
All these problems can and will be resolved one day. I especially look forward to being able to use the exact same models/relations/templates on both the server and the client, which is hopefully where some combo of express/node/jade plus a client-side widget library is headed.
i18n. This is basically unsupported on any of the client side frameworks I've seen. Even if it were, if you go partially client and part server rendering then where does your i18n live? You have to go all the way or not at all or deal with the mess.
Keep i18n strings and templates as JSON or some other format that can be easily parsed by both the client and the server, one for each language. Have both your client and server code use the file corresponding to the user's locale.
Could this be easier with better i18n support in today's client-side and server-side frameworks? Most definitely. Is it really that messy? I don't think so.
(And yes, I'm aware that not all i18n can be expressed as simple string templates, but those are corner cases that don't add THAT much complexity to the system.)
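As a sketch of that shared-locale idea, assuming a per-language JSON file like locales/en.json containing { "greeting": "Hello, {name}!" } (the file layout and helper name are illustrative, not from any particular i18n library), the interpolation helper can run unchanged in Node and in the browser:

    function t(strings, key, params) {
      return strings[key].replace(/\{(\w+)\}/g, function (match, name) {
        return params[name] !== undefined ? params[name] : match;
      });
    }

    // t(require('./locales/en.json'), 'greeting', { name: 'Ada' })  => "Hello, Ada!"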
> (E.g. I can't see a client going "oh!, there's a new business function I haven't seen yet, let me invoke that!".)
Of course not, nobody thinks that. That notion does not exist.
> With the rels/links, you're just moving the coupling away from explicit URLs to the names/identities of rels/links in the response.
I suppose, but that's a much looser coupling than the alternative, i.e. writing in the documentation "the comments URL is http://example.com/comments" -- then you can't rearrange your URL structure, you can't start using a different domain (e.g. a CDN) for comments, existing clients can't use other sites that implement the same API, etc.
HATEOAS is about building general protocols rather than site-specific APIs. That it makes it easier to change your own URLs is just a bonus.
I'm pretty sure Fielding does, see "improved on-the-fly":
"The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand)."
Hypermedia is great for intelligent clients (e.g. humans) who can adapt to, say, a webpage changing and new fields suddenly showing up in the hypermedia (HTML) that are now required.
However, for an application, it's going to be hard-coded to either do:
1) POST /employee with name=foo&age=1, GET /employee?id=1
Or
2) GET /hateoas-entry-point, select "new employee" link, fill out the 2 fields (and only 2 fields) it knew about when the client was programmed (name, age), post it to the "save employee link", go back to "/hateoas-entry-point", select "get employee" link, fill in the "id=1". (...or something like that).
In either scenario, the non-human client is just as hard-coded as the other--it's either jumping to external URLs or jumping to internal links. Either way those URLs/links (or link ids) can't change and the functionality is just as frozen.
Perhaps the benefits of hypermedia would be more obvious if Fielding built a toy example or two that we could all touch and feel instead of just dream up. But so far there seem to be a lot of non-HATEOAS REST APIs that are doing just fine sans hypermedia.
I think it's amazing that you're the first person I've ever seen make this very obvious point, which was the first thing that popped into my head when I first started reading about REST and HATEOAS APIs (or as I would call them, navigational APIs*). It's always seemed to me that REST tutorials and evangelism ought to address this basic objection upfront, if they do have good counterarguments, but I've never seen them do so.
(A good HATEOAS client would take the form of a graph navigator - this style of programming is one I associate more with "AI" than with typical web programming patterns. Which doesn't make it bad necessarily, but the REST material I've seen doesn't actually get into the navigational client programming side of things, which is the actual interesting part.)
This has been my feeling for a long time, and I have never seen a HATEOAS proponent address it to my satisfaction. Frankly I think it is a pretty important point. Are we expecting automated consumers of a REST API to be curious and spontaneous the way human users of the web are?
For me, the 2 things that set off my bullshit detector about HATEOAS are these:
1. Everybody who buys into it, including Roy Fielding himself, has to constantly say "no, that's not what I meant" and "you're not doing it right" if a REST service doesn't use HATEOAS. It reminds me of the response you get when you point out to college kids how badly Communism works out in the real world (see the USSR, North Korea, et al.): "No no no, it could really work, just nobody has done it right yet"
and 2: All the arguments eventually come back to an Appeal to Authority. "Well, Roy Fielding's dissertation says...". I sense that the real argument eventually comes down to whether a service can be called RESTful according to Fielding's dissertation, vs whether or not HATEOAS is actually a good idea.
I use HATEOAS because I derive very specific benefits from the client/server decoupling it enables. I have used it to build a large business application.
I agree that far too many people talk about HATEOAS who have never really used it on real projects, and that is unfortunate. However, I suggest you avoid throwing out the concept just because the messengers are inadequate.
With the exception of spiders and other AI-like things, no, we are not expecting clients to spontaneously consume RESTful services in meaningful ways. REST clients are generic. They don't know anything about specific services, thus allowing those services to evolve independently. A client that is coupled to a specific service is not RESTful, nor is any API that can only be consumed by such a client.
Your comment is correct, despite the downvotes. The hypermedia isn't there so AI or spiders can navigate; it is there to reduce coupling.
--
A Rant follows. Disclaimer: I'm not an expert on the subject so feel free to correct me or anything.
The WWW itself is "RESTful". Even Hacker News is RESTful. You don't care about the URLs that show in your address bar. You don't construct them based on an ID-number on the side of each post. You just click around and submit forms.
With hypermedia, your consumer doesn't need to know about those specifics of URL construction. URLs are transparent. You just query for Resources and follow links.
YES, absolutely ZERO coupling is MUCH easier with boring CRUD-like stuff, such as Microsoft's OData, GData or AtomPub.
Maybe when you're writing a good consumer tuned for usability (like a Twitter client) you, app builder, will need to know some details on the service beforehand, but even then, having hypermedia would be cool. With a nice and pretty RESTful API, Twitter could just roll out new features, such as 'people who recently retweeted you' and you'd have a new section show up on your app instantly, since it discovers resources. Maybe you could even, like, load icons for new sections via links on the API itself...
Another example: I used to work on an app that used hypermedia: an Enterprise Content Manager with an OData API. Enterprise people used a decoupled enterprise client called Excel to connect to our Enterprise server and build random reports themselves.
So say you want to get data from a RESTful web API: do you have to customize your generic REST client? Because everything I've ever written that called an external API had to know what it was looking for in advance. For example, to interact with Twitter's API, I went to their documentation page and read up on what URLs to call for the information I needed.
If you want a client coupled to a specific service then you don't want REST, you want a classic client-server architecture, which is more or less the antithesis of REST. But everyone insists on calling it REST when it goes over HTTP, then they complain that the apple tastes nothing like an orange.
> both of which may be improved on-the-fly (e.g., code-on-demand).
It's funny that you're using this quote to prove your point, when it actually identifies the perfect example that you're looking for.
Imagine that we have a relatively "dumb" client that can only understand our media type and follow URLs to the next resource. We already agree that this is useful when you have an intelligent actor (a human), so let's move on to the part that you're interested in: "improved on-the-fly".
Let's take our dumb client and add one feature: A Javascript engine. This gives us the "code-on-demand" that Fielding referenced. You can now improve your dumb little client, by adding application logic that can be executed on-the-fly. Your client can now be upgraded to understand new media types, or to change the behavior of interaction with existing media types. And yes, this means the client can be upgraded even when there is no human interaction.
Want a real world example?
I recently wrote a javascript application that automatically runs a set of comprehensive tests on HTTP requests (and their caching properties) and collects the data. I was able to turn the dumb client into a smarter one (for my purposes). It has a single entry point and programmatically follows many different kinds of links. My tests run on approximately 1 billion clients, and I didn't have to update any client software to make that happen. If I want to check the cache behavior of resources behind my business partner's proxy server, guess what: I can upgrade his client (his browser) on the fly to do the testing for me.
Want another real world example?
We used javascript in a webview (dumb client) on mobile platforms to create unit tests for some native functionality. Our dumb client happens to have a bridge to the native code on the mobile device. This allows us to write one set of unit tests in javascript and run it on multiple mobile platforms. This is a great example that falls outside of the standard desktop web browser examples.
So let's take a look at your example.
> In either scenario, the non-human client is just as hard-coded as the other--it's either jumping to external URLs or jumping to internal links.
We were smart enough to develop a dumb client that can execute code on demand. Now let's upgrade our client to validate the input fields _before_ POSTing back to the server. Bam, we just improved our client on-the-fly, without modifying the client software.
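Concretely (the form id and field names are made up for illustration), the server can ship a snippet like this alongside the form, and the previously dumb client now validates before the POST ever happens -- no client software update required:

    document.getElementById('employee-form').addEventListener('submit', function (e) {
      var name = this.elements.name.value;
      var age  = parseInt(this.elements.age.value, 10);
      if (!name || isNaN(age) || age < 0) {
        e.preventDefault();  // block the bad POST
        alert('Please enter a name and a valid age.');
      }
    });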
So you may be thinking: But how useful is this to me? If I'm developing a new network-based business analysis tool for my company, it's probably not going to run on 1 billion clients. Do I really need to consider hypermedia as the engine of application state and on-the-fly updates? Well no, of course not, that would be overkill. As Fielding put it:
"REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them."
I would not necessarily expect my client application to invoke these functions automatically. It can be very helpful writing client apps when there is a human making these decisions.
One of the primary benefits of HATEOAS is state management of your resources becomes easier. My client does not need to burden itself with interrogating the state of the resource, knowing available business functions and when they should be invoked, knowing the location of these business functions, etc. This is managed on the server via API and the client can stick to "following the links."
e.g.
"Create Blog Post" is a REL that I have discovered at application root. After invoking this function and creating the post, I receive a 201 with a post resource that looks like below:
PostResource, SomeData, Link -> "/Modify Blog Post", Link -> "/Delete Blog Post", etc.
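As a hedged sketch, that 201 body serialized as JSON might look something like this (the rel names and shape are illustrative, not any particular media type):

    {
      "post": {
        "id": 42,
        "title": "Hello HATEOAS",
        "links": [
          { "rel": "modify",   "href": "/posts/42",          "method": "PUT"    },
          { "rel": "delete",   "href": "/posts/42",          "method": "DELETE" },
          { "rel": "comments", "href": "/posts/42/comments", "method": "GET"    }
        ]
      }
    }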
In the above example, my client application can display the links it is given and as a user I will modify or delete the post by simply selecting these functions. The client app is acting as a state machine for my user, and they are given the option to transition to various states based on the links returned with each response from the API.
There are many other benefits to HATEOAS, of course, one of which is it allows your application/api to grow and change over time (maybe I want to change a non-core URI down the road?)...this can be done without a lot of pain points by leveraging HATEOAS.
TLDR: HATEOAS constrained APIs are meant to provide direction (in the form of links) to consumers. This can lead to lightweight client agents that do not need to worry about application state.
The assumption here seems to be that a user is interacting directly with the client. In that case I can see where HATEOAS has some benefits. I see far less benefit in using HATEOAS for systems that are primarily used within the lower layers of a system, where the primary consumer is another computer, yet REST purists generally don't seem to acknowledge the difference. So tell me, why should I use HATEOAS in an API designed for non-human consumers?
A web app is a browser for your business domain. When implemented on top of a REST/HATEOAS service, it should be a layer that knows how to interpret and present interactions to work with the business objects coming across the wire.
URIs and hypermedia just turn out to be a really good way to structure this kind of architecture.
Personally, from an ASP.NET perspective, it's the same. To me, ASP.NET is done. Mixing server-side with client-side UI is just wrong; they should be decoupled completely. I want to build an HTML5/JS/CSS UI. I want to keep it clean, and I want to talk to a data service (or a mocked-out data service) to do what I need. Starting out with ASP.NET, I used to think this was just totally the wrong way to do things! Then ASP.NET MVC came along, which was much nicer than all the pain of Web Forms (ViewState etc.), but now, as I said already, I want my client side to be written with no knowledge of the server-side workings, just plain old HTML(5)/CSS, leveraging the power of JavaScript and associated frameworks (I like knockout.js and amplify.js).
It's the convergence on standards and the sophistication of browsers that really matters. We all use browsers, and all the big vendors now appreciate this, enabling progression to standard presentation technology. Innovative, successful businesses are delivering great services and UX using these standards, and they are showing the rest that they need to move to keep competing.
Responsiveness and performance are strong drivers for a rich/smart/thick client; there really are very few advantages to a thin client in application terms from a UX point of view.
Managing upgrades is a cinch with browsers, so if you have a sophisticated client-side JavaScript UI it's easy to re-release.
Sure, I guess if you fundamentally misunderstand MVC but not in the real world.
You don't have to render your view on the server; using a JS framework is perfectly fine in MVC. There are plenty of Rails plugins to make this even easier, and I wholeheartedly disagree with "Rails-style MVC frameworks have a horrible routing solution for RESTful JSON APIs". Rails has simple routing that works great for RESTful resources and auto-renders JSON that can be easily overridden.
Having a unified/standardized REST interface for web services would no doubt be nice, but its existence has little to nothing to do with MVC.
I'm very much looking forward to (and hoping to help build) thick-client frameworks with the same level of polish and maturity as Rails 3. Obviously there's a long way to go, but I think it will happen.
I'm not a Node expert, but I'd be willing to bet we end up with at least one Node framework at the head of the pack, if for no other reason than it enables easy sharing between server and client-side logic. (E.g. write your validation rules only once.)
You could of course write the backend in any language, and you could even have some code sharing if you automatically generated JavaScript from your backend language. (E.g. ClojureScript.) But I'm attracted to the simplicity of just using JS, and not having to worry about all the weird little things that can happen when you compile one high-level language to another.
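A sketch of the "write your validation rules only once" idea, assuming nothing beyond plain CommonJS plus a script tag or bundler on the client (the module and function names are made up):

    // validation.js -- required by Node on the server, loaded in the browser as window.validation
    (function (exports) {
      exports.validateUser = function (user) {
        var errors = [];
        if (!user.name) errors.push('name is required');
        if (!user.email || !/@/.test(user.email)) errors.push('email is invalid');
        return errors;
      };
    })(typeof module !== 'undefined' ? module.exports : (window.validation = {}));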
at least one Node framework at the head of the pack, if for no other reason than it enables easy sharing between server and client-side logic
I'm surprised this still hasn't been seriously tackled. The major node frameworks (namely express) all seem to follow the old-fashioned rails-model which seems like an anachronism on that particular platform.
Our Bones[1] library for node.js does this; in most applications we have written, probably in excess of 90% of the code is shared between server and client.
We have written cloud based hosting systems[2], desktop applications[3] and more standard web applications with it. I think it wouldn't be too hard to make it possible to build phonegap'esque apps with it too.
It's pretty great stuff, but it opens you up to very new and interesting problems due to the environment being so different. The client side has absolutely no concept of a 'request', and there is a lot of stuff that just can't be done on the client (is this email address unique?). The server side does not have the long running state that the client does, which causes another category of problems.
I think it's going to become a more dominant approach, because it's just so damn convenient, but it's going to take a bit more time to properly 'crack' it.
"This probably means that 'controllers' need to go away in favor of 'resources', and routes can be vastly simplified, if not completely derived/inferred by the framework."
Also, Zope maps URLs to method invocations on objects or their containers - http://server/ob1/ob2/m1 is served by storage['ob1']['ob2'].m1() (there are ways to mess with that, of course)
There's often a distinction between APIs intended for general consumption (platform APIs) and APIs optimized for JavaScript and/or mobile clients ("private" APIs).
A platform API tends to be stable, versioned, well documented, and "unoptimized" or strongly RESTful. GETing a resource (noun) returns just that one representation.
e.g. GET /v1/user/123/profile or GET api.linkedin.com/v1/people/id=123
"Private" APIs tend to return more data in bulk, optimized to reduce the quantity of remote calls and improve page load times. The responses tend to be structured in a way that's easier for the client (browser/mobile app) to render content, usually by including more tangentially related data than a traditional REST resource would contain.
e.g. a browser's load of https://github.com/rails/rails does GET github.com/rails/rails/graphs/participation
Twitter uses the platform API in the browser.
e.g. GET api.twitter.com/1/trends/1.json
I'd be interested to hear from others leveraging APIs in their browser/mobile clients what they're using for MVC (e.g. backbone.js vs server side) and whether they've "optimized" their APIs for the client.
On related note, it surprises me that client-side frameworks like cappuccino.org are not used more often for building full-featured web apps. It seems to me that the abstraction from HTML and CSS is quite a desirable thing, I find the experience of modern desktop frameworks like Cocoa thousand times better than fiddling with HTML. Does anybody here have substantial Cappuccino experience? Why isn't it more successful?
Why not? I ask sincerely. It seems like a reasonable approach to me for web apps, just NOT for simple sites that rarely change or are not "application" like
Firstly, web app vs web site isn't a binary distinction - it's a gradient. Is Flickr an application or a site? It provides extensive tools for uploading and organising photos... but the bulk of the site is flat pages people use for navigating vast amounts of content.
Secondly, URLs really, really matter. Twitter have a big engineering task on now to clean up the mess made when they switched to using broken-URL #!s. The ability to link to things is the most important factor in the Web's success. An application built as a fat JavaScript client that talks to an API is opting out of the URL-driven web.
Even if something uses #! or pushState to emulate linkable URLs, if I can't hit a URL with an HTTP client library (not a full-blown JS+DOM implementation) and get back a content representation, it's not a full-blown member of the web - it's no better than content that's been wrapped up in a binary Flash container.
Don't think I'm not pro-JavaScript - I'm a big fan of using JavaScript to improve user experience etc (heck, I have code in the first ever release of jQuery). I'm just anti JavaScript being required to consume and manipulate the Web.
I'll concede that there are plenty of applications for which a fat client does make sense (image manipulators, data vis tools, possibly interfaces like gmail although I'm continuously infuriated by my inability to open gmail messages in new tabs). But the thinking embraced by the original article, that Rails-style frameworks are on the way out because EVERY site should be built as a fat client, is dangerously short-sighted in my opinion.
Hashbangs are just a hack while we are waiting for the browsers of the world to all support push state. It is easy to use, well supported, and works really well. What twitter was saying is that in a public, content based app like theirs, the trade off of using that hack is not worth it, so they are moving to push state, and degrading to a worse experience for browsers that don't support it. Twitter isn't an argument against js apps, it is an argument against js hacks to provide fancy functionality to old browsers.
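The usual feature test looks roughly like this (renderRoute is a stand-in for whatever your client-side router does): use pushState where it exists, and degrade to the hashbang hack elsewhere:

    function navigate(path) {
      if (window.history && window.history.pushState) {
        window.history.pushState({}, '', path);  // real URL, no hash
        renderRoute(path);
      } else {
        window.location.hash = '#!' + path;      // older browsers: fall back to #! and onhashchange
      }
    }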
But then we need to use the same set of templates on the server side (on full page loads) and on the client side (when updating via JSON)? Or do we do like Quora and generate HTML on the server side?
I'm not saying I agree with twitter, was just trying to explain their argument :) I think it is better to go fully one direction or the other. Either don't support IE9-, or go full reloads until they feel comfortable not supporting IE9-. (or stick with the hash bangs)
In a more general way, I use Backbone to make data-driven components. Those components are always rendered client-side, and the layout/static content is rendered server-side. I think duplicating would be the path to madness. Generally, it's fine to bootstrap initial data on page load and render everything. But in cases where that takes too long, I have rendered a "dead" version on the server (like, greyed out with a spinner) and then replaced it on the client.
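A Backbone-ish sketch of that "dead version first" trick (the collection, view, and selector names are hypothetical): the server ships the greyed-out placeholder, and the client swaps in the live view once the data arrives:

    var comments = new CommentCollection();
    comments.fetch({
      success: function () {
        var view = new CommentListView({ collection: comments });
        $('#comments')
          .removeClass('placeholder')   // drop the greyed-out styling / spinner
          .html(view.render().el);
      }
    });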
I believe SEO is one of the main things that hinder client-thick architecture adoption for many apps that depend on their content showing up on search engines, especially among content-driven websites (e.g. Stackoverflow, Quora, etc.). But I agree, all signs are pointing at the direction of thick-client implementations and eventually search engines will solve the indexing problem.
I agree with the author, but for diametrically opposed reasons. It's more that I don't think my web framework should decide that I need to be using MVC. If I want to tightly couple my view logic to my model logic, that's my (and my team's) business. If I want to drink the MVC Koolaid, that's my business as well. It isn't the business of the person who wrote my framework.
It's much simpler to handle views and view logic in only one place, and that place is slowly moving away from the server side.
I see. So one is simpler than two? That's a bit simplistic. Every situation is unique, and it's impossible to make such a sweeping statement that's true in all cases.
So MVC frameworks are going away for thick-client apps because their routing is not convenient for defining resources? And that their templating abilities are too powerful (or not powerful enough?).
Not sure I get it.
(Current routing schemes do not seem overly difficult for this, and depending on your MVC framework, you can plug in your own routing.)
1. I dumped view helpers for decorators (namely, the Draper gem). I got rid of complex logic in my view templates. Most of the logic I need in my views is transformed object properties. I need a date to appear as mm/dd/yy. I need a filesize in bytes to appear as KB/MB. I want a comment formatted as Markdown to be HTML. So now I'll decorate my objects so I have a nice post.formatted_date instead of needing to write the helper formatted_date(post). At worst I have some conditional statements in my views looking for the existence of a property. CSS also comes into play; the :empty selector allows me to cheat in some cases.
2. I write my view templates in a JavaScript templating language. I am a fan of slim for Ruby, so I went with Jade for JavaScript. I use therubyracer to call on these precompiled templates within Ruby itself, and render them like anything else. I then go on to use these same templates client-side. The reason why I abandoned view helpers to a large extent is because of this. Any logic that I need within a template would have to be duplicated server-side and client-side, which is antithetical to the goal. For my use-cases I've been able to do this successfully. It requires an adjustment to mindset, but is feasible. And really, my templates are a lot cleaner now than they've ever been.
3. When a user lands on a page, they get rendered HTML. Subsequent requests use AJAX and JSON responses to load things dynamically from there. Best of both worlds. Also, users who have JavaScript disabled can use the site albeit with not as much slickness.
4. Using to_json is hell; don't do it. I use the RABL gem for assembling my JSON, and use the same decorated objects. In the case of JSON, depending on your API, you might want to include say an ISO8601 date as well as a formatted date. Not a big deal, just vaguely duplicative.
The downside was how much code I had to write for myself. Using JavaScript templates was a biggy. But this is something that could probably be packaged as a gem, if I or someone else took the time. The framework (in my case, Padrino) still provides lots of tools that I need. Ruby ORMs are a big part of this.
There's still the issue of duplicative routes. I have routes defined in the app itself, and then within my JavaScript framework (currently Spine, but previously Backbone) I have to hardcode these same routes. I don't like this, however, routes are probably the last thing to change in my app if I put any thought into them ahead of time. This is something that requires additional thought, but I'm thinking there should be a way to get my app routes available client-side.
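One way to stop hardcoding the routes twice (a sketch, not a library): have the server dump its route table into the page as something like window.ROUTES = { post: "/posts/:id" }, and build client-side paths from that:

    function urlFor(name, params) {
      return window.ROUTES[name].replace(/:(\w+)/g, function (match, key) {
        return encodeURIComponent(params[key]);
      });
    }

    // urlFor('post', { id: 7 })  => "/posts/7"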
Well, the ASP.NET MVC 4.0 beta, released a few days ago, has better support for "Web API" (i.e. data endpoints) and for single-page apps, so it seems that they are aware of these trends.
I saw this and downloaded it earlier this week. It wouldn't build straight off: there were bugs in the script references, and EntityFramework 4.1 wouldn't resolve for me locally (had to add web.config), etc. Not a great start (no biggie, just saying). I resolved these issues anyhow.
On review, it's fair enough if you love ASP.NET MVC, but I really don't like the mixing of the view logic with the model in ASP.NET MVC; like I already said, keep it clean. From a view point of view, keep it decoupled, I say; keep any server-side code out of there. At the end of the day, I have to agree ASP.NET MVC has got rich resources and it is a great offering for developers, no doubt.
And MVC 4 uses knockout.js behind the scenes anyhow, which is really great, and personally I think that knockout goes far enough.
The only reason I say this is because I just refactored about 20 Web Forms views to pure HTML/CSS/JS on the client side, and it turned out I had no need for any server side at all except for the RESTful service (hidden away in my knockout view model, using amplify.js to abstract the service calls) created on top of my domain model.
So I am thinking: why do I need MVC if I have RESTful WCF / ASP.NET Web API?
Going this way means I am totally decoupled!
I am much happier building a restful service, knowing that any client can consume this, and I think I am happy building client side anyway that I so choose and not tying myself into ASP.NET MVC unnecessarily.
So here's the deal for me if I go pure W3C on the client:
1. No mixing in logic
2. Don't care about the server or server side code.
3. I can mock my backend real easy ( e.g. Amplify.js )
4. I don't need Visual Studio for client side dev.
5. There's a growing wealth of open source libraries
6. Makes me think more about the structure of my server side behavioural domain model.
Anyway, just how I see things... always open to more compelling arguments!
Can you expand more on "the mixing of the View Logic with the Model in ASP.NET MVC"? I must have missed a trick in how to further separate concerns. I favour strongly typed views, ViewModels and thin controllers. But are you talking about how the generated HTML is just output, and contains both page structure and page data?
Also with this release of MVC, the Restful data controllers are added on the side. Designing this from scratch, it would probably be different – if your client wants JSON, XML or some other format, then the Data endpoint is for you. If you want text/html, then it's special and you go somewhere else.
We can already serve JSON or XML data off controller endpoints in MVC3, and using a bit of extra code, even switch between them by checking the Accept header. But never mind, it's not done until it's in the framework.
If you really are past using MVC views entirely, and have just pages without server-side markup (except perhaps feeding in a data URL) plus a REST API, then you are an outlier and can consider other frameworks besides MVC, such as OpenRasta. Or Ruby.
MVC is not an opinionated framework – you can do things any number of ways. If it supports Data API + static HTML websites, which it looks like, that will not be the only thing that it supports.
I am not past anything, I am just saying this paradigm is now kinda defunct to me. I would prefer to work a little differently.
No doubt MVC facilitates all of the above. If you favour strongly typed views, then that's good, continue that way, but it's not necessary (the controller is now basically going to be a RESTful web service).
"We can already serve JSON or XML data off controller endpoints in MVC3" - true, and you'll probably be using Web API to do that soon.
I am just saying it's not necessary to mix; I feel it overcomplicates the client by making you mix in server-side logic a la Razor, Web Forms, whatever.
I don't particularly see the point in it now, though. If you look, for example, at the mapping plugin for knockout.js, it will take a JSON source and automatically resolve it into your JS view model. So on the client, all I am doing is thinking about the client. Give me a REST API and some JSON objects and I am off and running.
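For example, something like this is roughly all the client-side wiring that's needed (the URL is made up; ko.mapping.fromJS comes from the knockout mapping plugin):

    $.getJSON('/api/orders/42', function (data) {
      var viewModel = ko.mapping.fromJS(data);  // every property becomes an observable
      ko.applyBindings(viewModel);              // markup stays plain HTML with data-bind attributes
    });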
All I am simply saying is that pure HTML/CSS/JS can be done without any knowledge of the server on the client side, just the interface contract, and like I say, a RESTful WCF service / Web API is good.
I am not an "outlier" when I say I think it's a good way to develop; it's my opinion. The truth is, I did refactor a load of Web Forms lately, and the result was:
1. HTML/JS/CSS - knockout.js and amplify.js
2. JSON service (C# behind it, serving the data and validating business rules)
> "We can already serve JSON or XML data off controller endpoints in MVC3" - true, and you'll probably be using Web API to do that soon.
Yes, I will as soon as I'm on a released version of ASP MVC 4. I get that format flexibility at no extra cost using no custom or server-side code. Win. It enforces the separation of concerns between serving html pages and serving API data. Which may or may not be good depending on how you think about your App. For my existing apps I think it's positive.
ASP MVC is a framework that is not opinionated as I said - it won't insist on doing things the right way, and it is also a framework that is not innovative. Most or all features have been pioneered and proven elsewhere. I'm not saying that as an insult, there are big upsides to that approach and I can see why MS stays in the mainstream.
No disrespect intended. I meant to say that you're well ahead of the ASP.NET pack. Many of them are still making webforms, never mind getting over MVC or even experimenting with various ways of doing things that have not yet been blessed by Microsoft.
I don't think Rails-style MVC frameworks are going to disappear for a lot of content heavy internet sites, but API first architectures are going to continue to become popular I think. That is part of the reason I built radial https://github.com/RetroMocha/radial
The tooling around rapid API development seems very immature right now, but it's going to keep getting better. Once it is drop-dead easy to write APIs first, you will see a lot more apps having an HTML5 app, Android app, iOS app, Windows 8 app, Mac app, etc. as the standard rather than an edge case.
Monoculture is a terrible thing. I don't think there are any frameworks or methodologies which should never be used, nor any that should always be used. Learn the advantages and tradeoffs of each, and use the right tool for the job.
That said, I've personally found that as nice and clean as MVC frameworks can be, they aren't always necessary. As long as there is a separation between logic and presentation, it's possible to write clean, readable, maintainable code without a formal MVC structure.
That's all well and good, if you do your API with Rails.
Assuming this is an actual API, consider a rogue script slamming your API: whoops, your main app is out as well, and nobody is signing up for your service.
Assuming this is for backbone/batman/other-generic-clientside-framework, nope. Too much JavaScript makes everything seem sluggish, and I really don't want to reimplement everything twice (once in Rails for conventional browsers and for people who disable JavaScript, second in client-side MVC)
I stopped caring about this long ago. I write database-backed web applications that require authentication, and in this context if a user has JavaScript turned off, that's their problem, not mine.
Sort of funny; as soon as I hit a new site that requires javascript, I often run away! But of course that's my problem; I get why user preference would get in the way of your clean implementation. :)
So all I'm seeing is complaints about inflexible URI routing and excessively opinionated model classes. This stuff is basically solved in at least one other "Rails-style" framework (https://metacpan.org/module/Catalyst::Runtime). Web controllers are a rat's nest anyway; the whole thing is basically broken, and the only sane thing to do is to seek an acceptable compromise.
What I'd love to see is a setup where I log into any computer and my stuff is all right there. Say I borrow your iPhone to do some stuff. Instead of using your setup, I log in and all my apps are right there. Some run from the server, some get pushed to the phone and used locally if that's more efficient. Let's call it client & server, anywhere-you-go computing, or something like that.
A client & server setup probably means that you buy access to a mobile carrier's calling/data plan that isn't attached to a particular device. You buy a handset, tablet, laptop, yada yada that is a standalone device, or use one at the local Internet cafe. A set of standards is put in place for how code runs on a client and a server. Your data may remain on a client, but ideally it is stored entirely in the cloud, too. The cloud data is encrypted and can't be accessed without your permission.
This is my dream and I'd be surprised if I'm the first one to think of it.
Bashing? Nope. Rails is simply not how people write web apps anymore. Most people write a UI-independent API layer, and then write the UI in Javascript. This makes Rails largely unnecessary; you still use the individual pieces, but the framework as a whole no longer makes sense.
None of that is true. If you think it is, you're in a sheltered bubble too focused on the new hotness from the valley. Out in the real world, where there are millions upon millions of business apps being written, most people are still banging out apps server side.
I am from a sheltered bubble: one where we enjoy engineering efficient systems, and one where I work with the smartest people in the industry.
This is how we wrote boring business software at the investment bank where I last worked, and this is how we write software at Google, where I work now. People write apps this way now because it's faster, easier, and more flexible. (For me it means I don't have to tweak UI code anymore, there are lots of people that like to do that and now they don't have to care about the backend. Just use my API!)
But sure, I believe that not everyone does this; I still maintain legacy apps too. Why rewrite millions of lines of code nobody really cares about?
But that doesn't mean you should write a new web app in 2012 like you did it in 2008. Just like we stopped writing CGI scripts when good frameworks became available.
> But that doesn't mean you should write a new web app in 2012 like you did it in 2008.
There's actually plenty of reasons to continue doing things the old way. Just because new approaches are available doesn't mean they're always automatically a better fit than the old ones.
You can't overlook the people issue either. Companies don't just hire new staff because new stuff came out; they use their existing staff who may not have time to keep up with the latest and greatest, and it makes perfect sense for them to keep doing things the old way, which works just fine.
It was a bit of a link-bait title; ironically, Rails is still probably the best server-side framework for writing the kind of app he is advocating (just ignore the view bits). It is more that we need a "next evolutionary step" in how frameworks work to make writing rich client apps easier.
If you poke around the newly redesigned http://html5rocks.com/, this is pretty much exactly where they're going with it. Thick rich HTML5 app, cached in the client, and passing data back and forth between the server.
The background is really distracting (and not very aesthetically pleasing). There is not enough contrast with the light-colored links. These two things combined make the whole site difficult to read.
The approach breaks a lot of 'separation of concerns' rules between the view and the model, but for certain kinds of APIs (read-only, in particular) this may be fine. Would be interesting in feedback/seeing where similar projects have gone.
Yeah, tell that to the overwhelming number of clients worldwide that don't support client-heavy operations (and will probably remain so for the next ~5 years).
I've been writing my web applications in a "resource-like way" since 2009. This small framework supports resources represented as FOOP objects. Look at the wings example over here:
http://rundragonfly.com/dragonfly_routes
No Ajax, no URLs, no REST-ful religion. In fact ajax just feels wrong now. I unfortunately have to add some backward compatibility to my websocket apps, and I just get this really icky feeling like I really shouldn't be doing this arcane stuff anymore.
REST is just a layer of indirection over a regular RPC backend anyway. It may be easier to reason about things like caching and side effects at the HTTP level. Of course HATEOAS is mostly useless unless you can stay within general-purpose protocols like AtomPub and its extensions.
> RPC's over WebSockets is what will replace REST/Ajax.
If SPDY takes off, there’s hardly any advantage to WS besides push events. Stateless protocols are easier to scale.
The first problem to solve is to eradicate JavaScript from the entire programming world, 100%.
After that, program the server and the client in Java or, if on Windows, C#. GWT works this way, although they compile the client code to JavaScript. If we remove JavaScript we get better performance, more deterministic behavior across platforms, and we totally eliminate one language we have to know, making our lives easier and the browser thinner and better-performing.
When someone comes out with a browser that does that we will be seeing the next killer app.