
I agree that it sucks when the "worse" technology wins, but your points about why React is an example of one of these "worse" technologies are seriously misinformed about its shortcomings, and they largely gloss over some of its most significant merits over the technologies that preceded it.

> React seems to be designed to ignore 40 years of accumulated software best practices.

Best practices in software are not set in stone. Sometimes you need to break from the best practices of the past in order to arrive at truly powerful new approaches to solving problems, approaches that may not seem intuitive at first but eventually redefine best practice through their technical merits.

The Virtual DOM and re-rendering on every change is one such approach popularized by React, but it's definitely not the only one, nor is it even the most significant one, in my humble opinion.
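To make that concrete, here is a minimal sketch of the re-render-everything pattern (the component and the element ID are invented for illustration): the UI is a pure function of state, and on every change you simply render the whole tree again and let React's virtual DOM work out which real DOM nodes actually need touching.

    // Sketch: re-render the entire app on every state change; React diffs
    // the virtual DOM against the previous render and patches only what
    // actually changed in the real DOM.
    function App(props) {
      return React.createElement('ul', null,
        props.todos.map(todo =>
          React.createElement('li', { key: todo }, todo)));
    }

    let todos = ['write reply'];

    function rerender() {
      ReactDOM.render(
        React.createElement(App, { todos }),
        document.getElementById('root'));
    }

    rerender();
    todos = todos.concat('add example'); // new array, not a mutation
    rerender();                          // only the new <li> touches the DOM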

Before React, mutable models were the only game in town. Angular imposed mutable models, Ember imposed mutable models, Backbone imposed mutable models, and your average custom ad-hoc jQuery framework probably imposed mutable models too. Everyone blindly followed this "best practice" simply because it had been the status quo since the earliest days of UI development.

React is the first JS UI framework that does not impose a mutable model. As a result, the ClojureScript community was able to build on top of it and show the world that change detection can be implemented as a constant-time reference comparison when immutable models are used [1]. The immutable-model approach was highly unintuitive (who would have thought that immutable data structures, whose individual operations are intrinsically slower than their mutable counterparts', could make your app faster overall?), but its results are clearly evident, and it enables nothing short of a quantum leap in UI performance.

[1] http://swannodette.github.io/2013/12/17/the-future-of-javasc...
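A rough sketch of why this works, using plain frozen objects for illustration (a real app would use a persistent data structure library like the ones discussed in the linked post):

    // Because updates never mutate, a change always produces a new
    // top-level reference, so "has anything changed?" is a single === check.
    const stateV1 = Object.freeze({
      user: Object.freeze({ name: 'Ada' }),
      items: Object.freeze([]),
    });

    // An "update" copies the changed path and reuses everything else.
    const stateV2 = Object.freeze({
      ...stateV1,
      user: Object.freeze({ name: 'Grace' }),
    });

    console.log(stateV1 === stateV2);             // false: something changed
    console.log(stateV1.items === stateV2.items); // true: this subtree is
                                                  // untouched, so a renderer
                                                  // can skip it entirely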

> Separation of concerns? Who needs that any more?

Your definition of separation of concerns appears to be the need to keep templates, styling, and logic in separate files, which strikes me as a rather superficial distinction, to be honest. In any case, as mercurial already mentioned, JSX is completely optional, and React makes it easy to keep your templates in a separate file from your logic and your styles.
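For what it's worth, JSX is just syntactic sugar over plain function calls; here's a sketch of the same element written both ways (the class name is invented):

    // With JSX (needs a compile step such as Babel):
    //   const greeting = <h1 className="title">Hello</h1>;
    // Without JSX -- plain JavaScript, no build step required:
    const greeting = React.createElement('h1', { className: 'title' }, 'Hello');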

Ironically enough considering your rather petty criticism, React and Flux-inspired architectures like Redux have in fact popularized a much more important separation of concerns: the separation of application state from the operations on that state.

This new separation of concerns, along with Flux's insistence on a unidirectional data flow, has removed much incidental complexity from app state management and enabled a whole new generation of developer tooling, like component-localized hot reloading and time-traveling debugging [2].

[2] https://www.youtube.com/watch?v=xsSnOQynTHs
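A minimal sketch of that separation in Redux terms (the action and state shapes are invented): the state is a plain value, and the only way to change it is to dispatch an action through a pure reducer, so every state transition flows through one well-defined channel.

    const { createStore } = require('redux');

    // The reducer is the complete catalog of operations on the state.
    function counter(state = { count: 0 }, action) {
      switch (action.type) {
        case 'INCREMENT':
          return { count: state.count + 1 }; // new object, never mutated
        default:
          return state;
      }
    }

    const store = createStore(counter);
    store.subscribe(() => console.log(store.getState()));
    store.dispatch({ type: 'INCREMENT' }); // logs { count: 1 }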

And this is the essence of why React has won over the likes of Angular, Ember, and Backbone. Developers have always been able to build great applications, even in the frameworks that came before React. But React and Flux-like architectures allow developers to manage complexity significantly better than the frameworks that came before them.

This is why "it allows teams of 100 developers work together on an app". And as a front-end developer who has never worked on such 100-developer teams, only on solo projects and with small 2-10 dev teams, I can state with confidence that I reap much of the same benefits of this reduced complexity. In fact, I probably benefit even more, because as a solo/small-team developer my complexity budget is absolutely tiny compared to what bigger teams can afford.

> Open standards? Nah, how about lock-in to custom language extensions that will prevent you from migrating your code to the next web standard!

The standardization process on the open web is painstakingly slow compared to the rate at which new technology is generally adopted and refined. This slow, methodical approach gives standardization committees plenty of time and input to think through every proposed standard, but it's also one of the main reasons why very few libraries, even those with standardization as the end goal, begin life as some kind of standard proposal.

It is much easier to gain traction as a standard if you already have an established implementation in a library/framework that is mature and well-adopted, and can demonstrate the merits of your approach. This is the approach taken by projects like TypeScript, and it's probably safe to assume that many aspects of React will be integrated into various standard tracks in the not too distant future.




>Ironically enough considering your rather petty criticism,

A lot of your response seems to be emotionally laden. Bad form.

>Redux

I've been reading about Redux, and as soon as another app project comes along for me to try it out, I'm planning to give it a try.

As I've said more than once in this thread, to a large degree my own opinion is still forming on this topic. React does things that feel to me like "bad code smells," but it's possible that I need to adapt to the New Way of Thinking.

>And this is the essence of why React has won over the likes of Angular, Ember, and Backbone. Developers have always been able to build great applications, even in the frameworks that came before React. But React and Flux-like architectures allow developers to manage complexity significantly better than the frameworks that came before them.

I've been writing software -- mostly games -- since I had to use assembly language for everything. Having a language (Java!) or framework (in this case React) explicitly protect me from myself almost always ends up slowing me down and slowing down the resulting app (yes, even React [1]).

Despite this I'm probably going to give React a real try at some point, even if it's in a toy project. I've been coding for 35 years, and it doesn't take me long to get the flavor of a new technology and its limitations once I finally sink my teeth in. It's all about finding the time...

[1] https://aerotwist.com/blog/react-plus-performance-equals-wha...


> A lot of your response seems to be emotionally laden. Bad form.

You're definitely right, I apologize for that. I cringed at parts of the post myself when I went back and read it again, but by then it was too late to edit.

> I've been writing software -- mostly games -- since I had to use assembly language for everything. Having a language (Java!) or framework (in this case React) explicitly protect me from myself almost always ends up slowing me down and slowing down the resulting app (yes, even React [1]).

Yes, micro-optimized, low-level code will always have an edge in terms of absolute raw performance, but there's a huge cognitive overhead involved with working with code like that, and you simply can't afford to do it across your entire codebase if you want to build new features and make changes quickly. What frameworks like React offer is a way to architect your application so that it is amenable to global optimizations that obsolete entire classes of performance problems you'd otherwise have to micro-optimize by hand, case by case. This frees up more time to work on features, and more time to thoroughly profile your code and micro-optimize the parts that actually lie on your app's critical paths.

Regarding your linked article:

If you take a look at the Vanilla benchmark, you can see the rendering time outpacing JS compute time as we approach the 1200 mark, whereas for the React benchmark the rendering time stays essentially constant. This is one example of a global optimization at work: the Virtual DOM spares us from having to micro-optimize DOM rendering for each of our components.

Regarding the JS performance scaling characteristics, I believe it probably has something to do with this:

> Did you set shouldComponentUpdate to false? Yeah that would definitely improve matters here, unless I need to update something like a “last updated” message on a per-photo basis. It seems to then become a case of planning your components for React, which is fair enough, but then I feel like that wouldn’t be any different if you were planning for Vanilla DOM updates, either. It’s the same thing: you always need to plan for your architecture.

This brings me to another example of a global optimization that React enables (this one doesn't come with React by default, but React is the first JS framework that made it possible): the use of immutable data in change detection. It allows you to implement shouldComponentUpdate as a single reference check for every component across your app. Change detection for the state array in the example would then become a constant-time operation rather than the much more expensive deep object comparison React had to perform, which is probably the root cause of the poor JS compute scaling as the number of photos increased. I strongly recommend taking a look at the first link in my original post if you're interested in more details; the sketch below shows the basic shape.
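A minimal sketch (the component and prop names are invented): with immutable data, "did this photo change?" collapses into one reference comparison.

    class Photo extends React.Component {
      shouldComponentUpdate(nextProps) {
        // Immutable updates always produce a new object, so reference
        // inequality is a complete change test; no deep comparison needed.
        return nextProps.photo !== this.props.photo;
      }

      render() {
        const { photo } = this.props;
        return React.createElement('img', { src: photo.src, alt: photo.title });
      }
    }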


>Yes, micro-optimized, low-level code will always have an edge in terms of absolute raw performance, but there's a huge cognitive overhead involved with working with code like that

I disagree. Code written to be optimized for a particular use case may itself be challenging to follow, but using it doesn't have to be difficult at all if it's just a component with a well-documented API. The Vanilla code in that benchmark article wasn't particularly hard to understand, for instance, and it could be wrapped in a Web Component to isolate the complexity.
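Something like this, for instance (a rough sketch; the element name and its API are invented): the hand-tuned DOM code lives inside the element, and consumers only ever see the tag.

    class FastPhotoGrid extends HTMLElement {
      connectedCallback() {
        // The micro-optimized rendering is hidden behind the element boundary.
        this._container = document.createElement('div');
        this.appendChild(this._container);
      }

      setPhotos(photos) {
        // Imagine the benchmark's hand-tuned DOM updates here;
        // callers never need to read or understand them.
        this._container.textContent = photos.length + ' photos rendered';
      }
    }

    customElements.define('fast-photo-grid', FastPhotoGrid);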

Think about OpenGL/WebGL and the complexity, math, parallelization, and optimization tricks that API conceals. At this point writing pixel shaders is almost easy, and yet they enable insane levels of parallel math with very little cognitive load.

I've written game SDKs that concealed a lot of complexity and yet were easy to use [1], so my gut reaction is to start with generic components that don't restrict what I can do, and then build up an easy-to-reason-about DSL from which the app is assembled. Based on my development history, I'm also likely to be making apps that are more than simple CRUD (with real-time interactive features, including games), so my personal requirements and point of view may be a bit different from the typical front-end developer's.

>I strongly recommend taking a look at the first link in my original post if you're interested in more details.

OK, I'll take a look.

[1] Look at the listings for "Playground Game Engine"; the list is incomplete, but it gives you an idea: http://www.mobygames.com/developer/sheet/view/developerId,13...


> I disagree. Code written to be optimized for a particular use case may itself be challenging to follow, but using it doesn't have to be difficult at all if it's just a component with a well-documented API. The Vanilla code in that benchmark article wasn't particularly hard to understand, for instance, and it could be wrapped in a Web Component to isolate the complexity.

I don't think we actually disagree. =)

By "working with code like that", I meant actually writing, understanding and changing micro-optimized, low-level code. Your example of building on top of a micro-optimized, lower-level SDK is the perfect example of a global optimization that alleviates some of the need for tedious, case-by-case micro-optimization from the code that uses it.

I'm just saying that micro-optimizing every single piece of code case by case is not the best use of our time, that we should opt for global optimizations wherever possible, and that React and Flux-inspired architectures like Redux enable some very practical ones.



