Vanilla-todo: A case study on viable techniques for vanilla web development (github.com/morris)
311 points by zdw on Oct 27, 2020 | 153 comments



I don't know who originally said it but "If you're not using a framework, you're building a framework". This repo even has the following caveat: (2) These usually end up becoming a custom micro-framework, thereby questioning why you didn't use one of the established and tested libraries/frameworks in the first place.

That said, I don't hate it. For quite some time, I've taken the stance that a web development team needs an opinionated framework, but it's fine for it to be a bespoke creation rather than off-the-shelf. The biggest value of choosing React, Vue, Svelte, etc is, in my opinion, less about it doing the heavy lifting for you with the DOM and more about adopting an established valid opinion to guide the team's development.


> These usually end up becoming a custom micro-framework, thereby questioning why you didn't use one of the established and tested libraries/frameworks in the first place.

Often a custom micro-framework better suits the needs of a particular project.

I've written micro-frameworks for specific projects intentionally, because they did a few things that the established frameworks either didn't do, or it was very difficult to get them to do.

An additional benefit was that using the micro frameworks ended up being simpler, and the startup time was much, much faster.

Whether or not you use an existing framework depends on how much effort it is to write and test a custom framework vs the amount of effort you'd need to put in to use an existing framework.

The ones I've written have been pretty quick to develop, and were also intended to be used for a few different projects that had similar needs.

Edit: Just for clarification, the micro frameworks I've written are server-side, if that makes any difference.


> Often a custom micro-framework better suits the needs of a particular project.

Fitting the needs of a particular project is frequently a local optimum, however. Often it's better to optimize for the needs of a whole team or even a whole company. You can hire people who already know React/Vue/whatever, but there is no one in the world who knows your micro-framework.


But due to its micro-size, it won't take long to learn.


Micro-frameworks have a nasty habit of outgrowing the description, IME.


It's always "tear it down" or "build it up" at the end of the day. Which one takes more effort is the right one to optimize for with your particular use case.

Rails vs Sinatra is a great example.


I'm sure there are exceptions (yours might be one) but this:

> I've written micro-frameworks for specific projects intentionally, because they did a few things that the established frameworks either didn't do, or it was very difficult to get them to do.

Is a false statement in most projects. I'd even argue in all projects except those that have a very strict limit on time or memory usage (or a legal one).

Can you share an example?


Framework churn is real. I no longer recommend JS frameworks simply b/c the opinion of the framework developer drastically changes over time. So, you are constantly re-writing completely valid and working code to keep up with the latest version of the framework. The JS language itself, on the other hand, seems to be very stable with long deprecation cycles and steady improvements. So, it is much easier to build on.


React has been stable since ~2015, and there are no signs of that changing.


I guess this depends on your definition of "stable". Sure, React code written in 2015 will still work, but it's hardly the "modern" way to write React and if your organization's code was written entirely in 2015, engineers will probably be clamouring to rewrite it.

Taking the somewhat-arbitrary but reasonable standpoint that "stable" means that there aren't any major architectural changes required to bring a codebase up to modern standards, I think the best you can say is that React has been stable since hooks were introduced in Feb 2019. So about a year and a half of stability, which isn't horrible but it's a far cry from stable since 2015.


The key thing for me is that old API is not deprecated, and the React team have publicly stated that they intend to maintain it indefinitely (and indeed are still updating it to work with newer React features).

At $DAYJOB we have no intention to rewrite our older class based code to hooks (except where we're otherwise making significant changes to that bit of code), and indeed we're still writing some new code in the class style.

If you feel like you need to upgrade, then I think that's on you, not the framework.


Apparently modern housing uses PEX water pipes, but my house has a mix of PEX, iron, and copper pipes. The water flows just fine. I'll probably rip out the iron and copper eventually when I do a major remodel, but I'm not stressing about it.


Agreed on rewriting React code "the modern way". Not simply for the sake of keeping current with version changes but, for one, to make sure your engineers are not constantly switching mental models between React classes, HOCs, and effects.

Another reason is the ease and speed of online documentation/resources. We all use StackOverflow to get answers and insight into problems we face on a daily basis, and as the framework (or library) progresses, the majority of Q&As adopt it, giving us a wealth of resources at our disposal.


Are the mental-model switches between classes, HOCs, and effects really that taxing? Is rewriting your code to match the current framework flavor really less exhausting?


One-time work to get knowledge out of your head vs. mental notes for context switching while you code? For some people/projects, the first one would be much easier.


Personally I don't find hooks vs. classes much of a context switch. The mental model for the component lifecycle is more or less the same, it's mostly just a different surface syntax (there are a few extra rules for hooks like not putting them in conditionals, but you'd have those in a pure-hooks setup too).
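
For illustration, a minimal hypothetical comparison (not from any real codebase) of the same ticking clock as a class and as a hook, using React.createElement to avoid JSX:

    // Class version: lifecycle methods
    class ClockClass extends React.Component {
      constructor(props) {
        super(props);
        this.state = { now: new Date() };
      }
      componentDidMount() {
        this.timer = setInterval(() => this.setState({ now: new Date() }), 1000);
      }
      componentWillUnmount() {
        clearInterval(this.timer);
      }
      render() {
        return React.createElement('span', null, this.state.now.toLocaleTimeString());
      }
    }

    // Hook version: the same lifecycle expressed as an effect with a cleanup
    function ClockHook() {
      const [now, setNow] = React.useState(() => new Date());
      React.useEffect(() => {
        const timer = setInterval(() => setNow(new Date()), 1000);
        return () => clearInterval(timer); // cleanup runs on unmount
      }, []);
      return React.createElement('span', null, now.toLocaleTimeString());
    }

Mostly the same model, different surface syntax.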


If you understand each approach well enough they aren't context switches at all - they're just different tools for solving a similar problem. If anything, understanding each of them gives you a better understanding of the approach your team has converged on.


Disagree; the "done thing" in React has shifted since then (moving from class-based components and HOCs to functional components and hooks), and in the wider ecosystem, Redux is being abandoned in favor of more React-native things like Context.

While React apps in 2015 probably still work with the newest versions of React, you can't put a 2015 React dev in a 2020 project and vice-versa as if nothing's changed.


Hooks are new but they aren't really that complicated. The junior developers on my team were more or less up to speed after a couple of hours reading the docs and a couple of hours implementing their first hook based component.

Our app is still using Redux, so I guess that one hasn't hit us yet (but there's also no real reason for us to update to a newer method).


I can assure you that Redux is not "dead" or "being abandoned":

https://blog.isquaredsoftware.com/2018/03/redux-not-dead-yet...

https://blog.isquaredsoftware.com/2020/10/presentation-state...

Sure, it's definitely _peaked_, because there's a lot of other great options in the React ecosystem these days. But, there's still plenty of good reasons to use Redux. And, with our new Redux Toolkit package and the React-Redux hooks API, "modern Redux" code is a lot different than what you've seen in the past, as shown in our new "Redux Essentials" tutorial:

https://redux.js.org/tutorials/essentials/part-1-overview-co...


If you use context, you'll just wind up rewriting flux, redux, Rx, Event, mobx, or whatever in what is likely a worse manner.


And in the real world, React apps use countless npm packages, which regularly get deprecated as React versions move forward. The real-world churn in React land is most unfortunate.


> in the real world react apps use countless npm packages

This is often the case in practice. But it's a pretty easy problem to avoid. Usually those packages simply aren't necessary in the first place.


I would say that the React ecosystem as a whole is in constant flux, which is what really matters.


How much of an ecosystem do you really need? For me the key components are a state manager (e.g. MobX or Redux) and a router (e.g. React Router). React Router has indeed been quite unstable, but that's gotten a lot better over the last couple of years. But React and Redux have both been super stable.


As a consultant who often sees existing systems in enterprise environments, react apps often have dozens of additional packages. And the churn on these packages is significant.

React projects in the wild seem to exist in two modes from what I can see: the constantly tended garden, or the write-and-move-on, leaving it to someone else to rewrite in future (otherwise known as write-only, or abbreviated to Perl). /s


> react apps often have dozens of additional packages. And the churn on these packages is significant.

IMO that's just poor engineering and not an inherent problem with React. The app I inherited at my current job had many such packages. And we have indeed had to update/replace some of them. However most of them were implementing functionality which could be trivially replicated in "plain react" so we've mostly replaced them with simple internal components and are not anticipating having the same problem in future.


I agree, for a significant portion of the packages yet not all.

Regardless, the diaspora of packages is just common culture in JavaScript in the wild. As a reference, I cite left-pad. As a remedy I offer the saying 'it's better to laugh than to cry'... :)


I can't speak to React. But, both Vue and Angular have seen dramatic changes over the last few years.


Most of the code people have written in Vue 2 is compatible with Vue 3. Vue 3 has just added some additional features. Sure, there are breaking changes, but not to an alarming degree.


I tried to upgrade a substantial codebase from vue 2 to 3 and hit a brick wall with Vuex early on.


> Most of the code people have written in Vue 2 is compatible with Vue 3.

All of the code written with React 15 is compatible with React 16. And React 17. And React 18. And so on probably.


Actually React 18 is likely to have breaking changes due to Suspense. Specifically the deprecated methods that are currently prefixed with UNSAFE_ will likely be removed.


This is simply not true. React 15 is not compatible with React 16, and it can even take substantial effort to upgrade if you used componentWillReceiveProps (which, in hindsight, could have been avoided in the first place).


I did that migration on a very large react project (hundreds of thousands of lines). As I recall, after dealing with warnings for the later 15.x releases, the upgrade was seamless. They're also good about creating codemods to make the transition easier -- something somewhat rare in the world of software libraries.


But not all of your packages!


My opinion comes strictly as an indy dev with very limited time and resources. So, any breaking change is frustrating.


Can you run code written in 2015 on the latest version?


Not necessarily; just offhand I remember these: https://reactjs.org/docs/react-component.html#legacy-lifecyc...

Admittedly those are trivial renames, but they've been renamed as such because they're very likely to be buggy with async rendering.


bwahahahahahahahahaha


Organizations love the churn because they can get away with age discrimination: lots of well-qualified devs are left out to starve while constant churn hides a system designed to filter out the most experienced.


That hardly makes any sense. Most widely used frameworks are open source projects; it seems much more reasonable to assume that constant changes in direction are more an expression of developers wanting to play with the fancy new toys instead of sticking with one paradigm and keeping it stable.

Even setting that aside, there's nothing keeping older developers from switching to a newer technology; it might be a bit bothersome to adapt to ultimately inconsequential changes again and again, but that applies to younger devs just as much.

It's also not like web development is old tech in any way. There's constant improvements being made and things are often changing for the better.

Well, for the most part at least. Looking at the situation with serverside resource compilation I have to wonder when the world of web will re-invent make and call it a revolutionary achievement.


That's a very strong statement, do you have anything to back that up?


I think the problem with bespoke frameworks is that many career driven developers won't want to become a specialist in proprietary tools that won't transfer. However...

I have this strong suspicion that most projects that use something like React don't really need it. Like, if you're creating something that actually acts like an application and data might be shown in many places it makes a lot of sense, but for something that's just displaying data from a db on a page hit it can be massive overkill.


Thanks for reading - original author here. Regarding the precise quote you mentioned: I came up with somewhat formal rules which prevented building a framework (code-wise) for this case study. The "caveat" you mention is just an explanation for the chosen ruleset; I believe I haven't introduced any general-purpose DOM/UI code in the case study and instead only describe patterns (correct me if I'm wrong!). I agree with your view of framework value, as I professionally use React on a daily basis and will continue to do so, for similar reasons :)


Using frameworks is far more about hiring and human resources than technology I feel.

However, if your "micro framework" follows language idioms and is thin enough you get the best of both worlds, whilst releasing yourself from some of the downsides that come with using a general purpose framework.


Frameworks make hiring a bit easier it's true.

I'm amazed at the number of developers who can't seem to cope with working with custom in house frameworks. I'm talking about fully working systems, with full source code and being walked through the code by the author / maintainer.

They fall apart, constantly complaining that the approach is non standard, deprecated, dangerous, unprofessional, untestable.

So we are trying more and more to use frameworks, just to be able to hire more easily.


Definitely agree, developers can get away with a lot of missing knowledge and not realise it, if they're using frameworks from the start of their career.

Ideally you could hire senior developers with the skills to work inside boutique software, but I have found "legacy" code turns a lot of people off a project, and most boutique software eventually gets called "legacy" even if it's well architected and running perfectly fine.

The skill which I think new devs could differentiate themselves with, is debugging. Familiarising yourself with software patterns definitely puts you ahead, but being able to use debuggers to understand code that has no pattern, that's when you become the kind of bug-squasher/problem-solver that projects like that require. If you have good debugging skills, you can work on any project, because you can find out all the information you need by stepping through the code.


This just displays the core incompetence in basics that many devs who are dependent on frameworks have.


With ES6 template strings you can do a lot of heavy lifting with some very simple code. This example still uses ES5; to me that's insane. Who on the web is still using IE6?


IE11. IE11 remains the problem.


IE11 is an insecure browser and doesn't even support numerous HTML5 security features. Any business that still relies on IE11 is asking to be hacked, no more, no less, because it also means that these businesses are still running older versions of Windows.

Now IE11 isn't even an excuse because $current_year ES version can be transpiled to older ES versions.


In the corporate world, yes (sadly), but for personal and free projects I'd say it's acceptable to just ignore IE entirely and at most include some JS snippet to tell its users to get a real browser.


Can I claim I don't violate the "No general-purpose utility functions related to the DOM/UI" if I use only one such function? O:) https://github.com/stefanhaustein/notemplate (the demo is just TodoMVC, though)


:D just to be clear, the rules were made just for the case study. I'd never recommend following these rules in a professional project. They are part of the method of the case study, not its outcome; the outcome is a couple of (hopefully) interesting patterns and insights, without trying to prescribe anything.


> adopting an established valid opinion to guide the team's development

Different requirements allow for different settings. Coming from an Enterprise Application background, I found this being more often the case or the only case. Working with different teams, developer fluctuation, different time zones, different skill sets, a framework guides the development process far better. Vanilla JS is something that should be avoided in team setups. There is too much mental overload that is usually not justified by the benefits. It is like deliberately choosing assembler language when Java or C# would do the job better. Frameworks got a bad rep. For me, they are tools that should be used.


I've heard the Assembly vs. C analogy a couple times now, and find it hard to apply:

Assembly languages come in many flavors whereas C is a single, standard abstraction over these.

DOM is a single, standard API whereas frameworks/libraries come in many, ever-changing flavors and provide different abstractions over the DOM.

It's a very different situation. Also, the level of abstraction that C provides over assembly is amazing. The level of abstraction that React provides over the DOM is comparably low (they are based on the same programming language).


I have experimented with something like this for a new side project. The plan was to have a TypeScript class for each component, and a render function to render them into the DOM. Then the listeners would be handled directly via addEventListener. Eventually I started having thoughts like "maybe I could have the render function parse attributes and auto-attach DOM events".
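
For anyone curious, the shape it was heading toward looked roughly like this (plain-JS sketch, names hypothetical):

    // One class per component; render() builds DOM, listeners are attached directly
    class TodoItem {
      constructor(todo, onToggle) {
        this.todo = todo;
        this.onToggle = onToggle;
      }

      render() {
        const el = document.createElement('li');
        el.textContent = this.todo.label;
        el.addEventListener('click', () => this.onToggle(this.todo.id));
        return el;
      }
    }

    // Usage
    const item = new TodoItem({ id: 1, label: 'Buy milk' }, (id) => console.log('toggle', id));
    document.querySelector('#list').appendChild(item.render());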

It became obvious it would become a side project of its own and not a smart use of my time. I settled down for Svelte and I love it.


Of the three you listed Vue is the most 'opinionated' in that it ships with official router, official store (vuex) etc and that is a huge deal.

With React and particulary svelte (for now) it's a mishmash of possible choices.

Where that bites is when you have 3 choices to make with 4 options.

4^3 === 64 - so any project you pick up/come onto has a 1/64 chance of using a stack you've seen before.


True, but routing libraries don't tend to be that complex. Picking up a different one is not exactly a big deal.


It's amazing that a repository like this is even a thing. A demonstration that you don't need 3 million dependencies and a giant framework to build a simple application in the environment the language was created to support.


I don’t see the point really. In the real world applications aren’t “simple”. They are complex.


Some applications are complex, many are simple (just over-engineered).


In the real world most JavaScript developers cannot write original code at all.


In the real world they don’t really have to. They aren’t building libraries.


Developers that can’t develop. What could go wrong?


I'm quite happy to see fewer libraries being developed.


This is nice, patterns over libraries.

I would question the choice to use ES5, as it makes it significantly more complicated to handle dependencies between modules in a scalable way. I understand the point of avoiding a build step, but if the code can run in modern clients without it I don’t think it breaks the spirit of the project to use a bundler to support certain older browsers. It’s a lot like using polyfills when needed.


Agreed, I originally thought this might be an older project from a couple of years ago because of the ES5. “Vanilla” doesn’t have to mean “supports IE11” in 2020 (before anyone jumps on me, yes, depending on your users you may very well still need to support it, but it’s very clearly on the way out - finally)

I love the motivation behind the repo though. The author’s write-up is fantastic and refreshingly reasonable in the dogmatic webdev world. It gives great insight into the places where real value _is_ provided by frameworks and build tools


Thank you very much! This is quite exactly what I'm trying to achieve. Very happy that you like it :)


Original author here - thanks! Yes, ES5 is a huge pain. However, I believe 5% of users (see other comment) are not a joke for many apps. There's always the question of minimum critical APIs required: If you build a 3D or webcam app you can forget ES5 altogether, of course.

> It’s a lot like using polyfills when needed.

Never thought of it this way, thanks for that! As is stated in the conclusion, the study would likely be more convincing with ES6 and build steps. So yeah, ES5 is questionable :)


Most of the websites I write these days don’t have a build step for development, only for production. Modern browsers (and modern node) support ES modules, so there is no need to build anything until you ship to a legacy browser.

The only pain point of not building during development is when I need to pull in a third-party dependency (since Node and browsers pull from different sources, and import maps are still not a standard), but if you are not using any third-party dependencies, you should be fine.
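
A minimal sketch of what that looks like during development (file names hypothetical):

    // js/app.js, loaded directly via <script type="module" src="/js/app.js"></script>;
    // modern browsers resolve the relative import natively, no bundler involved.
    import { renderTodoList } from './todoList.js';

    renderTodoList(document.querySelector('#app'));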


Like most people who use ES5, your code is broken in IE11 and you had no idea. :-) You use Object.assign(state, next) which does not work in IE11.

Using ES5 is pretty much always a mistake in 2020.


Object.assign is polyfilled: https://github.com/morris/vanilla-todo/blob/9a27e850e15fddfe...

That being said, the page still didn't load properly for me when I actually tried it in IE11, but it's not because of Object.assign.

Nobody writes ES5 because they want to; many developers are still forced to support Internet Explorer for one reason or another.
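
For anyone curious, the general shape of such a polyfill is roughly this (not the repo's exact code):

    if (typeof Object.assign !== 'function') {
      // Copy own enumerable properties from each source object onto the target
      Object.assign = function (target) {
        for (var i = 1; i < arguments.length; i++) {
          var source = arguments[i];
          if (source != null) {
            for (var key in source) {
              if (Object.prototype.hasOwnProperty.call(source, key)) {
                target[key] = source[key];
              }
            }
          }
        }
        return target;
      };
    }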


Partly I agree, 5% of users is a lot.

However, this number will only continue to decrease as time goes on, and it also has to be questioned for any specific product whether that number is bigger or smaller on average.

Both for the (near?) future when ES6 is near-universal and for projects that have the luxury of just ignoring older browsers even today, it would be nice to have a proof of concept project like this that uses modern ES6 features, since many of them address precisely the types of problems that many pre-processors also fix.

Template-strings, custom elements, etc. add a huge amount of possibilities for web-application development and imo should get way more attention outside of a small circle of excited people.


Completely agree. In hindsight the choice of ES5 is questionable, and in the study I actually conclude that another ES6-based experiment is desirable. Also, as others here have noted, you could start with ES6, see if it works for enough of your users, and only if not, orthogonally introduce transpilation to ES5 as a production optimization.


Those 5% of users will need 95% of your support capacity. Best to kick them to the curb until they upgrade.


I very much appreciate proof that we don't need modern frameworks as much as people claim we do.

As for ES5 support, would you consider using something like Babel to be close enough to using polyfills to count? Transpiling ES6 into ES5 fixes your ES6 compatibility issue just like, I would argue, adding a polyfill for WebP images would.


Thanks, although I wouldn't claim it to be "proof" :) and yeah, the choice of ES5 vs. ES6 seems to be a major weakness of the study as others here have said as well. In any case I believe switching the current product to ES6 would make the results even more convincing.


5% is not a realistic number for ES5. 1% of users are IE11 and the other 4% are weirdos like Opera Mini that won’t work no matter what JS you use.


I did not know how widespread ES6 support is nowadays: https://caniuse.com/es6 says 94.69% of users worldwide use a browser that fully supports it.


> Naively re-rendering a whole component using .innerHTML should be avoided as this may hurt performance and will likely break important functionality such as input state, focus, text selection etc. which browsers have already been optimizing for decades.

I think this statement deserves some serious scrutiny. In many cases the performance hit may not be relevant or noticeable. I know this because I've been very productively using the following approach to build dynamic webapps for my personal use:

1. Compose HTML strings using Javascript, especially with string interpolation

2. Slap the resulting strings into div/span elements with innerHTML= statements.

This approach results in extremely clean and simple code - much cleaner than the OP's code in my view. I have never noticed any kind of performance issues, the updates are always instantaneous. I don't know what the author means by breaking functionality like text selection and focus, but it's never been relevant for me.

For an example of the coding style, see the reDispActiveTable in this code, which draws a table of TODO list items with some operations like edit/delete/mark complete. https://github.com/comperical/WebWidgets/blob/main/gallery/m...
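
A minimal sketch of the pattern (the data shape, element id, and markComplete handler are hypothetical; any untrusted text would need escaping before interpolation):

    function renderActiveTable(items) {
      document.getElementById('active-table').innerHTML = `
        <table>
          ${items.map((item) => `
            <tr>
              <td>${item.label}</td>
              <td><button onclick="markComplete(${item.id})">done</button></td>
            </tr>`).join('')}
        </table>`;
    }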


You bring up an interesting point. I intentionally used "may" in that statement as using .innerHTML can be both effective and performant enough (and a lot simpler).

A couple possible problems off the top of my head:

1. CSS transitions won't work if you re-render a complete chunk of HTML instead of toggling a class.

2. <a>, <button>, <input>, etc. may lose focus or even data on re-render (e.g. while filling out a form).

3. Text selection may be reset on re-render.

4. If you don't use event delegation, event listeners may need to be reattached (see the sketch below).

I agree that for raw display .innerHTML may be sufficient, but it's surely not in general. That would make React almost irrelevant, by the way, which would be a huge surprise.
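
Regarding point 4, a sketch of the event delegation variant (class names and handler are hypothetical): a single listener on a stable ancestor keeps working across .innerHTML re-renders of its children.

    document.querySelector('.todo-list').addEventListener('click', function (e) {
      var item = e.target.closest('.todo-item'); // Element.closest needs a polyfill in IE11
      if (!item) return;
      toggleTodo(item.dataset.id); // the listener itself never needs reattaching
    });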


That's what OP means: https://codesandbox.io/s/elastic-rubin-hxpcw?file=/src/index...

And that's why reconciliation algorithms became so popular. You can't be serious when you say that losing input focus and text selection have never been relevant to you. Basically, dealing with the internal state of components down the tree becomes a nightmare.


it is possible to track and set cursor location after updating innerHTML... it is definitely no longer simple and clean though haha.

https://codesandbox.io/s/zealous-vaughan-qd3gg?file=/src/ind...
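
Roughly, the idea for plain input/textarea fields looks like this (element ids and render function are hypothetical):

    // Remember which field had focus and where the caret/selection was...
    var active = document.activeElement;
    var id = active && active.id;
    var start = active && active.selectionStart;
    var end = active && active.selectionEnd;

    container.innerHTML = render(state); // hypothetical re-render

    // ...then restore it on the freshly created element
    if (id) {
      var el = document.getElementById(id);
      if (el) {
        el.focus();
        if (typeof start === 'number') el.setSelectionRange(start, end);
      }
    }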


You answered yourself there. It's not only about input focus; it could be anything really, like loading state, form values, progress indicators, etc. People will basically end up building yet another framework. React is not the only solution; there are plenty of lighter alternatives. OP's solution is just very type-unsafe.


IIRC it will also lead to weird behavior for people using screen readers; they usually restart reading whatever they were reading when the element was replaced, even if the contents are the same.


If your user data isn't sanitized, you could end up with your site being hacked (interpolating untrusted text into innerHTML is a classic XSS vector).


Oops, I expected vanilla web development to mean server-rendered HTML with forms. Thanks for making me feel old.


Sorry :D nothing wrong with server-rendered HTML and forms for many use cases. Unfortunately, given the limits of connectivity, if you want to build something that is interactive, data-driven, and responsive at the same time you will always be forced to render on the client-side at some point.


> if you want to build something that is interactive, data-driven, and responsive at the same time you will always be forced to render on the client-side

I'm not sure I personally agree with this blanket statement. Connection times can be, for a vast majority of users, measured in tens of milliseconds, giving us a pretty good budget for processing and rendering while still appearing to be "instantaneous" (100ms) or "fluid" (1s). Even the worst case connection times (satellite & mobile) are still measurable using hundreds of milliseconds.

The worst ranked countries still average in excess of 1.5 Mbps transmission rates - more than enough for compressed text.

And, given how unresponsive so many "top 100" SPA pages are (such as blogs that take whole seconds to display their initial content), I can't agree that doing that processing on the server would actually be less interactive or responsive.

Even the test application from this article can load from scratch in under a second, most of that time being the DNS resolution and the server processing for serving the page.


I'm not an expert but I'm certain we aren't even close to providing this level of connectivity. Throttled data, sucky hotel wi-fi, underground tunnels, bad service in rural areas of e.g. Germany etc. are real, unfortunately.

You cannot hope to get enough uptime and guaranteed response time ranges for an app's interactions from any server; you will always get inferior UX to client-side rendering (which, in a way, has 100% uptime and guaranteed response times only bound by CPU/RAM/bugs in your code).


It’s moderately ironic to talk about poor connections (largely due to cell phone situations) without discussing the lackluster processing, memory, and battery life available on those cell phones. Not to mention the fact that SPAs tend to be significantly more heavyweight in their initial download requirements than server-powered forms, taxing those poor connections before you even see a single line of text.

Again, the performance bar for SPAs has been set so low by top-100 sites (like Medium and other SPA blogs) that a server powered forms would have to work very hard to be worse.

As a side note, we can’t forget that even these “100% uptime” SPAs in the real world largely still rely on requests and responses from a server backend; still rely on prompt responses to their ‘XMLHttpRequest’ calls (hello, animated spinners!).


True, we should respect all the resources of a device as best we can. Also true for desktop (Slack's memory usage comes to mind).

A major result of the study is greatly reduced bandwidth (and consequently, shorter parse time) compared to the original TeuxDeux, so I'm working towards respecting these resources, for what it's worth.

To be clear, the study does not care about doing SPAs or not. The results are applicable to server-rendered HTML as well (write a function that enables some behavior on an element, mount by class name, done). I agree that many use cases (e.g. blogs) should mostly be server-rendered and only progressively enhanced with JS for some UX improvements.

But highly interactive apps (drag & drop just being one example) will not have comparable UX without client-side rendering. I do not want a 100-1000ms delay (or an error message) after dropping an item in some list because of a server-roundtrip. This is not good UX.

Also, when filling out a form, I do not want to lose data or context when clicking submit while I'm in a tunnel. I'd rather have a client-side rendered UI that keeps my context and tells me "Sorry, try again when you're connected again".

Even better if it works fully offline and syncs the transactions I've done with a backend once I'm connected again.


A small note: 100ms is effectively imperceptible to a human. Even 1s is acceptable in most cases. There's an HCI study from the late '60s that defines this.

WRT losing data when filling out a form, that hasn't been an issue for years now (except when the "smart client renderer" decides that it is). Most browsers will not lose data in an interrupted form transmission.

Plus, most browsers (especially mobile ones) handle interruptions like a tunnel fairly well, waiting for the connection to return without losing data. And without having to think about that usecase as a web developer.

It’s funny to me that you mentioned slack above; some of my favorite old-school chat experiences were server-side rendered pages. They worked remarkably well for the limitations they faced.

All this said, the proper compromise is probably doing both client- and server-side rendering. I'm reflexively against client-side rendering because of how it's typically implemented: slow to download up front, each SPA creates its own interaction primitives, and the interactivity of an SPA is only rarely required and yet it's used everywhere (read-only SPAs are the worst).

Javascript - and client-side rendering - is the power hammer of the frontend development world.


Not sure about the interruption handling, maybe I need to do more research there.

But totally agree with the last parts - currently, typical SPA implementations are often misguided and create more problems than they solve (if any) compared to a server-side approach. That does not mean that pure server-side is always enough to provide good UX, especially with interactive/offline apps.


In theory.

In the real world, my crusty old-fashioned drag-and-drop works way better on a low/spotty connection than pretty much all SPAs (which are only trying to display plain text). SPAs usually fail to display anything but a blank screen when the connection is slow.

Losing form data hasn't been an issue in forever... The browser saves it when you go back or forward. You don't even need to press "back" - you can even just refresh the error page and as long as you click "yes" to the pop-up telling you you're resubmitting a POST, the form will submit with the original data entered, no problem.... Of course, SPAs like to break this for no reason, stop doing that.


This is so true. If you want to feel again what's possible, make a vanilla website (without JS and with very few images) and do all form processing server-side. You will be astonished at how fast and genuinely reactive such a site can feel. Part of this feeling could be that animations (thinking of Google's popular Material Design) themselves take 10ms to 100ms. Within that duration, the browser can easily load a new page.


Completely agree. I still build websites with server-side rendering and just enough JS for Ajax, which changes innerHTML and occasionally adds an element. They are astonishingly fast, and they still function if the client has JS turned off.

I remain convinced that for 90% of websites, using React is shooting a sparrow with a cannon.


For what it's worth, the study was not targeting SPAs specifically. The patterns I found can very well be used to add some minimal behaviour to some HTML generated completely on the server side. I'm by no means advocating SPAs or client-side rendering for each and every use case.


Understood. I do appreciate the effort you put into this and the fact that you built it from scratch. FWIW your example has convinced me to explore client-side rendering now that I know it doesn't require a monster library.


and Perl CGI. That's all a developer needs!


I have written (internal) production CGI in bash.


I read it all despite my short attention span. Beautifully written study.

I hope to leave productive feedback later.

I do hope people don't miss the point that I think OP was trying to be modest about. This study is not for or against frameworks, it is a detailed and informative reflection of the current state of client JavaScript.


Thank you very much, looking forward to your feedback! And yes, this is exactly what I wanted to achieve :)


As someone who has made money as a front-end developer and saw the error in my ways, I will say the industry is not interested in simpler things. They're not interested in making the web faster for everyone. Your typical <insert framework here> web app is slow on a laptop and even worse on mobile. Now throw in hiring: everyone thinks their app should be on React or whatever, when it doesn't warrant it. Look at Basecamp for apps that work without a major framework. Things like htmx exist, but the community is small.

So yeah, props to the author for demonstrating something that would work.

But nah, frontend work these days is about making everything complex, from getting the project running to the build steps and even deploying the project.

Though one area I will say frontend is now better on is testing: Cypress, Jest, and React Testing Library are nice things to work with.


I know that in this community the overloaded term "vanilla" in this context means just JavaScript without other people's JavaScript. But "vanilla web" should really mean HTML and CSS.


Sure, so which subset of HTML are we talking about?

Because unless I'm misremembering things, HTML 2.0 (literally the first version intended to be a standard going forwards) came out exactly the same year that JS dropped in a commercial browser (1995), and superseded the previous HTML and HTML+ markups.

So surely you don't mean HTML with things like file uploads, or GIF support. Because that's not "vanilla" in your world.

/s


/s all you want but the distinction between documents and applications is clear.


Is it?


no js = just html and css

no framework js = topic of this link

vanilla = a bean, or a flavor


In the software sense, vanilla means "in its original form, without addons, modifications, abstractions, third-party libraries, ...".

Like a "vanilla kernel" is a Linux kernel without (distributor) patches. Or a "vanilla debian" is a Debian system without third party repositories or software.

Regarding the parent: you cannot make a persistent to-do web app with HTML+CSS alone... drag and drop? Data persistence? You need a backend (and JS for event handling) or, as in this case, local storage via JS.

OK, you can make a simple To-Do with static forms being sent to a backend... but then you have HTML+CSS+something (java, python, php, ruby, perl, golang, whatever), not only HTML+CSS.


> Data flows downwards, actions flow upwards

I know I'm nitpicking here, but I always find this phrase confusing, and I think this way of explaining unidirectional data-flow is problematic for those new to the concept.

If you want to persist something from a child component, the message that represents the action contains the data needed to make that state change, which sort of exposes the flaws in the up/down analogy.

Unidirectional data flow doesn't have an "up" and a "down". It doesn't even follow a single path. It's more like a ladder with several water slides connected to it. The pool is where the state lives. The slides are the child components. The people are the data. Climbers are performing state propagation. People who are sliding are creating actions. People who are landing back in the pool are updating the state.

Edit: no, it's not lost on me that the example I ended up using involves up and down motion--it's just not a very easy concept to convey using real-life analogies (which I think also contributes to the learning curve).
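
For what it's worth, in code the loop the phrase is gesturing at can be as small as this (names are hypothetical): a child dispatches a bubbling action carrying the data for the change, and an ancestor that owns the state handles it and re-renders.

    // Child: an action (with the data needed for the change) bubbles up
    checkbox.addEventListener('change', function (e) {
      e.target.dispatchEvent(new CustomEvent('toggleTodo', {
        bubbles: true,
        detail: { id: e.target.closest('.todo-item').dataset.id },
      }));
    });

    // Ancestor: owns the state, updates it, pushes data back down by re-rendering
    appEl.addEventListener('toggleTodo', function (e) {
      state.todos = state.todos.map(function (t) {
        return t.id === e.detail.id ? Object.assign({}, t, { done: !t.done }) : t;
      });
      renderTodos(appEl, state.todos); // hypothetical render
    });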


Thanks to everyone who took the time to read it, and thank you all so much for all the feedback! I'm quite overwhelmed by the response :) looking to address as much of the feedback as possible by the end of the week.

A major weakness seems to be my choice of ES5. I wanted an almost absolute minimum, which ES5 seemed to be at the time. I was led by the fact that most bundlers produce ES5 by default, which may very well have been a mistake.

Interestingly, if ES5 is really dead (which it might be, I'm not sure) and ES6 is the minimum target, the study's results would actually improve drastically (less verbosity, actual modules, etc.) and further support the claim that vanilla can be maintainable (even without build steps). For anyone interested, let's continue the discussion here: https://github.com/morris/vanilla-todo/issues/6


I appreciate this project a lot! As someone exploring webdev as a hobby, everything is new to me and there are a variety of frameworks. I'm hesitant to start from somewhere that isn't "vanilla". As someone only a little familiar with JavaScript, the script seems surprisingly comprehensible.


Glad you liked it and could follow through!


I love it! It's definitely an exercise worth taking. Nice transfer size, and I'm happy to see that the code is quite maintainable.

I do my wordsandbuttons.online in a similar spirit: no dependencies, and all the pages are kept below 64KB. However, as the code base grows, I'm starting to employ scripts to do the grunt work for me. So while I don't have dependencies per se, code patterns become a dependency in their own right.


Many thanks :) I like the concept of your site; I will take a look some time.


This is odd (public/scripts/TodoList.js):

    el.innerHTML = [
      '<div class="items"></div>',
      '<div class="todo-item-input"></div>',
    ].join('\n');
I wonder why they aren't using `document.createElement` instead of innerHTML, since they will query for those nodes later anyway.


I've tried but I (surprisingly) found the string array to be the most readable way to do it without helpers. It's also easily replaced by ES6 templates when upgrading.


Counterintuitive fun fact: `HTMLElement.innerHTML = "string"` tends to be faster than `document.createElement()`.


Another good guide to MVC with JS is this one: https://github.com/madhadron/mvc_for_the_web

The neat thing is that it starts from scratch and adds things, like pub/sub, then models, controllers and so on.


It does start from scratch but it (re-)invents a whole lot of general-purpose code, in effect making up a framework/library, which was an explicit non-goal for the case study (because there are good ones already).


I didn't have time before to go through your case study. But after looking at it for a bit, it doesn't look that much different from the mvc example I linked.

It seems that you are leveraging the built-in event system for observers and using the dom for referencing your components. Personally, I don't feel that mvc for the web is inventing a whole lot more, just patterns and some utilities.

Having events that bubble from the child component up to the app component is an interesting approach. It simplifies the code by not having to pass controllers into views, but makes the child components less reusable in other contexts.

Btw, this structure also reminds me of backbone(1) and that riotjs 1kb blog post from ages ago(2).

I love the effort and thank you for making it.

(1) https://backbonejs.org/#View

(2) https://muut.com/blog/technology/riotjs-the-1kb-mvp-framewor....


Not separating the state from the DOM (more than necessary, that is - at the client/server boundary) makes a bunch of things much easier. You can just manipulate the DOM and the state stays with the DOM, so there's nothing to sync internally within the JS app. You only serialize from DOM and to DOM at the server API boundary. No need for the "rendering" in response to state changes. You just change the DOM intuitively, and that's that.

And that makes "vanilla" browser apps easier to write too.
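
A hypothetical sketch of what "serialize from DOM at the server boundary" can look like (class names and endpoint are made up):

    function serializeTodos(listEl) {
      return Array.from(listEl.querySelectorAll('.todo-item')).map(function (item) {
        return {
          label: item.querySelector('.label').textContent,
          done: item.classList.contains('done'),
        };
      });
    }

    // Only at the client/server boundary does DOM state get turned back into data
    fetch('/api/todos', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(serializeTodos(document.querySelector('.todo-list'))),
    });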


We use task management apps to save us time and give more visibility over the work. However, the app can become so complicated to use by itself, that we may choose to go for very basic note-taking apps to avoid the complexity.

I was looking for a minimal, simple and user-friendly app for daily task management, so I developed Renoj.

Fast to-do task management in Desktop for ultimate productivity.

Website: https://ribal.dev/renoj


Not a JS wiz, but I really enjoyed your experimental approach and the write-up. Stating the objectives and constraints, reporting results. Nicely done!


Are Web Components (https://developer.mozilla.org/en-US/docs/Web/Web_Components) considered to be a library/framework?

If not, have you considered trying to update the repo to use them?


No, I'd consider them a standard and a candidate for implementing the case study. When I started the study I was thrown off by this: https://caniuse.com/?search=components - and the very large polyfills. In hindsight I'm not sure if I dismissed WC too quickly.

This is probably thin ice, especially since I've never built anything with WC, but they always seemed slightly over-engineered. I will try to learn about WC a bit more and maybe elaborate on my reasoning here.


That page shows excellent support!

Web components are supported in all current browsers. Unless you support IE11 you don't need to worry about polyfills.


On a side note, the TeuxDeux app they cloned is wonderful. It immediately clicked for me. As someone who's been managing five big projects simultaneously, this is going to significantly help with cognitive load. I love the lists below the calendar.


Totally agree. I should update the study to express praise for the original TeuxDeux, it's conceptually the best to-do app out there in my opinion.


The trick is to stop thinking XML/HTML and instead use functions which return elements/components.

First write the app in spaghetti code, then turn it into pure functions.

   var app = todoList(data);
   document.documentElement.appendChild(app);
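
A rough sketch of what such an element-returning function might look like (data shape hypothetical):

    function todoList(data) {
      var ul = document.createElement('ul');
      data.items.forEach(function (item) {
        var li = document.createElement('li');
        li.textContent = item.label;
        li.className = item.done ? 'done' : '';
        ul.appendChild(li);
      });
      return ul;
    }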


In other words, the good old MVC pattern.

(M) Model is your `data`. (V) View is what your function returns. (C) Controllers are the functions your elements may use to mutate the data.


MVC doesn't mean "has data, logic, and UI". MVC is a model for how the UI communicates with the data store.


Responding to the wrong comment?? MVC is exactly what I said it is, check it here if you don't know: https://en.wikipedia.org/wiki/Model–view–controller


Why is there an fps counter at the top right?


Performant drag & drop + FLIP animations were not trivial to implement (naive approaches like re-render fully on each mousemove fail spectacularly). I needed to see FPS on a couple different devices so I added the counter. Also, since I claim good rendering performance in the case study, I just kept it there as "proof" :)


15ms to draw an empty table for a week's schedule, and 40ms during animation seems... excessive?


In your Try it Online demo I could decrease (<) the dates all day long but not increase (>).


Would you mind creating an issue with some details (browser and version, maybe how to reproduce)? Thanks :)!


It works fine today!


Why does it have to be fully animated? I feel like 44k is a lot if it's just a sortable list with checkboxes.

I saw a draggable package on npm that is less than 2k. Does vanilla mean we can't use npm or webpack or Browserify or anything?


It does not have to be. However, animations greatly contribute to UX, and are an interesting cross-cutting concern to implement (with any tech).

Remember that 44K is unminified, unoptimized, with considerable duplication, and includes HTML, CSS, JS, and SVG icons.


Will uncompleted tasks move over to the next day?


Nope, apparently not.


> There's no custom framework invented here.

Proceeds to literally invent a custom framework.

It’s interesting, but I’ll stick with Vue/React.


> Proceeds to literally invent a custom framework.

What definition do you have of “framework” if you say that? I just see a bunch of procedures that create and manipulate DOM, coordinating through regular events and selectors. There are patterns and loose organizing principles, but no framework code that I can see.


Original author here. During the study, I explicitly forbade myself from writing general-purpose helpers which would make up a framework. Would you mind elaborating on what in your view makes up the "custom framework"? Thanks!


If general-purpose helpers make up a framework, then what is the difference between a library and a framework? I'm pretty sure jQuery is not a framework.


Sorry, I didn't mean to make a distinction between frameworks and libraries in this case; both would violate the method/goal of the case study, which was finding vanilla patterns that are not dependent on general-purpose code.


Not OP, but in my reading of your case study, you still made a framework. Instead of having a code-based framework, where the methods and classes enforce how you implement your system - you created a convention-based framework (use a specific CSS naming technique, model your JS on this template, etc.).


Isn't a “convention-based framework” just... a convention? With such a generous definition of “framework”, is it even realistic to solve this problem without accidentally creating one? Would you need to purposefully avoid any type of consistency between the different modules, and solve the same problem in different ways if it comes up more than once?



