Solidjs – JavaScript UI Library (solidjs.com)
349 points by silcoon on Nov 30, 2021 | 88 comments



(repost from the Asciinema thread, this comment feels more on topic here)

Wow, I just read about Solid for the first time, and I'm impressed at the API design. I love how it's a fully reactive data flow thing, but it looks and feels like React Hooks.

The other reactive/observable-based frameworks I've seen (e.g. Cycle) make the observable streams the centerpiece. I always felt that was distracting, and that nuances about how the underlying observable stream library worked (e.g. RxJS or Bacon) quickly got in the way.

Solid still puts the components firmly at the center, just like React, but replaces React's state concept with reactive observable state, called "signals". You use them like you'd use useState in React, but deep inside it's an observable stream of data changes, and you get all the fine-grained update control that comes with that.
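For anyone who hasn't seen it side by side, a rough sketch (from memory, so check the docs for exact signatures): the call sites look nearly identical, but in Solid the value is read through a getter function, which is what lets it track updates at that fine-grained level.

  // React (for comparison): state is read directly; the whole component re-runs on change
  import { useState } from "react";
  function Counter() {
    const [count, setCount] = useState(0);
    return <button onClick={() => setCount(count + 1)}>{count}</button>;
  }

  // Solid: same shape, but count is a getter; only the bound text node updates
  import { createSignal } from "solid-js";
  function Counter() {
    const [count, setCount] = createSignal(0);
    return <button onClick={() => setCount(count() + 1)}>{count()}</button>;
  }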

I also love how noun-heavy it is. Resources, tracking scopes, effects, signals. It's just like how React moved from "do this thing after the component updated" to "ok we have this concept called an effect", but extended to more topics such as dealing with async data loading, when exactly a signal is observable, etc.


By fine-grained update control -- do you mean something similar to MobX + React? I think "automatic change tracking" is the phrase I've seen thrown around. If so, this was always the most appealing method for performant UIs for me. You have some conceptual overhead of dealing with observables, but in return you get to ignore the majority of performance work -- components only update when the data they need to use changes, and you never have to manually specify it. Really nice for web apps that have a lot of mix-and-match screens (say, complex internal tooling that may have numerous components reused in a variety of places).


It's a bit like MobX, but instead of re-running full components or subtrees it contains the updates granularly. Picture if your renderer were just MobX autoruns wrapping specific DOM updates as they are depended upon. Because of the reduced need for diffing, and the compiler that transforms the JSX into this, you can author components in a normal way yet get incredible performance.
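A rough illustration of that mental model, sketched with plain MobX for clarity (this is not how Solid is actually implemented, just the shape of "an autorun wrapping one DOM update"):

  import { observable, autorun } from "mobx";

  const state = observable({ count: 0 });
  const node = document.querySelector("#count");

  // The "renderer" is just a reaction bound to a single DOM node.
  // Changing state.count re-runs only this function; nothing is diffed.
  autorun(() => {
    node.textContent = String(state.count);
  });

  state.count++; // the text node updates directly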


Reading this I'm reminded of KnockoutJS, which the author of SolidJS cites as an influence. I remember at one point, years ago, trying to figure out why it was so much faster than AngularJS. Two things seemed to be going on: 1) it was only updating the parts of the DOM it needed to, and 2) to do this it seemed to 'automagically' infer dependencies.

I wondered how they did this second thing and guessed that it was parsing the JS code I was writing somehow. Either that or flooding the observables with values and making note of how changes trickled down. It turned out that it was doing neither of those things, but frankly I didn't understand how it worked even when it was explained to me. Might be a good time to revisit it and satiate my curiosity.


This is the article series for you (I wrote it so I'm biased): https://dev.to/ryansolid/a-hands-on-introduction-to-fine-gra...


It just tracks function calls. The general idea is that if a function is called in the render cycle of a component, that registers as a subscription for the piece of data that function returns. There is some magic with proxies and getters to hide the functions, but basically that's it.
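A toy version of that trick, in case it helps (nothing like Solid's real implementation, which handles nesting, cleanup, batching, etc., but this is the core idea of registering the currently running computation when a getter is called):

  let currentObserver = null;

  function createSignal(value) {
    const subscribers = new Set();
    const read = () => {
      // reading inside a computation subscribes that computation
      if (currentObserver) subscribers.add(currentObserver);
      return value;
    };
    const write = (next) => {
      value = next;
      subscribers.forEach((fn) => fn()); // re-run only what read this signal
    };
    return [read, write];
  }

  function createEffect(fn) {
    currentObserver = fn;
    fn(); // the first run registers dependencies via the reads it performs
    currentObserver = null;
  }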


Heck of a library, and its creator, Ryan Carniato, is a very smart engineer who works on both Marko[0] and SolidJS. He's really patient and answers my random questions on Twitter pretty reliably; I have to say I appreciate it!

The performance that SolidJS ekes out of the DOM is really next level.

I think it could use a small addition to the docs about migrating from React to SolidJS, but all around the project is very approachable and fantastic, and it's fast.

[0]: https://markojs.com/


> The performance that SolidJS ekes out of the DOM is really next level.

Kind of a weird way of putting it. Intuitively any framework abstracting concepts on top of DOM manipulation has to be slower than direct DOM manipulation.

But yes in comparison to other frameworks, the benchmarks they make do look impressive.

Now I'm curious to do some benchmarking of my own.


That's kind of why the entire virtual DOM concept came about, because it was faster than direct DOM manipulation. Essentially batched updates to the DOM were faster than ad-hoc updates.

Now React is 8 years old and browsers have improved a lot since then so I imagine the gains might not be what they used to be. But at the time it was huge.


Virtual DOM came about because it offered a simplistic top-down view = fn(state) model without terrible performance. Other top-down renderers were terribly inefficient, and it improved on that. It was never innately faster than targeted direct DOM manipulation; it only looked fast compared to other approaches that were innately built on diffing as well. And things like reading from the DOM can cause reflows and other terrible performance bottlenecks.

Fine-grained reactivity existed back then and was more performant for updates. Always was. It just had its own issues, since pre-MobX we didn't see implementations in JavaScript that provided glitch-free execution guarantees. So Virtual DOM was a great invention, but I think it was misrepresented early on. That's what got me to start working on Solid. I knew the performance was there without a VDOM from day one. I'd seen it. So when Knockout started waning in popularity in 2015/2016, I started working on a replacement.


That perception of the DOM/VDOM situation is and always was false. React was always slower at runtime than a carefully-engineered system designed for performance.

As Dan Abramov said a couple of years later <https://medium.com/@dan_abramov/youre-missing-the-point-of-r...>, people were missing the point of React: it was never about VDOM; rather, that was a cost that at the time they reckoned had to be paid in order to write reliable code in an immediate mode style, because if you tried doing that without DOM reconciliation the result would be atrociously bad. VDOM came about because the alternative (the consistently faster alternative, I may add) entailed things like explicit DOM mutation that was far too easy to make mistakes with, and reactive data flow was generally even buggier. It was a carefully-chosen trade-off: shedding some performance, for greater robustness and ease of use.

“DOM is slow, VDOM is fast” was a straw man comparison that entered the public perception but which the React team mostly stayed well clear of: almost no serious systems have ever used the DOM directly in the immediate mode style, because it has obvious and serious problems in both performance and transient UI state like scroll and caret positions and element focus (… and transient state things are problems for all immediate mode interfaces, not just DOM ones: escape hatches are fundamentally required).

Was VDOM worth the cost at the time, compared with the other options then available? For most people, probably. And even for the rest, React presented useful food for thought that led to other options improving too. Is VDOM worth the cost now? Well, I’m with Rich Harris that VDOM is pure overhead <https://svelte.dev/blog/virtual-dom-is-pure-overhead> and that we have more efficient ways of doing things now.


Wasn’t a separate big motivation that reading from the real DOM (in order to generate a diff with the new intended DOM) is also slow?


The DOM used to be slow, incredibly slow, but that was a very long time ago when JavaScript only executed as an interpreted language. The DOM has been insanely fast since before React was born. Using micro-benchmarks you can see that DOM access, when not using query selectors, tops out at around 45 million ops/s in Chrome and between 700 million and 4 or 5 billion ops/s in Firefox, depending upon your CPU and RAM. That is fast. No higher-level framework will improve upon that.

Back in the day when the DOM was slow the primary performance limitation was accessing everything through a single bottleneck, the document object. To solve for this the concept of document fragments was invented. These aren't used anymore because the DOM is insanely fast and modern implementations (popular frameworks) are so incredibly slow. You aren't going to achieve a technology solution to a people problem.

The first big misconception about DOM performance is the difference between DOM interaction and visual rendering. Visual rendering is fast now because it's offloaded to the GPU, but it's still far slower than accessing and modifying the DOM. As an example, set an element to display:none and then perform whatever DOM modifications you want on it. Those changes have no visual rendering, are still DOM manipulation, and are insanely fast. You can measure this with a micro-benchmark tool.

The second big misconception about DOM performance is how to access the DOM. The fastest means of access are the old static DOM methods, like getElementById and getElementsByClassName. Query selectors will always impose a huge performance penalty when there are standard methods to do the same job, and a minor performance boost when there aren't. The querySelectorAll method compounds that penalty. The penalty exists because the selector string must be parsed to convert it into something vaguely equivalent to the static methods, which is a step on each operation that the static methods do not require. The minor performance boost for accessing things, such as by attribute, comes from the fact that there isn't a single static method equivalent and more steps must be taken compared to the parsed string result of the selector, but that boost is exceedingly minor (16x at most).
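For anyone who wants to check this themselves, a micro-benchmark along these lines will show the gap (absolute numbers vary wildly by browser, hardware, and engine optimizations):

  const el = document.createElement("div");
  el.id = "app";
  document.body.appendChild(el);

  console.time("getElementById");
  for (let i = 0; i < 1e6; i++) document.getElementById("app");
  console.timeEnd("getElementById");

  console.time("querySelector"); // the "#app" selector string must be parsed each call
  for (let i = 0; i < 1e6; i++) document.querySelector("#app");
  console.timeEnd("querySelector");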

Usually developers prefer slower means of accessing the DOM due to a preference for declarative approaches to programming. There isn't a performance tool to fix developer bias.

If you want both performance and a less intimidating approach to DOM access, you can create aliases with more friendly names that solve for code reuse, but you will still need to understand the concept of a tree model.


Fair enough. I don't retract the ethos of that statement, but really it should read: as far as abstractions go, SolidJS is a very performant framework, arguably more so than any other framework out there right now.


Oh, I agree so much here. It was 2 years back when I looked at SolidJS. It was a simple project with Bootstrap for a couple of pages, and instead of jQuery or Mithril, I put in SolidJS. I was stuck on a few reactive issues. Ryan was so quick to help me and explained a few things, which really helped.


Interesting, how does this compare to Mithril? I have yet to find anything that is more performant than Mithril. Never heard of Solid. How is it?


Mithril is reasonably fast, but there are plenty of faster options like Solid, Inferno, Preact, or Svelte.

https://krausest.github.io/js-framework-benchmark/current.ht...

IMO the best thing about Mithril is that it doesn't have reactivity, much like Imba. This allows you to define state with pure vanilla objects and classes. Also that it includes an HTTP client and router in just 10kB.

It's really verbose though compared to Svelte.


[Mithril.js author here]

FYI, the krausest benchmark is known among framework authors to be not very good (it weighs some aspects much more heavily than others and has been gamed by various toy-ish "frameworks" that aren't all that practical in real life).

With that said, people obviously use React and even Ember (which are on the slower side of the krausest rankings) out in the wild and they're generally fine frameworks: asciinema-style "render-a-huge-grid-at-60fps" is very much a niche use case that 99.9% of people don't have.

I think the most accurate way to describe Mithril.js is that it aims to be a "get-out-of-your-way" sort of tool, in the sense that if things go wonky, you can generally reason about the low level reason as to why that is the case. For example, in Mithril.js, `render` is not just a first-class concept but an explicit API. So if you ever run into an issue where the template doesn't update for whatever reason, you can intuitively infer exactly what to do to unblock yourself. No need to reason about stale closures when debugging useEffect, observable/signal composition, or similarly complicated mental models. Reactivity, specifically, is great for squeezing performance from needle-in-the-haystack sort of updates when you have humongous haystacks, but it does also have caveats: if Svelte ever doesn't update the template for some reason, the mental model required to understand reactivity membership graphs and reactive bindings and where the compiler has jurisdiction and all that jazz are quite a bit more complicated than "ok fine, just slap a render call in this library's event handler".
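(For readers who haven't used Mithril.js, that escape hatch looks roughly like this; m.mount redraws automatically after its own event handlers, and m.redraw() is the explicit version for everything else:)

  import m from "mithril";

  let count = 0;

  const Counter = {
    view: () => m("button", { onclick: () => count++ }, `clicks: ${count}`),
  };

  m.mount(document.body, Counter); // redraws after the onclick above

  // state changed outside a Mithril event handler? just ask for a redraw
  setInterval(() => { count++; m.redraw(); }, 1000);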

The thing you said about vanilla objects follows from those principles: as a JS person, you know how objects work so you're never going to run into cognitive dissonance about the semantics of your primitives.

The verbosity thing I think is more a testament to Svelte being terse than anything else. Mithril.js isn't really verbose compared to other frameworks, IMHO.

</two-cents>


> run into cognitive dissonance about the semantics of your primitives

Thank you for stating that. Seems like Mithril isn't just fast for the processor; rather, it's fast for mental processing too.


I always say Mithril is a sushi chef knife. It's wonderful if you know what you're doing, but you can cut yourself badly if you don't.

In more popular frameworks like React, Vue, and Angular, there are multiple tools that give you a structure. In Mithril you're free to do whatever you want.

Personally, I love that freedom. It's also one of the reasons I love Svelte, since it tends to get out of the way.


Hey Leo

I agree with all your points, of course. I don't think I ever stated that Mithril was slow or verbose in an absolute sense.


Interested as well; Mithril seems plenty fast for my use case. One area I believe I read about where VDOM/Mithril is faster is dealing with dynamic list data. E.g. you have a list of items you're rendering (probably keyed in Mithril), and when you append a new one, it'll render faster with VDOM than Solid because the diff process will be faster than whatever Solid is doing.


Solid's diff algorithm is generally faster than (or at least very comparable to) Mithril's. We test very well in list benchmarks like: https://krausest.github.io/js-framework-benchmark/current.ht.... We are also fast at node creation, using pre-compilation to prepare the nodes in a way that can be created more efficiently.


Interesting, thank you for the links and clarification. May need to revisit Solid then! What about rendering things that aren't rendered by Solid, like markdown rendering via commonmark? Also, Mithril streams are a huge part of my app; will I miss them with Solid?


Hey, Solid's reactive system uses signals, which are different from streams but work in similar use cases. Streams are slightly more oriented to transformation than synchronization. Most stream libraries could be used with Solid with a bit of an adapter on the end to connect to the templates, as they are a good tool for managing global state.

All that being said, if you are happy with Mithril, stick with it. It sounds like it's done everything you needed. I have a lot of respect for its minimalist approach, and its author is one of the most insightful and helpful people I've come across since getting into JavaScript frameworks.

If you are interested in trying something different, check out our tutorials on the site and see how you feel about it. It is a little bit of a different type of framework.


Interesting, but your comment on the docs and the fact that you need to ask the dev on Twitter are a huge turn-off.

Documentation > performance for most business applications, because it's developer performance. I don't have time to reverse engineer some uber nerd's SIMD optimized world wonder, I have things to ship.


When I go to https://markojs.com in Safari (14.1.2), the CPU load on my MacBook Air goes up above 100%. If I use the Brave browser, the CPU load is closer to 25%. Still too much.


I've just completed porting react-bootstrap to SolidJS and the process was fairly painless.

Having components only run once really simplifies things... no need to stash useRefs and useCallbacks everywhere. Refs are simply the elements themselves.

Mostly porting required a fairly repeatable pattern of removing awkward React code and using a couple of Solid functions to keep props reactive when splitting them up to spread across JSX elements.
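(For the curious, the pattern mostly leans on helpers like splitProps from solid-js; roughly something like this, give or take the exact prop names:)

  import { splitProps } from "solid-js";

  function Button(props) {
    // keeps both halves reactive, unlike plain destructuring
    const [local, rest] = splitProps(props, ["variant", "children"]);
    return (
      <button class={`btn btn-${local.variant}`} {...rest}>
        {local.children}
      </button>
    );
  }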

A+ for Solid developer experience (coming from 5+ years working with React). Oh, and performance/size are added benefits.


> In Solid, props and stores are proxy objects that rely on property access for tracking and reactive updates. Watch out for destructuring or early property access, which can cause these properties to lose reactivity or trigger at the wrong time.

From the docs; does SolidJS provide a way to lint or warn on this? I've been getting more and more scared of destructuring and ... copying recently since, for example, you lose prototype info when you do this. TypeScript doesn't warn you.
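For context, the footgun the docs describe looks like this, if I've understood it correctly:

  // Reactive: props.name is read inside the JSX, so updates are tracked
  function Greeting(props) {
    return <p>Hello {props.name}</p>;
  }

  // Not reactive: destructuring reads props.name once, at setup time
  function Greeting({ name }) {
    return <p>Hello {name}</p>;
  }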


I’m working on eslint-plugin-solid for this reason, and while I haven’t implemented this yet (it’s complicated), it will eventually warn when something is wrong with reactivity in general.


> I've been getting more and more scared of destructuring and ... copying recently

Well, certainly it should be clear that `obj !== {...obj}`, and you have to behave accordingly.

In reality though, assuming you are destructuring to pass it down a tree (and not around the app), this usually just means that you lose the optimizations of only rendering some part of the subtree and render your whole subtree more often, which is equivalent to using state management that isn't integrated into the scheduling engine very well, which is very common. So you're just going from a nicely performant app to standard fare.

I would recommend always trying to think in a singleton structure and using IDs and maps to the original objects, rather than de- and re-constructing things you pass around as if they were the original.


> Well, certainly it should be clear that `obj !== {...obj}`, and you have to behave accordingly

Sure, but that's not my issue. I'm saying that when you copy an object like you just did, the latter object loses its prototype.

So any prototype methods I try to call on a copy/de structured object like that will crash in my app without a prior TypeScript warning.

I want to be able to copy objects with a nice syntax but still have them retain their prototype.

Since no one warns you (TypeScript anyway) about losing your prototype it makes me worry about this everywhere because who knows which objects weren't meant to lose their prototypes.


> > Well, certainly it should be clear that `obj !== {...obj}`, and you have to behave accordingly

> Sure but that's not my issue

I think you misunderstand what I'm saying. I know you know `obj !== {...obj}`, but it's important to understand exactly what that means, and one of those things to understand is

> when you copy an object [via destructuring], the latter object loses its prototype

For example, any class instance should most certainly not be used in that way, as it moves away from a true "object" paradigm (keys and values baby) into an inheritance paradigm, and as you observed inheritance is lost in destructuring.
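The runtime side of that is easy to see:

  class Point {
    constructor(x, y) { this.x = x; this.y = y; }
    norm() { return Math.hypot(this.x, this.y); }
  }

  const p = new Point(3, 4);
  const copy = { ...p }; // copies own enumerable properties only

  p.norm();                                         // 5
  Object.getPrototypeOf(copy) === Point.prototype;  // false
  copy.norm();                                      // TypeError: copy.norm is not a function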

> TypeScript doesn't warn you

I find this hard to believe. If you are passing TypeScript some interface T, and the object {...tInstance} doesn't have the keys of T, you should get an error. If you are passing TypeScript some class X, and you try to claim `typeof {...(xInstance)} === X` you would also surely see errors.

Please link an example so I can understand what you mean. I would guess you didn't type the destination very stringently, so the loss of the class type went unobserved.


I can't reproduce what I'm saying! Well that's interesting. Thanks for pushing me.

I wonder what I was seeing in my code...


Cheers, good luck to figure it out.


Destructuring only really makes sense for simple struct-like data structures. If you are worrying about prototypes you are destructuring the wrong things. Duplicating class instances is almost always something that you'll need to do manually or through serialization in OOP afaik.


All I did was try to put some helper methods on data classes. :D But yeah clearly I'm going against the grain.


It could potentially if we do analysis to identify components. It's kind of like React's rules though in that sometimes you want to access things in an untracked context on purpose. A simple linting rule that is ignorable would probably help.


Properties are proxies, but getters should be used for everything else. Why weren't getters also used for properties?


Props are getters. They are shallow. It's stores that are actual ES proxies.

I think the term is used loosely here to suggest that they are wrappers on top of objects. It isn't so much about the specifics but to explain why destructuring should be avoided.


Yes, by getters I was trying to refer to these wrappers.

  const valueGetter = () => props.value;


It's a fair question. Pre React Hooks I expected the proxy-plain-object approach to be more common. We definitely want consistency regardless of whether the component consumer passes a signal or a literal, so I went that way. There are cases, like with spreads, where they are actual proxies too. But I probably could have made all-props-are-functions work, in combination with proxies as well; things just didn't play out that way.


I'm doing a lot of data viz work in different environments, and Solid is my go-to tool. It is small and flexible, so I can easily inject it into another page/project.

What is interesting is that the render function really is a function factory, like a reagent form-2 component. That lets you simplify the "hooks rules", because components are called once, and it also lets you factor your components more easily without much consideration for the cost of the component abstraction.

My favourite feature is "opt-in" reactivity. In bigger data visualisations you must be conscious of what will re-render in response to what change. If you have only component boundaries, like in React, it is very easy to waste re-renders, and then you start adding useMemos and React.lazy. In Solid I can start from the opposite side and declare granular computed properties, and Solid will take care of all re-renders.
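A small sketch of what I mean, using createMemo (simplified, and the names here are made up for the example):

  import { createSignal, createMemo } from "solid-js";

  const [points, setPoints] = createSignal([]);
  const [width, setWidth] = createSignal(800);

  // recomputes only when points() or width() change,
  // and only the DOM bound to it gets touched
  const scaled = createMemo(() =>
    points().map((p) => ({ ...p, x: (p.x * width()) / 100 }))
  );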

It has a few rough edges, like the special props object or "boilerplate" function calls, but those are minor ergonomic issues.

In conclusion, thanks Ryan!


Not the submitter, but I saw it recently mentioned as the small library powering asciinema's 3.0 rebuild[1].

[1] 4x smaller, 50x faster, https://blog.asciinema.org/post/smaller-faster/ https://news.ycombinator.com/item?id=29387761


Yes, I found it from the article. I think it is an interesting library. It's pretty fast and has been developed/used for 5 years.

I cannot understand how some libraries are so popular without changing that much and others are not popular at all.


To be fair, the first few years I wasn't really promoting it. Honestly I was just content entering benchmarks and using it for my own purposes. Then React announced Hooks and it was like looking in a mirror. At that point I realized that people might actually use this library, so I started promoting it. Honestly, bigger players have so much inertia behind them that it takes years to make a dent. We released 1.0 in July and things are just getting going.


Something's weird with the playground widget on this landing page. I wanted to make the code section bigger to take a better look but when I drag the slider it just goes up, showing a little sliver of code (around 2 lines). Doesn't make a very good first impression.


Yeah, I see the bug. Thanks for reporting. Looks like the code added for re-adjusting has mins set that weren't intended for smaller viewports. This has been a community effort (PR that added the feature: https://github.com/solidjs/solid-playground/pull/43) and we are continuing to improve things.


I'm confident that's a browser native feature and not something with their code. Are you on Firefox?


I thought about that, but it happened to me on Chrome as well.


I read this article[0] which was written by the framework’s author. It looks like it gets closer to “reagent in plain Javascript,” which is pretty great.

JS development would benefit greatly from a native “immutable, nestable, performant, deep-compare-by-value” data type that supported something like these operations:

  atom.get(path) 
  atom.set(newValue) // returns new atom
  atom.set(path, newValue) // returns new atom
By handling this type in the JS engine, you can take advantage of HAMT to make this quite performant[1]. The “single-global-state atom” pattern is a point of convergence, and it would be nice to have native speed instead of relying on one of the many libraries that reach for it, each with their own trade offs. (SolidJS uses proxies for this.)

It should be a language-level abstraction. I suspect it would be rapidly adopted by a lot of frameworks, and we’d all benefit from it.

[0] https://javascript.plainenglish.io/designing-solidjs-immutab...

[1] https://en.m.wikipedia.org/wiki/Hash_array_mapped_trie


Isn't immer pretty close?


Some of the examples didn't load for me or contained errors (such as the css animations and todo). I'm on Android chrome.


I'm on Linux/Firefox 94.0 - same.


Can we please just get the ECMAScript body to backport some of these common features into the core language itself? I can't keep up with all the javascript frameworks of the day :( By the time I finish one feature, ten new frameworks have popped up. I can no longer tell the difference between innovation and anarchy.


>I can no longer tell the difference between innovation and anarchy

I think this is kind of the thing with innovation. Innovation is anarchy until the point where the innovation is accepted as best practice. And at that point it's not really innovation. It's just best practice.


Yeah, but we shouldn't need 15, no, 17, no, 27... different ways to "update the UI when the data changes after an API call". Just one good one -- not necessarily the best, just good enough.

Something a bit more advanced than fetch, not quite as complex as useContext. To the end user, websites haven't really changed all that much since the early AJAX days. Input gets received, ajax happens, shit gets updated. Why has the tooling gotten 1000x more complex?


I suggested one such idea (a diffing version of innerHTML) years ago to Mozilla. Ironically, the React team was against it and it fizzled out. In a more ironic twist of events, someone eventually wrote a JS implementation of it: https://github.com/tbranyen/diffhtml/tree/master/packages/di... and nowadays people are talking about HTML-based rendering engines again, making this idea somewhat relevant once more.

To be fair to standards bodies, they have done some work. Element.append now exists to make hyperscripts a bit more straightforward, and a lot of reactivity semantics can be implemented on top of Proxy.
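e.g. a bare-bones sketch of change notification on top of Proxy, using nothing but the standard trap API:

  function reactive(target, onChange) {
    return new Proxy(target, {
      set(obj, key, value) {
        obj[key] = value;
        onChange(key, value); // notify whoever cares
        return true;
      },
    });
  }

  const el = document.querySelector("#count");
  const state = reactive({ count: 0 }, () => {
    el.textContent = String(state.count);
  });

  state.count++; // DOM updated via the set trap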


Congratulations, now every browser (oh wait, there's only 1 real browser) will implement it slightly differently and you'll use a library once again to reconcile all of them. (Also old browsers, browsers implementing the non-final version of the spec)


A polyfill lib is different from an entire framework. Besides, even if you use a framework, you still have to polyfill for browser diffs anyway.

But with ES standards, at least the code can be largely the same, not as different as vanilla/jQ/React/Angular/Svelte/Vue/someotherslickonewordjsframeworkdujour.


Such a pleasant library to use, and coming from Vue I was somewhat surprised by that as Vue was the first lib I used where I could just intuit how something worked and was very often correct or close.

Great job.


It will be sad if this UI library makes Tim Berners-Lee's long-term project Solid less visible: "Your data, your choice. Advancing Web standards to empower people." https://solidproject.org/

In the future, people might build UIs for Solid using a (hypothetical) solid-solid or at least combining Solid UI and SolidJS: e.g. https://github.com/solid/solid-ui vs https://github.com/solidjs/solid

At least, if SolidJS is supposed [1] to be a name separate from Solid, why not call the repo solidjs like the org?

[1] https://news.ycombinator.com/item?id=23375912


The name is Solid, the author uses SolidJS to disambiguate. He also has solid tattooed from before writing this library, IIRC a reference to a punk band he was in. Which doesn’t necessarily trump Berners-Lee, but I think both can coexist maybe?


See also: https://github.com/inrupt/solid-ui-react, and the `https://github.com/inrupt/solid-client-*-js` names under Inrupt. (I helped build most of them!)

I can already imagine the naming discussions if we were to build a Solid library for Solidjs. "Solid-ui-solidjs"? "Solid-client-solidjs-js"? Oh boy. We're currently using React for most things, but I could imagine switching to this if it takes off... that's going to make following conversations very confusing, haha.

Now that I say that and I've clicked around, I wonder which is canonical: "Solidjs", "SolidJS", or "Solid.js"? I've seen it all three ways.


In some benchmarks Solid appears to come in as both the fastest and smallest JS library. https://levelup.gitconnected.com/a-solid-realworld-demo-comp...


I've tried asking on Remix's discussions whether the use of SolidJS is possible within their framework; it seems to me at first glance that porting it won't be much trouble. It would really simplify full-stack development.


Hmm... Remix is based around their router, and a nested router is what we need for Solid (see Solid App Router https://github.com/solidjs/solid-app-router). I think the challenge is that we don't render like React. Not at all. I've found most cases where that assumption exists to be incompatible.

That being said, the work has already started on a starter with Nested Routing/Automatic File Based Routing + Code Splitting/Parallelized Data Fetching/Streaming SSR/Multiple deployment adapters. We're giving it the same focus on performance that we've given the rest of Solid.

Here is the recent Vercel Edge Function demo we made with it: https://twitter.com/RyanCarniato/status/1453283158149980161


An interview with the creator of Solid.js: https://www.youtube.com/watch?v=Dq5EAcup044


Ryan is a cool guy, but I feel like a lot of the work they're doing on Marko will be much better than SolidJS. The SolidJS API is non-intuitive, same as the reactive system, say compared to Svelte. Hooks are already a bad idea in React, for reasons I will not expand on, and having the same concept in SolidJS isn't progress at all. Svelte nails reactivity; it's something you don't think about.


Something you don't need to think about until, well, you realize reactivity leaves templates and you need to write a store. Or you have large data and need things to only update piecewise. Or you need to hoist things out into functions. Svelte has done an amazing job with its compiler, but there are considerations you need to understand with its reactivity.

Solid does not have Hooks or Hook rules. They look similar but execute more like, say, Svelte. Although Svelte is still about component re-renders, while Solid's reactivity is more granular, hence the performance improvement.

This area of DSLs is very superficial for the most part. I think people have preferences, and I'm exploring both sides between Marko and Solid, but saying things like "unintuitive" I think mischaracterizes things. Maybe explicit, transparent, transferable, and composable are better adjectives that apply more to Solid than to Svelte that you can use in the future.


I think you need to take a second look at Solid. It isn't subject to the same "rules of hooks" as React, so hooks are really just functions that return reactive units. You could argue writable and readable are just hooks for creating stores. Its reactivity is fairly similar to Svelte from there. There are stylistic choices, with Solid choosing a function style and Svelte abstracting that with proxies and allowing a mutation style, but otherwise how the reactivity works is close.

I do agree that Marko is quite promising. I think quite a few sites that need server rendered markup and progressively enhanced features could be implemented more simply in Marko than the equivalent solutions with React, Svelte or anything else.


Svelte doesn't really use any kind of proxy. It instruments its code with explicit invalidations and scheduled updates. Mutations are tracked "lexically" intra-component with static analysis, or with explicit functions when using stores.


You're right, I think I meant something more like proxy-style mutation. It does of course compile down to functions operating on stores, which is even more like Solid.


> Performant - Consistently tops recognized UI speed and memory utilization benchmarks.

as my computer keeps freezing while loading / scrolling the site (MacBook Pro 2015 with Firefox) [struck through; see the edit below]

It's also not very clear what exactly solidjs is. My understanding is that it's basically React but with a different method to render components?

Otherwise the site is actually impressive and the UX is really good.

EDIT: Sorry this actually seems to be my computer's fault because other sites are slow too, even HN is a bit jittery. Idk if I have too many tabs open.

I can say that the site particularly slows down whenever I'm scrolling and especially when scrolling while an animation plays. Changing tabs is very fast even though there are a lot of animations.


Solid author here. Hmm, I don't see this on Firefox on Windows or on my MacBook Air. From what you are describing it's probably the REPL acting up. I wouldn't use that as a measure of performance.

SolidJS is a UI library that is basically a reactive state library first, renderer second. It happens to look like React by choice, since it chooses JSX for its flexible composability and React Hooks resemble reactive primitives. Cliff notes: reactivity is independent of components. Components are just functions that run once and wire up granular updates. Then only the things that change ever re-run.


It wasn't clear to me if this API is hook-based like React. There is a function called useTransition, but it's not indicated as a hook, and then in the playground you just call createSignal in a render function and it magically has its own instance? Does that mean you can't call createSignal outside of a component either?


You can put createSignal anywhere. It isn't hooks-based. It's reactive, like MobX or Vue. Some of the primitives don't have much meaning outside of a render setting. What sets Solid apart is that the rendering is just that: it's just `createEffect`s. We just compile JSX to it. See my React Finland talk: https://www.youtube.com/watch?v=2iK9zzhSKo4
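If it helps, here's the idea without any JSX at all, just the reactive primitives driving a DOM node directly (simplified; the actual compiled output uses cloned templates and a createRoot/render wrapper):

  import { createSignal, createEffect } from "solid-js";

  const [count, setCount] = createSignal(0);
  const node = document.createElement("span");
  document.body.appendChild(node);

  // the "render" really is just an effect writing to one node
  createEffect(() => {
    node.textContent = String(count());
  });

  setCount(1); // only this effect re-runs, only this node changes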


It's interesting you report this website issue, because I know some on the team working on the site are very diligent about testing on various devices. I've highlighted your comment in the SolidJS discord server's #website channel.

SolidJS is a framework at the abstraction level of React, Vue, Svelte, or Marko, with an API that adopts certain philosophies from React. It is an entirely incompatible framework, however, and not just a drop-in renderer (though it's easier to port to from React due to API similarities).


this made me nostalgic for angular 1.


Did you just want to say Angular 1 for whatever reason? There is nothing like it in here. Looks like a better React, if anything.


Vue reminds me more of Angular than this does. This is way more similar to React.


If your website has janky scrolling I'm not using your JS library. No exceptions.


Yes, any idea why that is?


More than likely it's the lazy loading of the REPL. Those code editors are heavy, and when they scroll into view they need to load. If we loaded up front it would drastically tank the load performance for people just visiting the site. It's possible on mobile we should opt for a click-to-load strategy.

There is very little we can do about this once we do go to load; it's just the nature of a heavy, fully featured editor like Monaco.


I wonder if you could share a single Monaco instance across multiple editors with some kind of twoslash chicanery behind the scenes. The TypeScript Playground recently got support for “multiple files” via twoslash comments; it’s kind of buggy but may work with some finesse.


Is this backward compatible with other React packages? Otherwise, I’m afraid it may just turn out to be Yet Another Js Framework.


No, because it's _not_ React-based at all. It's a completely different framework.

It's got some syntax similarities, in sort of the same way that most of the C-family languages look similar (for those languages, curly braces, if statements, semicolons, declaring data types; for Solid and React, function components and JSX syntax), but that's it.

FWIW, from my own viewpoint in the middle of the React ecosystem, I think Solid looks like a fascinating approach and it has a bunch of reasons to be worth considering in its own right. Doesn't mean it'll magically gain adoption or ever be considered one of the major players in the web framework space, but it's definitely far more than just a random toy project.


It isn't; https://www.solidjs.com/guide#react is a kinda nice read.



