As a narcoleptic I wish that the diagnosis was more accurate, or at least that the insurance companies were more holistic in their coverage of medication. The multiple sleep latency test hardly qualifies as science and has a terrible false negative rate. It’s also expensive so insurance is reluctant to cover it in the first place, and outright hostile to a second attempt.
Any neurologist will tell you that your first night’s rest in a new location will be of a lower quality and depth than at your home. Despite knowing that, sleep studies are performed at the hospital in a room so uncomfortable that it makes the Holiday Inn feel like the Ritz. You’re then hooked up to a dozen different monitoring devices and asked to sleep in an uncomfortable bed with a camera observing your most vulnerable position. You should have no trouble falling asleep!
The second day is peppered with six attempts at napping within a short window, and if you enter REM within a threshold, you’re officially diagnosed as narcoleptic. Otherwise you get a consolation prize of “idiopathic hypersomnia,” i.e. “sleepy person syndrome.” This methodology only selects for the most severe cases of narcolepsy, and as a result, allows insurance companies to gate-keep expensive medication.
I’ve read that the gap between a patient’s first suspicion of narcolepsy and their final diagnosis is estimated at 8 to 15 years! IMO there is a subconscious characterization of known-unknown diseases as a personal failing of the patient’s virtue. Convincing your parents, teachers, and doctors that you’re not just lazy is near impossible until the symptoms become too frequent to explain away. It also stands that doctors cannot be perceived as lacking critical information; therefore it is Not Allowed for their patients to be fatigued unless they’ve earned it, or been put through the gauntlet that is our medical system.
Unless your condition is obvious or catastrophic, you're SOL with doctors. If they do not immediately know the diagnosis they either guess or decide you are just a complainer. Your description of symptoms means nothing. They do not research anything if they do not know it immediately. I have narcolepsy and was able to diagnose myself off Google after every doctor had failed. I finally went to a sleep specialist at Stanford and he confirmed with testing.
If it’s any consolation, I haven’t gotten any favors as a trans woman, even with “passing privilege.” Both myself and the cis women in tech I know all hear the same thing: companies are tipping the scales to favor diversity hires. But in truth it seems to be a marketing tactic rather than a hiring strategy.
It’s quite bizarre at all levels — I often receive invitations from recruiters to apply to “women led startups”, but when I ask why I’m qualified there’s no real explanation other than I’m a woman who owns a computer. The same seems to be true of female-founded startups. Doesn’t matter what the role is or what’s being built — does she use a computer while in an office building? That’s women in tech! The purpose of most of these interviews is really about manufacturing consent: “It’s just too hard to hire women! Just look how hard we’ve tried!” I’m all for incentivizing underrepresented groups, but it wouldn’t be so bad if the phrase “women in tech” were shorthand for “women who have written a lot of code” and less about “brave women who use their yonic powers to guide the brutish male code monkeys.” Attend a FAANG-sponsored women-centric event and you’ll see that I’m only exaggerating a little bit.
Ironically, my transition has been something like a rendition of The Gift of the Magi: The more passable I became, the less experienced I was perceived to be by my peers. And worse, what was once thought of as a confident display of technical ability is now seen as a lack of demure. Insecurity runs deep in this industry.
IMO the hiring problem isn’t about gender or race. It’s the fact that tech no longer has the luxury of an economic environment where all the money is imaginary. There’s really no era quite like the last two decades. Tech companies could burn through billions of dollars on intangible assets with no immediate need for deliverables. As the perception of innovation diminished, companies feigned cutting-edge leadership by leaning into virtue signaling, and as a byproduct, had their employees fight over who’s more oppressed.
I think everyone here has questioned whether their skill set is actually worth their salary. “Sure, sometimes it’s a free ride, but those hard sprints are really why I’m paid six figures!” — It’s explanations like that which let software engineers hit the snooze button on whether their employer’s solvency derives from their technical expertise, or rather from two decades of zero-interest-rate policy. It’s likely a little bit of the former and a lot more of the latter.
IMO most engineers are looking through the wrong end of the telescope, approaching the job hunt the way you date when you’re looking for a comfortable but uncommitted relationship. That time is over, and our jobs are now akin to the blue-collar trades, whose customers have a clear idea of what they’re paying you for, rather than a vague set of technical skills that might be worth exploring on their dime.
To add to your ZIRP point: I find it bizarre that tech companies are still valued at 10x their revenue in the stock market. The multiple is a high-tech premium... because computer technology is new and therefore every business that involves computers is high-growth?
We don't pay crazy multiples for businesses that use electricity or telephones. Why should we pay them for tech? Soon all these high-tech companies are going to have valuation multiples like manufacturers, logistics companies and fast-food chains.
> The more passable I became, the less experienced I was perceived to be by my peers. And worse, what was once thought of as a confident display of technical ability is now seen as a lack of demure.
This is very valuable information, thank you. Most of us only ever have the chance to experience the situation from the vantage point of one gender. Are you in the US, or what?
Much appreciated. I’m US based, but I travel a lot for work. It’s a blessing to have a wider perspective on gender roles, especially with so much of the journey now in the rear view mirror. AFAIK there’s no lower rung on the corporate ladder than a sad partially-baked trans person. Hormones are cheap, but a remote tech job with a decent salary can make a transition affordable without jeopardizing your career, socioeconomic trends notwithstanding.
Sadly there are many ways to experience negative social expectations at work. Several of my formerly heavy-set colleagues have noticed that perceptions of their competence shifted with their weight loss. Most of the cis men I know use a combination of testosterone, Ozempic, hair plugs, lifts in their shoes, etc.
I really enjoyed the author's technical deep-dive and approach to debugging performance issues. Mild spoilers for anyone who hasn't played Riven, but the method for fixing Gehn's faulty linking books is a perfect analogy for the author's more counterintuitive performance optimizations.
While I don’t have a write-up as detailed as this one, I spent a month on a similar journey optimizing an animated ASCII art rasterizer. What started as an excuse to learn more about browser performance became a deep dive into image processing, WebGL, and the intricacies of the Canvas API. I’m proud of the results but I’ve annotated the source for a greater mind to squeeze another 5 or 10 FPS out of the browser.
Maybe it’s time to brush up on those WebGL docs again…
I hope that I’m not the only one who feels the anger emanating from these sorts of blog posts. It’s stuff like the Qwik developers claiming things like “hydration is pure overhead” as if it’s the mathematical proof that keeps their reality from crumbling. It’s the same thing on YouTube with people like Theo, gnashing their teeth at how incredible Tailwind is; you’re objectively stupid for not liking what I like; you’re using it wrong; you’re not as smart as me; I drew you as the Soyjak and me as the Chad. Please won’t somebody tell me that I’m cutting edge?!
I really wish these people would pick up a history book. Unlike back-end development, whose worth is intrinsically recognized as necessary, front-end development was a lowly, micro-managed job that had to simultaneously keep up with customer expectations in a variety of formats while also rendering whatever slop passed for a REST API. React’s adoption was a combination of talented marketing and a genuine empathy for the frustrations of a 2010s web developer. They gave us a white lie to pitch the idea to our managers: “It’s just the ‘V’ in ‘MVC’!”
JSX freed us from the jQuery-Rails template spaghetti. A quiet revolution soon followed and everyone’s been butthurt ever since.
Look — Server-side templates, especially the “stringly” typed variety, are a demonic chimera suitable only for depraved alchemists. There’s no type-safety, no IDE references. You’re in Hokey Pokey Hell — we start with a string, now we’re interpolating, back again, now once more deeper and let’s really put your editor’s syntax highlighter to the test!
It’s no surprise that stringly typed tools like HTMX and Tailwind are so deeply admired by mid-career developers who are frustrated by their lack of experience and eager to prove their talent. That’s all very normal and healthy, but the problem isn’t that React is too complex. Building software as a team is a complex exercise in communication, and pretending to be illiterate doesn’t make the hard words any less difficult to read.
There’s most definitely room for improvement in React, and the team at Svelte demonstrated that you could have your state and skip the boilerplate too. Svelte’s compiler is a genius move, and unfortunately for them, React’s upcoming v19 will commoditize their complement.
It’s never been about replacing React — it’s about empathizing with developers and making it easier to work together.
I generally agree (that the React isn't the main culprit), but what does Tailwind have to do with templating? It's still just CSS classes, but with specific names. Also, I bet you could have strongly typed class name strings using TypeScript's template literal type somehow.
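That bet would likely pay off. Here's a minimal sketch of the idea; the `cls` helper and the tiny utility vocabulary are invented for illustration, not taken from Tailwind itself:

```typescript
// Sketch: a template literal type that only admits a small, made-up
// vocabulary of utility class names at compile time.
type Scale = 1 | 2 | 4 | 8;
type Utility = `p-${Scale}` | `m-${Scale}` | `text-${"sm" | "lg"}`;

// cls() joins known-good utilities; anything outside the vocabulary
// becomes a compile-time error instead of a silent typo.
function cls(...names: Utility[]): string {
  return names.join(" ");
}

const header = cls("p-4", "text-lg"); // "p-4 text-lg"
// cls("p-3");  // type error: "p-3" is not assignable to Utility
```

A real implementation would presumably generate the `Utility` union from the project's Tailwind config rather than hand-writing it.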
The gist is that a new cohort of front-end developers, and some begrudging back-end folks who never actually learned how anything worked, are now trying to inscribe their influence as the new smart people with tools like Tailwind and HTMX.
Wanting to prove yourself isn’t a problem. It’s actually a sign that a developer is starting to form their own opinions. But it becomes toxic when the primary motivation comes from a desire to appear smart instead of actually solving a problem.
You’re absolutely right about typed classes, and that’s how React Native does it. Writing and debugging CSS is hard for many of the same reasons that string-based templates are. IMO the developers who push Tailwind are looking through the wrong end of the telescope. CSS is challenging because it’s a combination of declarative aesthetic UI and imperative state management. Choosing to represent that complexity with Tailwind turns what could have been a temporary ignorance into a more permanent crutch that retains the same faults as the underlying abstraction while tragically opting out of any of the benefits of embracing the system. Modern CSS is pretty great, and learning how it works pays endless dividends.
> Choosing to represent that complexity with Tailwind turns what could have been a temporary ignorance into a more permanent crutch that retains the same faults as the underlying abstraction while tragically opting out of any of the benefits of embracing the system.
I think you maybe don't understand what Tailwind actually does. The only abstraction is its "language" that is used to generate the list of class names you can then use to style elements, but that's not really an abstraction, because in the end it's not that different from using something like BEM and then having to remember project-specific classes. But with Tailwind, it's not specific to a single project.
And you always have to understand CSS and how it works anyway.
As a web developer, I’d like to think that we’re effectively alchemists who transmute vague ideas into products held together with absurd magic that’s constantly changing.
Can we get a bill going? I can’t decide between “Webmancer” and “www.izard.com”
I just don’t buy how this is a productive way to build websites. Having the functionality of HTMX natively supported would be nice but you’d still need much of what React does. HTMX’s docs seem to hand wave away front-end state management as something that no longer applies. Simultaneously, they also assume that every API you interact with will return HTML partials.
What could convince anyone to abandon the rich and bountiful lands of JSX and TypeScript? Who would prefer to move into a write-only and stringly typed HTML that competes with PHP for the slot of least performant debugging experience?
> HTMX’s docs seem to hand wave away front-end state management as something that no longer applies.
I feel like you're begging the question that you need front-end client & state. I have ASP.NET apps still running from 10 years ago. They're fine. I'm adding HTMX to remove the page reloads. Why do I _need_ anything else?
I've been consulting for a large org where someone decided that every web app needs to use a consistent pattern, and they mandated Angular. This could have been React, Vue, whatever, the point is that they picked a client-side JavaScript framework for all of their web apps.
Turns out that they weren't actually making web "apps". They were making web sites with mostly read-only content and a handful of web forms.
Traditional server-side templating, like Razor pages, is a well-established method for handling this. Something like HTMX adds the tiny bit of client-side interactivity that is actually required. Nothing else is needed.
The article talks about reducing code size by two-thirds and you just handwave that away!?
That's the exact same thing I've been telling my customer! They're literally bloating out their codebase three times over (3x!) by using JavaScript client-side rendering instead of plain, ordinary, boring, and simple server-side rendering like they should have.
For every single thing that they do, they need a C# bit of code and a TypeScript bit of code, and a whole API thing to wire the two up. They are forced to use distributed monitoring spread across the browser (untrusted!) and the server! Deployments have to factor in client-side cache expiry! And on, and on, and on.
I did a demo for them recently where I rewrote several thousand lines of code with 50. Not fifty thousand. Fifty.
"Thousands of lines? This is fine" -- says the developer on the hourly contractor rate.
You aren't more productive when building websites; it's when maintaining them that the gains show up. Suddenly you don't need to scale the insanely complex bundling & caching with each feature you add, or deal with the JS upgrade churn.
The more developers you have, the worse React SPAs scale, in my own experience. In my current company it's even visible in the bundling graph itself over time.
Respectfully, those metrics are not proxies for productivity. They don’t seem to be grounded in a statistical model either:
>They reduced the code base size by 67% (21,500 LOC to 7200 LOC)
> They increased python code by 140% (500 LOC to 1200 LOC), a good thing if you prefer python to JS
Literally what? So they rewrote their app, which was most definitely in a state of affairs that warranted a refactor, and then concluded it must’ve been the limits of React. Oh, and they rewrote the back-end too, all while singing the virtues of a library claiming a lower technical investment.
Believe me, I’ve got plenty of gripes with React. It’s very easy to build the wrong things with it. And the ecosystem is an overgrown mess. But I’d still prefer a problem of technical curation over debugging a library which marries HTML and server-side templates with an untyped DOM runtime.
there's always going to be an excuse if you want there to be
this is a real world situation (warts and all) where someone rewrote their whole app that had taken them two+ years to build and that was stalled with htmx in two months from a cold start w/no falloff in UX, they simplified the codebase tremendously, they improved performance in both dev & prod and it flipped their entire team to full stack, eliminating a flow bottleneck in their development process
i try to be balanced about things, outlining when hypermedia is a good choice (it was in this case) and when it isn't, but c'mon... if a more conventional reactive library showed this sort of improvement you'd be interested in learning more.
So, maybe it's worth a more serious look, despite your priors? The ideas are interesting, at least:
> hand wave away front-end state management as something that no longer applies
Does client-side state often need to exist independently of server-side state? I’m having trouble imagining a shopping cart or email draft being optimal UX-wise without the ability to resume on a different device.
For things like dropdowns and modals, you can bring in _hyperscript, Bootstrap, Alpine, or even CSS hacks (my preferred approach).
> the rich and bountiful lands of JSX and TypeScript
One person’s richness is another person’s needless complexity.
JSX is cool when you first try it, but the novelty wears off (at least for me it did). There are superior templating languages (Django, Jinja, EEx, erb) that don’t require bizarre syntax such as nested ternaries, and they make it feel like you’re just using a slightly-enhanced superset of HTML (not to mention being able to use them to render things other than HTML).
As for TypeScript, with the checks stripped out at runtime, you’ll still need to validate and test the assumptions your typed code is making. Frankly, TS seems like busywork to me.
Finally, Progressive Enhancement is a thing with htmx. You might be able to have it with React, but then you introduce even more complexity into the build system.
Look up a stenographer’s keyboard. There is a learning curve, but a chorded keyboard can exceed typical typing speeds. I imagine a T9 isn’t too different in this regard.
I use one. I don't think that it would be a good substitute for this use case. You can try and do steno on your phone with Dotterel but it's not a good experience - you're better off using a swiping keyboard. I've not used a T9 system in my life, but I can imagine that it's a system that would let you input anything just typing with your thumbs. To have a good time doing steno, you have to exercise all of your fingers on both your hands. That's not quite so nice on your phone.
This is very cool. I suspect the author encountered that 12 pixel offset due to a default value on the canvas’s text baseline property. Setting it to `top` may resolve the issue, as would invoking the `measureText` method and calculating the offset from its output. A fixed value for a monospace font is pretty good too!
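For the curious, here's a rough sketch of both fixes; the `baselineOffset` helper and the numbers are illustrative, not from the article:

```typescript
// Fix 1: anchor glyphs to the top of the em box so no offset is needed:
//   ctx.textBaseline = "top";

// Fix 2: keep the default "alphabetic" baseline and shift each glyph
// down by its measured ascent, taken from ctx.measureText(ch).
function baselineOffset(metrics: { actualBoundingBoxAscent: number }): number {
  return Math.ceil(metrics.actualBoundingBoxAscent);
}

// Browser usage (sketch):
//   const m = ctx.measureText("M");
//   ctx.fillText(ch, x, y + baselineOffset(m));

console.log(baselineOffset({ actualBoundingBoxAscent: 11.6 })); // 12
```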
Shameless plug, I’ve actually built the opposite of what the author has described. Asciify[1] is my very own highly efficient and over-engineered tool to generate animated text art. It started as an excuse to learn more about browser performance and just expanded out from there. I would love it if a greater mind could squeeze another 5 or 10 FPS on the spiral demo[2]. Maybe it’s time to brush up on those WebGL docs again…
6to5… err, I mean Babel, already accomplished its mission to bridge the feature gap between older browser implementations. And like all bureaucratic melanomas, the maintainers made a strange decision to expand their domain not only to ES7, but to ALL FUTURE VERSIONS OF JAVASCRIPT FOREVER.
Babel became Webpackified and splintered into poorly understood preset bundles of the latest revelations of the TC39. A fractal of API documentation could then be written and rewritten again for the next mission: Newer is better. Modularize everything. Maintenance is a virtue.
I’m guessing that the brain trust at Babel HQ saw how the left-pad situation panned out and something clicked — we could turn our discrete task into an indefinitely lucrative operation as a rent seeking dependency for everyone. Every week could be infrastructure week so long as JavaScript kept adding features.
But what their hubris didn’t factor in was a petard hoisting much higher on the food chain — the Chromification of the web. Now that everyone who’s anyone is building a browser on the same engine, there’s no need for a second cabal of feature creatures to get a cut of the action.
It’s the same reason Firefox’s Wikipedia page has to be disambiguated with the term “cuckold”; the same reason core-js can’t ask for a dime without macro fiscal policy being invoked by armchair techno-economists. Why are you running out of money? Simple — we already paid for it!
These projects have transmuted one kind of technical debt into another, and the sooner they’re gone, the better we’ll all be in their absence. I would pray for a cosmic force to come and topple Babel back to earth, but the irony would be lost on them.
I downvoted you because you made multiple bad faith accusations about people involved in these projects. Regardless of Babel's and Firefox's utility your negative snark isn't helping anyone.
I do appreciate your transparency, though I disagree with the sentiment that I’m arguing from a position of bad faith.
The Babel team has not shown a moment of interest in lowering their role in the JavaScript ecosystem to anything short of kingmakers. I think the facts are self-evident, but I can easily back up my claims by citing pretty much any document the team has ever produced. Have a gander at their GitHub README and what do we see?[1]
- “Babel is a compiler for writing next generation JavaScript.” I suppose they left out “indefinitely” to avoid the obvious. Don’t forget, you’re here forever.
- Over a dozen sponsor logos. An embarrassment of riches.
- A literal audio recording of a song in praise of the project. The call is coming from inside the house, people!
The Babel team has a well documented history of their priorities[2], emphasizing the need for a modular approach that has no exit strategy[3]. At best, we have a case of accidental entrenchment and long term dependence on Babel brewing as early as 2017![4] At worst, we have a group of aspiring Carmack-wannabes looking for their big break into the incestuous and lucrative class of technorati standards committees.
Don’t believe me? It doesn’t take an inner-join on the TC39 roster and the Babel maintainers to see our own version of regulatory capture forming right before our eyes.
Compare this infinite circus to the humble but popular Normalize.css, which has the express purpose to stop existing.[5]
If the Babel team wants to raise some money, they can start by putting a plan together that would codify an exit strategy. It’s certainly more noble than their current plan of barnacling onto every NPM package…
That is a wonderful question and is exactly the sort of thing that should be on the Babel website. You’ll find no such explanation or even a summary of trade offs that come with adding Babel to your app.
It’s assumed that if you want to support older browsers, the next logical step is to add Babel… forever. An incredible trick happens here, where the developer thinks they added the magic package which only bears a “tax” on the poor sap who’s stuck on Internet Explorer, presumably running eye-watering amounts of polyfills within the 32-bit limits of RAM.
In my opinion, the Babel team should start looking for a strategy that aligns with a world of evergreen browsers, and untangle the web of feature polyfills from syntax transformations.
It’s also not too wild to think that Babel is a symptom of a larger problem. JavaScript lacks a versioning mechanism for when new features are added. A more self-aware Babel could use its connections with the TC39 team to do what all successful JavaScript libraries do: get absorbed into the standard, à la jQuery and CoffeeScript.
Alternatively, reconsider the velocity that Babel introduces to the JavaScript ecosystem. These tools might actually be perpetuating their own existence by making new features so readily accessible.
With evergreen browsers it's not only about features, but also about security. If you can't update your browser to have arrow functions, you might have security issues. So it is in everyone's best interest that old browsers have yet another reason to be updated.
Also it can be argued that Babel gave IE11 a huge afterlife. IE11 support should have been dropped by the javascript community much sooner, and IE11 should have been used only for legacy apps, as Microsoft tried to. But tools like Babel made it possible for managers to say "c'mon just use Babel".
Also, while it is convenient to have Javascript features before they're available in Browsers, in practice the wait time is not as long as it was. And having a tool removes pressure (including internal pressure) for browsers companies to be fast.
And also: normally, projects using Babel have to pull hundreds of babel-related packages. The biggest complaints you see here on Hacker News about the Javascript ecosystem center around the massive number of packages. Well, guess what: Babel by default on create-react-app needs 133 packages.
The gist of the comment is that scope creep is expensive & mutates the original mission of the organization. Organizations tend to self-perpetuate via scope creep.
IMO, ESBuild is the best option these days. It’s not as magic or batteries included as Webpack, but there’s very little kept secret from you during the compilation process. It’s fast too!
Another tricky alternative is to just use TypeScript’s compiler. Combined with the new import maps spec, you can target most modern browsers and skip bundling altogether.
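As a sketch (the module name and CDN URL here are placeholders), an import map lets the browser resolve the bare specifiers that `tsc` leaves in your output:

```html
<script type="importmap">
  {
    "imports": {
      "lit": "https://cdn.example.com/lit@3/index.js"
    }
  }
</script>
<script type="module" src="/dist/main.js"></script>
```

With `"module": "esnext"` in your tsconfig, the compiled files keep their `import` statements as-is and the browser handles the resolution.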
I'd actually recommend Vite over esbuild directly. It uses esbuild under the hood for dependency pre-bundling and minification (production builds go through Rollup), and during development it serves your code via the native import syntax, with some optimisations to bundle dependencies together somewhat. This gives you a really quick development build, and then a well-optimised but still pretty quick production build.
But I think the real benefit is that it's much easier to get right than Webpack ever was. You don't need to start by configuring a thousand different plugins, rather, you just write an index.html file, reference the root CSS and JS file from there, and then it can figure out the rest, including all the optimisation/minification details, applying Babel, autoprefixing, using browserslist, etc. If it doesn't recognise a file (e.g. because you're importing a .vue or .svelte file) then it'll recommend the right plugin to parse that syntax, and then it's just a case of adding the plugin to a config file and it'll work.
I'm a big fan of Parcel, which is a very similar tool for zero-configuration builds, but Vite feels significantly more polished.
I agree - I love esbuild, but Vite is great and will generally give you what you want and more with minimal hassle. The development server and hot reloading are excellent.
I did recently find one thing that didn’t work out of the box in Vite, though. I needed to write a web worker, but Vite didn’t package it into a single .js file, so I had to call esbuild directly to do that.
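For anyone hitting the same wall, the workaround amounts to a one-off esbuild invocation for the worker entry point (the paths here are illustrative):

```shell
# Bundle the worker into a single self-contained file.
esbuild src/worker.ts --bundle --minify --outfile=dist/worker.js
```

The resulting file can then be loaded with `new Worker("/dist/worker.js")` as usual.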