Express is still popular, but a lot of projects these days use a full-stack framework like Next.js, SvelteKit, etc.
Fastify, NestJS (bleh), Koa, and Hono are the modern replacements for Express, though none of them have caught on as a standard. My personal favorite for small projects is Polka (https://github.com/lukeed/polka), when I'm not using Go instead.
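For context, a minimal Polka server looks roughly like this (adapted from the project README; the route and port are arbitrary):

    // A tiny Polka server: route handlers get Node's plain req/res objects,
    // plus req.params for the matched route parameters.
    const polka = require('polka');

    polka()
      .get('/hello/:name', (req, res) => {
        res.end(`Hello, ${req.params.name}!`);
      })
      .listen(3000, () => {
        console.log('> Running on localhost:3000');
      });

It's basically Express minus the middleware ecosystem, which is exactly why it fits small projects.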
IMHO the only reason you're using JS in the backend is because of some meta-framework; otherwise it's not worth it. At least for Nuxt that's Nitro, not sure about SvelteKit or the other React meta-frameworks.
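For what it's worth, a Nitro server route in a Nuxt 3 project is just a file under server/api — a rough sketch (the route name and query handling are made up for illustration; in a Nuxt project these helpers are auto-imported, the explicit h3 import is only for completeness):

    // server/api/hello.js — a Nitro server route as used in Nuxt 3
    import { defineEventHandler, getQuery } from 'h3';

    export default defineEventHandler((event) => {
      const name = getQuery(event).name ?? 'world';
      // Returned objects are serialized as JSON responses.
      return { message: `Hello, ${name}` };
    });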
Next is where it's at these days in my opinion. You get a full-featured client-side React framework (the only one that supports modern React SSG), and then on top of it you get a better-organized approach to doing everything you can do with Express.
And I do mean everything: I run an entire PostGraphile server through Next (and you could easily do the same with Supabase or a similar tool)!
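To make the Express comparison concrete, here's a rough sketch of an Express-style endpoint as a Next.js API route (pages router). The route and data are made up; in practice you'd swap the in-memory lookup for a real data layer, or wire a connect-style middleware such as PostGraphile's into the handler:

    // pages/api/users/[id].js — an Express-style endpoint as a Next.js API route.
    // Hypothetical in-memory data standing in for a real data layer.
    const users = { '1': { id: '1', name: 'Ada' } };

    export default async function handler(req, res) {
      if (req.method !== 'GET') {
        res.setHeader('Allow', 'GET');
        return res.status(405).json({ error: 'Method not allowed' });
      }
      const user = users[req.query.id] ?? null;
      if (!user) return res.status(404).json({ error: 'Not found' });
      return res.status(200).json(user);
    }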
Eh, I don’t know. I spent some time over 3 days trying to get Claude Code to write a pagination plugin for ProseMirror. I had a few different branches with different implementations, and none of them worked well at all; one or two were really over-engineered. I asked GPT-5 via the Chat UI (I don’t pay for OpenAI products), and it basically one-shot a working plugin. Not only that, the code was small and comprehensible, too.
GPT-5 at its strongest is as good as any model we've seen from any provider. However, while these models aren't parrots, they are most definitely stochastic. All it takes is for a few influencers and journalists to experience a few conspicuous failures, and you get articles like this one and a growing (if unjustified) perception that the GPT-5 launch is a "flop."
I think their principal mistake was in conflating the introduction of GPT-5 with the model-selection heuristics they started using at the same time. Whatever empirical hacks they came up with to determine how much thinking should be applied to a given prompt are not working well. Then there's the immediate-but-not-really deprecation of the other models. It should have been very clear that the image-based tests that the CNN reporter referred to were not running on GPT-5 at all. But it wasn't, and that's a big marketing communications failure on OpenAI's part.
One of several, for anyone who sat through their presentation.
I wonder if GPT-5 benefited from what you learned about how to prompt for this problem while prompting Claude for 3 days?
I've done a couple of experiments now, and I can get an LLM to produce not-horrible, mostly functional code with effort. (I’ve been trying to implement algorithms from CS papers that don’t link to code.) I’ve observed that once you discover the magic words the LLM wants and give sufficient background in the history, it can do OK.
But, for me anyway, the process of uncovering the magic words is slower than just writing the code myself. Although that could be because I’m targeting toy examples that aren’t very large codebases and aren’t what’s in the typical internet coding demo.
You can use GitHub Copilot as an API (free, unlimited GPT-4.1) using the token generated by the JetBrains extension, but it's really stupid, so I canceled the subscription and subscribed to Claude.
It’s a shame that AI companies don’t share examples of their training data. I would assume one could best prompt an LLM by mimicking how the training data asks questions.
In my experience as an early adopter of both Cursor and CC, nothing. I don't have a CLAUDE.md.
My expectations have shifted from "magic black box" to "fancy autocomplete". I.e., CC is for me an autocomplete for specific intents, in small steps, prompted in specific terms. I do the thinking.