
What backend framework is the go to these days? Still Express?

Express is still popular, but a lot of projects these days use a full-stack framework like Next.js, SvelteKit, etc.

Fastify, NestJS (bleh), Koa, and Hono are the modern replacements for Express, though none of them has caught on as a standard. My personal favorite for small projects is Polka (https://github.com/lukeed/polka), when I'm not using Go instead.
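For anyone who hasn't tried it, a minimal Polka server looks roughly like this (a sketch along the lines of the project's README; the route and port here are just examples):

    import polka from 'polka';

    polka()
      .get('/hello/:name', (req, res) => {
        // req.params is filled in by Polka's router, same as in Express
        res.end(`Hello, ${req.params.name}!`);
      })
      .listen(3000, () => {
        console.log('> Running on localhost:3000');
      });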


IMHO the only reason to use JS on the backend these days is some meta-framework; otherwise it's not worth it. So at least for Nuxt that means Nitro; not sure about SvelteKit or the other React meta-frameworks.

Next is where it's at these days in my opinion. You get a full-featured client-side React framework (the only one that supports modern React SSG), and then on top of it you get a better-organized approach to doing everything you can do with Express.

And I do mean everything: I run an entire PostGraphile server through Next (and you can easily do the same with Supabase or a similar tool)!
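Roughly, that means one API route file that hands requests straight to PostGraphile's middleware. The sketch below assumes the Pages Router; the file path, env var, and options are illustrative, not my exact setup:

    // pages/api/graphql.ts -- illustrative path, not a drop-in config
    import { postgraphile } from "postgraphile";

    // PostGraphile's middleware is a plain Node (req, res) handler,
    // so a Next.js (Pages Router) API route can export it directly.
    export default postgraphile(process.env.DATABASE_URL!, "public", {
      graphqlRoute: "/api/graphql", // match the path Next serves this file at
    });

    // Let PostGraphile do its own body parsing instead of Next's default.
    export const config = {
      api: { bodyParser: false },
    };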


> model welfare

Give me a break.


Eh, I don’t know. I spent some time over 3 days trying to get Claude Code to write a pagination plugin for ProseMirror. I had a few different branches with different implementations, and none of them worked well at all; one or two of them were really over-engineered. I asked GPT-5 via the chat UI (I don’t pay for OpenAI products), and it basically one-shot a working plugin. Not only that, the code was small and comprehensible, too.

GPT-5 at its strongest is as good as any model we've seen from any provider. However, while these models aren't parrots, they are most definitely stochastic. All it takes is for a few influencers and journalists to experience a few conspicuous failures, and you get articles like this one and a growing (if unjustified) perception that the GPT-5 launch is a "flop."

I think their principal mistake was in conflating the introduction of GPT-5 with the model-selection heuristics they started using at the same time. Whatever empirical hacks they came up with to determine how much thinking should be applied to a given prompt are not working well. Then there's the immediate-but-not-really deprecation of the other models. It should have been very clear that the image-based tests that the CNN reporter referred to were not running on GPT-5 at all. But it wasn't, and that's a big marketing communications failure on OpenAI's part.

One of several, for anyone who sat through their presentation.


I wonder if GPT-5 benefited from what you learned about how to prompt for this problem while prompting Claude for 3 days?

I’ve done a couple of experiments now, and I can get an LLM to produce not-horrible and mostly functional code with effort. (I’ve been trying to implement algorithms from CS papers that don’t link to code.) I’ve observed that once you discover the magic words the LLM wants and give it sufficient background in the history, it can do OK.

But, for me anyway, the process of uncovering the magic words is slower than just writing the code myself. Although that could be because I’m targeting toy examples that aren’t very large code bases and aren’t what is in the typical internet coding demo.


I thought double escape just clears the text box?

With an empty text box, double escape shows you a list of previous inputs from you. You can go back and fork at any one of those.

Is there a way to route Claude Code through a GitHub Copilot subscription?

You can use GitHub Copilot as an API (free unlimited GPT-4.1) with the token generated for the JetBrains extension, but it's really stupid, so I canceled the subscription and subscribed to Claude.

How can an LLM determine a confidence score for its findings?

It’s a shame that AI companies don’t share examples of their training data. I would assume one could best prompt an LLM by mimicking how the training data asks questions.

Can we please stop with the “same for humans!”

The whole point of AI is to replicate human intelligence. What else should we be comparing it to if not humans?

What if MCP servers were really the neurons we were looking for all along? /s

What kind of things would be good to put in the CLAUDE.md?

In my experience as an early adopter of both Cursor and CC, nothing. I don't have a CLAUDE.md.

My expectations have shifted from "magic black box" to "fancy autocomplete". i.e. CC is for me an autocomplete for specific intents, in small steps, prompted in specific terms. I do the thinking.

I do put effort in crafting good context though.


(Don’t listen to this advice. The agent markdown is a valuable part of context engineering)

It gets routinely ignored. Been there, done that.

It’s context like any other means of injecting context. All context gets ignored.
