It's like saying "robots are replacing civil engineers". Laying asphalt is maybe 10% of the work required to commission a road. Deciding whether to build a road at all, costing it, choosing where to build it, and doing the math all still need a civil engineer.
The bulk of software engineering is the feasibility study, requirements gathering, and detailed design (architecture), and only then the implementation phase, which is where AI comes in.
Those stages are listed in order of importance. Getting the first two wrong in particular results in a high-quality, shiny white elephant at best.
The implementation phase is at most 20%, and on average about 10%, of the work required to commission reliable, maintainable software.
I’ve been using Django for several years now. It works. Some things could be more straightforward, but once they work, they’re stable. I’ll keep an open mind, though.
I had a quick run with Starlette/uvicorn: similar results to node/uws, a bit faster actually, but not by enough to be meaningful. So I would expect similar results from other modern/fast libraries.
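For context, here is a minimal sketch of the kind of echo endpoint such websocket benchmarks typically exercise; the route path and module layout are my assumptions, not the benchmark's actual code:

    # Hypothetical minimal echo server; run with: uvicorn app:app
    from starlette.applications import Starlette
    from starlette.routing import WebSocketRoute
    from starlette.websockets import WebSocket, WebSocketDisconnect

    async def echo(websocket: WebSocket) -> None:
        await websocket.accept()
        try:
            while True:
                # Echo each text frame straight back to the client.
                await websocket.send_text(await websocket.receive_text())
        except WebSocketDisconnect:
            pass

    app = Starlette(routes=[WebSocketRoute("/ws", echo)])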
I also found the "websockets" library to be a bit slower (25% or so), all with the benchmark's default settings.
The Python issue the author faced takes two minutes to identify and fix: raise the ulimits (the per-process cap on open file descriptors, one of which each connection consumes).
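A minimal sketch of the fix, assuming the file descriptor limit is indeed the bottleneck; the same effect can be had in the shell with `ulimit -n <n>` before starting the server:

    import resource

    # Raise the soft limit on open file descriptors up to the hard limit,
    # so the process can hold many concurrent connections. Raising the
    # hard limit itself requires elevated privileges.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))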
Finally, one can question the value of such benchmarks for real-world applications, especially when the supporting article is as poorly researched as others have already pointed out.
There has been funding in recent years to fix the quirks and improve performance, and the Faster CPython project has delivered good results toward those goals.
Python 3.13 will ship an experimental JIT and an experimental free-threaded (no-GIL) build. It'll likely take a couple more releases for these features to stabilize and be used throughout the stdlib and the wider ecosystem, but in a few years performance and the old quirks will likely not be an issue.
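As a rough illustration of what free threading changes (the workload and thread count here are arbitrary): CPU-bound threads like these run one at a time under the GIL, but can run in parallel across cores on the free-threaded build.

    import threading
    import time

    def burn(n: int) -> None:
        # Pure-Python busy loop: CPU-bound, never releases the GIL.
        while n:
            n -= 1

    threads = [threading.Thread(target=burn, args=(10_000_000,)) for _ in range(4)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"elapsed: {time.perf_counter() - start:.2f}s")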