Modern PHP is great, but it still has the same inconsistent API. I wrote PHP for many years and had to check the manual constantly because functions follow different conventions for argument order.
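For instance (these are real signatures from the manual; the variable names are just for illustration), even the needle/haystack order flips between closely related functions:

    // haystack first here...
    strpos($haystack, $needle);
    // ...but needle first here
    in_array($needle, $haystack);

    // and the callback/array order flips too
    array_map($callback, $array);
    array_filter($array, $callback);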
It does have a very large and inconsistent "stdlib", I agree. On the other hand, you very rarely have to look for 3rd-party libs. Is node.js better because you have to trawl through npm for highly variable quality (and equally inconsistent) add-ons that typically pull in a spiderweb of other dependencies?
I've worked on PHP pretty much non-stop since PHP/FI, and while this is a good try, it goes off the rails when it tells people they need a DI library and a container, and dictates how routes SHOULD work, which is pure hubris. I won't be pointing to this as a good example.
I’m not sure the barrier to entry is still so low.
Someone starting now would be splitting their attention between “modern” concepts like those presented in the OP and the hacky/broken standard functions and quirks that have persisted through the revisions.
I’d guess Ruby or Python would be easier for someone starting from scratch.
APCu is no longer a bytecode cache (if that's what you mean by userspace?); it's exclusively an in-memory key:value store.
FastCGI sometimes scales better, but it also adds latency because of the extra proxy layer, whereas with mod_php you sacrifice memory for more "direct" access.
Yes, that's what I mean by userspace. It's a hedge against startup time and the separate process pools that FastCGI uses; PHP's answer to node.js's persistent processes.
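For anyone who hasn't used it, the user cache side of APCu is just a couple of functions. Rough sketch (apcu_fetch/apcu_store are the real API; the cached value and load_config() are made up):

    // try shared memory first, fall back to the expensive lookup
    $config = apcu_fetch('site:config', $hit);
    if (!$hit) {
        $config = load_config();                  // hypothetical slow call
        apcu_store('site:config', $config, 300);  // TTL in seconds
    }

Because the worker processes stay alive between requests, that value survives across requests on the same box, which is exactly the hedge against startup cost.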
- A sane, easy-to-use package/dependency manager (Composer) that just works, every time (none of node/python/golang really have that, for entirely different reasons).
- Large stdlib that, while inconsistent, frees you from searching packages for everything (vs nodejs).
- I'm sure some will disagree, but php's documentation is top-notch: easy to navigate, with nice completion from user examples (you could say it is a mini Stack Overflow by itself). Golang's docs are good, python's are good but hard to navigate, node's are typically worse.
- PHP's ordered, possibly hash-backed, possibly plain arrays are an abomination from a data structures point of view, but god are they handy! Golang is terrible here with its static types (fine) but lack of generics (just check how to sort a list of a non-primitive type).
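To make that last array point concrete, a rough sketch (names made up; the arrow-function syntax needs PHP 7.4+):

    // one structure acts as list, map and ordered dict at once
    $scores = ['alice' => 10, 'bob' => 7];
    $scores[] = 42;            // appends under the next integer key
    $scores['carol'] = 12;     // iteration keeps insertion order

    // sorting a "list of a non-primitive type" is one call plus a comparator
    $users = [['name' => 'bob', 'age' => 25], ['name' => 'alice', 'age' => 31]];
    usort($users, fn($a, $b) => $a['age'] <=> $b['age']);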
1. I rarely have problems with package management in Python. I know pip has flaws but it's not enough of a pain point for me to see it as an issue.
2. I'd say Python is an easy match in this regard.
3. Agreed about the docs - I still find the organisation of Python's docs a bit peculiar.
4. Handy data structures, you say? I think Python might have the edge here.
It's strange you don't mention PHP's biggest strengths: deployment and learning curve. You can begin on almost any commodity hosting with a simple text editor and no knowledge of Unix internals. The amount of extra stuff I had to learn to be productive in Python was huge at the time. It's got a bit better now with various simple deployment options, but PHP still wins out for the beginner.
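To spell out how low that barrier is, the whole "deployment" can be a single file copied to the web root (hypothetical example, PHP 7+ syntax):

    <?php
    // hello.php -- upload it next to your static files, visit /hello.php?name=you
    // No build step, no app server to configure, no process to keep alive.
    $name = htmlspecialchars($_GET['name'] ?? 'world');
    echo "<h1>Hello, $name!</h1>";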
I think the biggest pro for PHP is its simplicity. By default you don't have to care about the server: you just place some text files in a web server directory and open a website, easy. And dynamic languages tend to be more accessible for newbies too. It is also a con, as some things are just not available (e.g. multi-threading), but in general it lets you focus on the application logic and lowers the entry barrier.
Traditionally, a major pro for PHP was also its broad availability on cheap web space, but that doesn't seem to be so important nowadays.
Putting node, python and Go in the same bucket against PHP is kinda hard, as those three are already very distinct from each other ;-)
Nodejs out of the box would of course be faster than PHP on something nodejs is born to do. But let's say we needed to do a bit of CPU work on each of these requests; then nodejs might struggle compared to PHP.
Both should be able to scale well in a "create a website and return the HTML"-scenario, and it's not unlikely that a PHP setup would be faster.
In a single-page app that's mostly based on fetching data from an API, nodejs would probably be faster.
About APCu: it's damn fast and very unlikely to slow you down; other stuff will probably bottleneck you way before APCu becomes a problem.
Use PHP as PHP, and nodejs as nodejs.
If you really need async in PHP or want to use PHP somewhat like nodejs, then you should probably go for an extension and serve requests directly from PHP. I've had great experiences with that, serving over 500K simple HTTP req/s on a low-end server.
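The parent doesn't say which extension they used, so purely as an illustration, with the Swoole extension a persistent PHP HTTP server looks roughly like this:

    <?php
    // The process stays resident, so there is no per-request interpreter startup.
    $server = new Swoole\Http\Server('0.0.0.0', 9501);

    $server->on('request', function ($request, $response) {
        $response->header('Content-Type', 'text/plain');
        $response->end("hello\n");
    });

    $server->start();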
But again, if max performance on a single server is so important, then you should probably go for a compiled language anyway.
But php is going to have some inherent latency and overhead from spinning up a new process for every request. If you need speed, you shouldn't use any of the three languages you just mentioned, though. These days I think it's easier to find Node.js developers; you can take a frontend developer and teach them Node.js very quickly. It's arguably just as easy to proxy from nginx as it is to set up a PHP SAPI.
It's not that bad anymore. We do have FastCGI, FPM, and also user space caches like APCu. You would have to try pretty hard to find a default setup that resulted in a process per request these days.
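For reference, the boring default these days is nginx handing *.php requests to a long-lived pool of php-fpm workers over FastCGI; something like this (socket path varies by distro):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }

The workers are reused between requests, which is why "process per request" hasn't described a typical setup for years.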
It scales very well on a single host, which has limits, but limits that exceed what most projects will ever need. And things like redis take it even further. If you're pushing the limits of php, you probably have a nice-to-have problem tied to lots of incoming revenue. Similar to FB and HHVM.
I agree there's a scaling issue, but nobody is doing process per request anymore. Process pooling is a decade old or more.
Some people now boot an OS to serve each request, and reinstall the OS to deploy new changes. It's funny, even the objections to PHP haven't kept up with the trendy software practices.
I guess the objection to "spinning up a process" is actually a problem with slow PHP initialisation, though? A dinky server can spin up many thousands of processes a second. I'm pretty sure OS process overhead is nothing compared to whatever PHP or Rails or whatever else puts into your critical path, but the language/library/parsing requirements could kill you. (Though people still make this objection to CGI itself, not just PHP...)
The php "maybe process per request, if I'm being dumb, but probably not", vs AWS lambda "business as usual, here is a whole new os instance" is pretty amusing.
Lambda surely doesn't actually boot a new OS for every single request. I thought it stuck around for a set amount of time and you could even ping it to keep it warm?
There are interesting projects such as https://github.com/php-pm/php-pm, which sets up a pool of 'warmed' workers to speed this up. I've never used that particular project in production, but the approach seems reasonable; many of us are probably already handling background jobs using a similar architecture.
I've been using php-fpm for all production services since 2012 (mainly with nginx, but also with apache) and have never experienced any problems, so I'd highly recommend trying it out.
php-pm is very different from fpm. Imagine running a heavy framework like Laravel, where it can take 100 ms to bootstrap a request; why not keep a pool of already-bootstrapped workers and delegate each request to one of them? That's php-pm.
To me, it's no worse than any other dynamically typed language now. And arguably better than similar peers because the barrier to entry is so low.
FastCGI and user space caching like apcu mitigate the old "one process per request" complaint.
It's a true workhorse. Sure, node.js and Golang have advantages, but so does modern php.