IMHO the future should be one where there is no real difference between computers, and service providers simply offer always-on, always-connected devices.
Is there any reason why there's a "server" or "serverless servers"? Why isn't everything just apps talking to each other? I think not; there's no hard technical reason, it's just how things evolved: early on, connectivity and device power were drastically different between the devices users had and the devices that needed to process data and stay on forever.
These days, JavaScript code can be the engine on a desktop machine, in a mobile app, on an HTML client page, and on a backend server. Actually, with WASM and some other tools, any language can do the same. It's even possible for everything to run in a browser, which is available on every platform, and all of that at decent speeds, because our computers are all supercomputers and the genuinely demanding high-performance computing is happening on GPUs.
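To make the "same code everywhere" point concrete, here's a minimal sketch (the function and data shapes are made up for illustration) of a plain JavaScript module that runs unchanged in a browser, in Node, or inside a mobile webview:

    // transform.js - no platform-specific APIs, so the same file runs
    // in a browser <script type="module">, in Node, or in a webview.
    export function totalsByCustomer(orders) {
      // Pure data transformation: sum order amounts per customer.
      const totals = new Map();
      for (const { customer, amount } of orders) {
        totals.set(customer, (totals.get(customer) ?? 0) + amount);
      }
      return [...totals.entries()].map(([customer, total]) => ({ customer, total }));
    }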
"Is there any reason why there's a "server" or "serverless servers"?"
Yes: you cannot run compute processes without compute. A server is where that compute happens. Serverless is the concept that your function will run on "some" compute that you don't have to manage. It does, however, still require compute. Running that function on your mobile device, a server, or a fridge is all within the realm of the possible. In the end, you still need compute to execute the code.
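For concreteness, a serverless function is usually nothing more than a handler like this (an AWS Lambda-style Node handler is shown; the event shape is an assumption):

    // index.mjs - the platform, not you, decides which machine
    // actually executes this and when; you only ship the function.
    export const handler = async (event) => {
      const name = event.name ?? 'world';
      return { statusCode: 200, body: `hello, ${name}` };
    };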
Decentralized compute (running on your devices or someone else's) is what serverless is (only, it's wrapped behind a service offering so it's not completely decentralized).
Whatever it is, just drop it and use computers that can execute some high-level language like JavaScript to perform directly useful actions: storing information in an embedded DB like SQLite, reading information, transforming information, transmitting information, and displaying information. That's stuff that can be done with huge performance even on a five-year-old mobile phone; a sketch of what it looks like follows.
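Here's that store/read/transform cycle in JavaScript, sketched with the better-sqlite3 npm package (the schema is made up for illustration):

    // npm install better-sqlite3
    import Database from 'better-sqlite3';

    const db = new Database('app.db'); // embedded DB: a single local file

    db.exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');

    // store information
    db.prepare('INSERT INTO notes (body) VALUES (?)').run('hello from an old phone');

    // read and transform information
    const rows = db.prepare('SELECT id, body FROM notes').all();
    console.log(rows.map(r => ({ ...r, body: r.body.toUpperCase() })));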
So all that "serverless" stuff is basically that, but it's using traditional server software behind the scenes to provide maintenance-free interfaces to do all the things I mentioned. Another aspect is that client devices may connect to server resources and directly manipulate data without any intermediary code on the "server", and you still don't need specialised hardware to handle it; it's handled cryptographically, using algorithms that any computer can run.
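That "handled cryptographically" part is often just an HMAC signature that any device can compute; here's a rough sketch using Node's built-in crypto (the grant format is invented; real schemes like S3 presigned URLs are more elaborate):

    import { createHmac, timingSafeEqual } from 'node:crypto';

    const SECRET = 'shared-secret'; // known to the storage service and the issuer

    const sign = (payload) => createHmac('sha256', SECRET).update(payload).digest('hex');

    // Issuer grants a client direct write access to one record until a deadline:
    const grant = 'write:notes/42:expires=1767225600';
    const token = sign(grant);

    // The storage service verifies the grant without any intermediary app code:
    function verify(payload, receivedToken) {
      const expected = Buffer.from(sign(payload), 'hex');
      const received = Buffer.from(receivedToken, 'hex');
      return expected.length === received.length && timingSafeEqual(expected, received);
    }
    console.log(verify(grant, token)); // true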
The problem with it is that it's proprietary, non-portable software that locks you in. Instead of that, you could run the exact same software on every computer (in the datacenter, at the company building, in the hands of people, etc.). Bottlenecks occur in certain situations, so you don't store all your clients' information on every machine, and the problem is solved. You run the same platform everywhere, but each machine operates with an algorithm suitable for its role.
Modern consumer computers are beasts; they are capable of processing huge amounts of data. It's common for handheld devices to load megabytes of JavaScript, compile it, render graphics with it, handle inputs, and send data tens of times per second to multiple servers as the user is interacting with it. I don't believe that a device capable of doing that will have a hard time storing and fetching from an SQL database with a few thousand rows.
You just described one tier of the n-tier architecture. Yes, modern compute devices are fast. However, those devices that produce that data have to send that data… somewhere back to the company that cares about it. In your model, they would just write it to the company’s database? All 1 billion of them? C’mon. Even if you had a decentralized network capable of it, you don’t have a compute device in the world capable of handling that amount of traffic. Even if each device processed a chunk, and you had an efficient network to coordinate execution operations, you would still need to get that data to its “caller”, which would be bombarded with interface calls exceeding its sockets.
So while I appreciate the lively discussion about how awesome modern consumer hardware is, it’s still only the edge of a vastly bigger compute structure.
Fun fact: most companies don't have 1 billion customers. Most don't even have 1000. They can have large data due to the subjects they work on, but that data tends to be accessed sparsely (e.g. a SaaS that provides KYC products to 10 regional banks).
Google, Netflix, and the few other exceptions should probably stick with the traditional stuff for now, but there's a huge number of companies that just don't need those hugely scalable systems, and they sacrifice flexibility and/or spend resources on problems they don't have.
Central server architecture is dominant because it’s easy. I can buy an IP address within a data center and it will be routable. The lack of IPv4 addresses made giving every client on the internet a unique IP cost-prohibitive for ISPs, so we created NAT. The majority of clients, which only connect to hosts, are behind at least one NAT, making them very difficult to address uniquely. The client must initiate a connection for an entry to be added to the NAT table and allow outside traffic into the network. Furthermore, more devices are mobile or on very dynamic IP systems that cause them to hop among many different IP addresses throughout the day, or even within an hour. Trying to establish a connection to a device that has no static IP address is a problem with no real elegant solution; it always defers to some known authority: the central server.
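That’s also why even server “push” to a client is usually the client dialing out first: a browser behind NAT opens a long-lived connection, and only then can the server send to it. A minimal sketch (the URL is a placeholder):

    // Because NAT only admits replies to connections the client started,
    // the standard workaround is a client-initiated, long-lived connection.
    const ws = new WebSocket('wss://example.com/updates');

    ws.addEventListener('open', () => {
      // The client dialed out, so a NAT table entry now exists and the
      // server can "push" messages back over this same connection.
      ws.send(JSON.stringify({ subscribe: 'orders' }));
    });

    ws.addEventListener('message', (event) => {
      console.log('server-initiated update:', event.data);
    });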
Even if we were to solve this ad-hoc, client-to-client connection problem once and for all, P2P communication requires a level of trust that many applications simply cannot tolerate. Game servers exist to validate client input, register actions, and distribute them to the players in the game; banking apps verify you are who you say you are before allowing money to be viewed or moved; etc.
I don’t see a future where we fully leave the world of central servers behind, but I do see one where we value P2P more and create public routing systems that enable it more freely.
The network architecture is a valid concern, but IPv6 is a thing, and AFAIK it solves a lot of these problems. With the scarcity of IPv4 addresses and the ubiquity of connected devices, the push finally appears to be here, as hosting providers are ramping up the cost of IPv4.
Besides, even if the client-server architecture is to remain, I expect to see significant simplification of deployments. By that, I mean server apps becoming as easy to run and manage as WhatsApp on an iPhone.
No more complexities around running Python and all its libraries on Linux in some datacenter. Instead, it should be possible to have an app that just runs and takes care of some high-level code, possibly generated by AI.
For example, maybe in the near future we will be able to tell some LLM to create an app that does something useful for us (e.g. track orders on Shopify and send a personalised questionnaire about the delivery), deploy it anywhere (own server at home, Amazon, DO, etc.), and never think about the complexities of deployment, dealing only with the value-added part of the process.