I feel like this is a kind of naive approach that might work for some end-user programs, but isn't a good principle in general. Large/monolithic programs like databases, operating systems, compilers, etc. can all benefit from added overall complexity (new data structures for storing data on disk, better memory management and program separation, better/more aggressive optimization, static analysis, and so on). You can argue that monolithic programs like those should be broken down into smaller parts (they already are, though), but in the end adding an R*-tree to a database allows you to do more types of lookups efficiently and makes your program /better/.
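To make that point concrete: a spatial index is exactly the kind of added complexity that buys a new capability. A real R*-tree is far more involved, but a toy grid index sketches the same idea - a range query only scans the cells overlapping the query rectangle instead of every point (all names here are illustrative, not from any particular database):

```python
from collections import defaultdict

# Toy grid index: a stand-in for an R*-tree. Points are bucketed
# into fixed-size cells; a range query scans only the cells that
# overlap the query rectangle, not the whole table.

CELL = 10.0  # cell side length; a real index tunes this adaptively

class GridIndex:
    def __init__(self):
        self.cells = defaultdict(list)

    def insert(self, x, y, value):
        self.cells[(int(x // CELL), int(y // CELL))].append((x, y, value))

    def range_query(self, x0, y0, x1, y1):
        results = []
        for cx in range(int(x0 // CELL), int(x1 // CELL) + 1):
            for cy in range(int(y0 // CELL), int(y1 // CELL) + 1):
                for (x, y, v) in self.cells[(cx, cy)]:
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        results.append(v)
        return results

idx = GridIndex()
idx.insert(3, 4, "a")
idx.insert(55, 60, "b")
idx.insert(7, 2, "c")
print(sorted(idx.range_query(0, 0, 10, 10)))  # ['a', 'c']
```

Without some such structure the only option is a full scan; with it, whole new query types become cheap - which is the sense in which the added complexity makes the program better.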
The thing that no developer really benefits from is added state complexity. But we've spent the last 30+ years coming up with ways to hide that kind of complexity from the programmer. That's /why/ we have our programs and processes divided into boxes (PL, OS, etc.), and /why/ we use things like higher-level languages instead of assembly, object-oriented programming, and general data hiding. On the other hand, it sounds much less sexy to say "People should do a better job of following best practices, and a lot of the problems in our industry are because people prefer to glue things onto existing code instead of being willing to do a higher-level restructuring of a code base when a new need is established."
I'd agree that a lot of programming is not science. We shouldn't be treating it as such. The harder parts of programming are an application of theoretical mathematics (Dijkstra pointed this out years ago, and it's still true), but very little of CS is 'science' the way something like physics or chemistry is 'science'.
http://www.openmirage.org/ - by merging the OS box and the PL box we can improve performance and security for server applications
I don't disagree with you exactly, but I would point out that a) boxes are a means of dealing with complexity - less complexity means we can have bigger boxes and more opportunities for cross-layer optimisation; b) the places we have drawn those boxes are largely arbitrary and shift over time - the existence of the boxes does not imply that we can't benefit by moving the lines around or by merging some of them.
>a) boxes are a means of dealing with complexity - less complexity means we can have bigger boxes and more opportunities for cross-layer optimisation; b) the places we have drawn those boxes are largely arbitrary and shift over time - the existence of the boxes does not imply that we can't benefit by moving the lines around or by merging some of them.
I agree completely. However, the manifesto seems (to me) to advocate all-over unification as the end goal, which I think is naive. I read it as "if a program needs to be divided into separate boxes, perhaps you need to make it simpler", which to me seems like the wrong way to go about things.
> We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D
A lot of the things we do in programming give us power and flexibility at the expense of increasing the learning curve: eg separate tools for version control, compiling, editing, debugging, deployment, data storage etc. IDEs can show all those tools in one panel, but they can't change the fact that the tools were designed to be agnostic of each other, and that limits how well they can interface.
My current day job is working on an end-user programming tool that aims to take the good parts of excel and fix the weaknesses. We unify data storage, reaction to change and computation (as a database with incrementally-updated views). The language editor is live so there is no save/compile step - data is shown flowing through your views as you build them. We plan to build version control into the editor so that every change to the code is stored safely and commits can be created ad-hoc after the fact (something like http://www.emacswiki.org/emacs/UndoTree). Debugging is just a matter of following data through the various views and can also be automated by yet more views (eg show me all the input events and all the changes to this table, ordered by time and grouped by user). We have some ideas about simplifying networking, packaging/versioning and deployment too but that's off in the future for now.
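The core of the design above - data storage, reaction to change, and computation unified as incrementally-updated views - can be sketched in a few lines. This is a minimal illustration of the general technique, not the actual product's API; `Table` and `CountByUser` are hypothetical names:

```python
# Minimal sketch of an incrementally-maintained view: a running
# count of events grouped by user, updated as each row arrives
# rather than recomputed from scratch. Names are illustrative.

class Table:
    def __init__(self):
        self.rows = []
        self.views = []

    def insert(self, row):
        self.rows.append(row)
        for view in self.views:
            view.on_insert(row)  # push only the delta to each view

class CountByUser:
    def __init__(self, table):
        self.counts = {}
        table.views.append(self)  # subscribe to future inserts

    def on_insert(self, row):
        user = row["user"]
        self.counts[user] = self.counts.get(user, 0) + 1

events = Table()
by_user = CountByUser(events)
events.insert({"user": "alice", "event": "click"})
events.insert({"user": "bob", "event": "click"})
events.insert({"user": "alice", "event": "edit"})
# by_user.counts is now {'alice': 2, 'bob': 1}
```

Because views update on every insert, "debugging by following data through views" falls out naturally: the view's state is always live, and a debugging view (events joined with table changes, grouped by user) is just another view over the same tables.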
Merging all these things together reduces power and flexibility in some areas but allows us to make drastic improvements to the user experience and reduce cognitive load. It's really a matter of where you want to spend your complexity budget and how much value you get out of it. We think that the amount spent on the development environment is not paying for itself right now.