This sounds like Engineering, but be aware that in mechanical engineering the requirements are well-defined for materials and load. Consider the famous bridge resonance problem and you will see why wind resonance is now a required subject for structural engineering.
ME has an advantage over CSE in this regard, because in CS the requirements are usually very poorly stated. About the best and most rigorous requirements you get are from mathematicians, but most devs and managers nowadays certainly view that as wasting resources.
Not to put too fine a point on it, I always like to turn to Alan Kay's comparison: TCP/IP vs the web. When was the last time any human technology scaled as well and as invisibly as TCP/IP? It's like the air we breathe; we rely on it without even thinking about it. The web, in contrast, was the work of rank amateurs.
Alas, mathematics is really the "good enough" standard we in CS should strive for, just like physics is the "good enough" standard behind ME and EE. Unfortunately, as CS opened to the mainstream, I think a deep fear of mathematics led us to view this as "over engineering" even when it wasn't. The result is that the majority of the web is woefully underengineered, requiring far more money and time for inferior products.
We know they are inferior because even the simplest GUI application has more consistency than a web variation of it. And that's what marketing constantly compares things to when they can't understand why the web sucks as much as it does.
"Good Enough"? Please! For the last 20 years we haven't even come close!
> Alas, mathematics is really the "good enough" standard we in CS should strive for just like physics is the "good enough" standard behind ME and EE. Unfortunately as CS opened to the mainstream, I think a deep fear of mathematics led us to view this as "over engineering" even when it wasn't.
As I have stated multiple times in the past, I think the crux of the problem is that software is an immature field that needs to stratify into a proper engineering discipline as it matures. Computer science should be the "good enough" standard behind software engineering. "Computer scientists" should not be the ones actually implementing software systems any more than physicists should be the ones designing cam shafts or laying out circuit boards.
The opening to the mainstream you refer to illustrates the problem. The people in the mainstream should not be studying computer science, and what they practice should not be called such. They are the engineers, technicians, and mechanics of software; they are not the physicists. Forcing all of these strata into the same bucket is doing more harm than good at this point and is likely hampering the field's drive to mature.
(my bias as a C# dev will show here)
Most likely a professional association, with a test you have to pass to call yourself a software engineer: demonstrating basic competency in a couple of paradigms and a rudimentary knowledge of patterns and practices.
It's 2015; there's no reason anyone should be writing Big Object Oriented Code without practicing dependency injection, basic mocking and testing, and other modern development principles. And yet here we are, with millions to billions of lines of terrible new code written every year.
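To make that concrete, here's a minimal sketch of dependency injection plus a hand-rolled mock (in TypeScript rather than C#, and the PaymentGateway/OrderService names are made up purely for illustration):

```typescript
// Hypothetical example: the dependency is an interface, not a concrete class.
interface PaymentGateway {
  charge(amountCents: number): Promise<boolean>;
}

class OrderService {
  // The gateway is injected, so the class never constructs its own dependency.
  constructor(private readonly gateway: PaymentGateway) {}

  async placeOrder(amountCents: number): Promise<string> {
    const ok = await this.gateway.charge(amountCents);
    return ok ? "confirmed" : "declined";
  }
}

// In a test, a hand-rolled fake stands in for the real gateway.
const fakeGateway: PaymentGateway = {
  charge: async () => true,
};

new OrderService(fakeGateway)
  .placeOrder(1999)
  .then((status) => console.log(status)); // "confirmed"
```

The point isn't any particular framework; it's that the dependency arrives through the constructor, so a test can swap in a fake without touching the class under test.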
It's interesting that you mention TCP/IP, because that's the most famous example of a technology that "works in practice, but not in theory". Take a look at Van Jacobson's presentation on the history of ARPANET:
Packet switching was "utter heresy" (20m in) when it was invented. It "wasn't a network, it was an inefficient way to use an existing network". And it almost collapsed 25 years after it was invented; the presenter is famous for inventing the modern TCP/IP congestion control that saved the Internet [1]. The TCP congestion-control algorithm has been redesigned several times since [2]. It works not because it was designed well, but because it wasn't so much designed as evolved over many years, with many contributions from people devoted to keeping it working.
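For anyone curious, the heart of that fix is additive-increase/multiplicative-decrease (AIMD): grow the congestion window while things work, halve it on loss. A toy sketch of just that idea (the real algorithm also has slow start, timeouts, and plenty more):

```typescript
// Toy AIMD loop: the congestion window (in segments) grows by one per
// round trip and is halved whenever a loss is detected.
function aimdStep(cwnd: number, lossDetected: boolean): number {
  return lossDetected ? Math.max(1, Math.floor(cwnd / 2)) : cwnd + 1;
}

// Simulate a few round trips, with a loss every 8th round.
let cwnd = 1;
for (let rtt = 1; rtt <= 20; rtt++) {
  cwnd = aimdStep(cwnd, rtt % 8 === 0);
  console.log(`rtt ${rtt}: cwnd = ${cwnd}`);
}
```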
Alan Kay is usually who I think of for great ideas that "work in theory, but not in practice". He's done some crucially important work in OOP, programming languages, and GUIs. But note that we don't actually use Smalltalk; instead we got C++ and Java. Nor do we use Dynabooks and Altos; instead, we got Microsoft Windows.
Good points. We might be focusing on different evidence for good design. Certainly a system has to be tested -- the theory is not enough. But the parts of TCP that were designed up front include things like the ability to carry arbitrary payloads, including itself! This is what makes SSH and VPNs possible. By contrast, many web protocols break when tunneling: SOAP within SOAP.
Also, for those unfamiliar with the actual process of protocol development back in those days: it was a wire protocol, which meant formal modeling and testing. Sure, that doesn't catch everything, but the web is far less formal. For example, the W3C originally said it wasn't going to provide an XML parser reference implementation because any graduate student should be able to code it up in two weeks. WTH?! While I don't doubt that is true, in practice it has meant that dozens of slightly different parsers were written, leading to hundreds of slightly different incompatibilities. Anyone who has had to integrate two different XML stacks will know what I mean.
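A made-up but representative illustration of why a reference implementation matters: the two fragments below encode the same XML infoset (attribute order and namespace prefixes aren't significant), yet any integration that compares the serialized bytes will treat them as different. The urn:example:shop namespace is hypothetical.

```typescript
// Two serializations of the same logical document: the attribute order and
// the namespace prefix differ, and neither is significant in XML.
const stackA = `<ns1:order xmlns:ns1="urn:example:shop" id="42" currency="USD"/>`;
const stackB = `<shop:order xmlns:shop="urn:example:shop" currency="USD" id="42"/>`;

// A naive integration that diffs the wire bytes sees a mismatch anyway.
console.log(stackA === stackB); // false, even though the infoset is identical
```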
I use Ruby, which was inspired by Smalltalk; modern Java is also becoming much more functional.
In some ways, it has taken the larger community 20 years to understand Kay's vision. Also, he always said that the systems he worked on were prototypes -- he's commented before that he fully expected real-world systems to have surpassed his long ago.
But now we have Ruby, Node and Rust. Even Java and Spring.io have dramatically reshaped things towards a "Smalltalkish" future. So I still put a lot of weight behind some of Kay's observations of the industry.
The web might have been created by amateurs, but it works incredibly well. Sure, we have browser incompatibilities and whatnot, but this is an effect of having multiple independent implementations of the standards, which is part of what makes the web work in the first place.
No, it doesn't. Having multiple independent implementations is a PITA, but that's why you have testing & validation labs along with standards. Take Windows graphics driver labs: they test for pixel-perfect compliance of output across hundreds of vendor implementations. Contrast that with the web, where it took a separate group outside the W3C to embarrass browsers with the Acid2 and Acid3 tests. Only now do separate browsers look a lot closer in output.
Devs are trying to fix these ecosystems: why does React use a virtual DOM? Why do we need CSS resets? Why do we need JS shims and polyfills? Because it's the only way to come close to normalizing the platform.
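For anyone who hasn't written one, a polyfill is just feature detection plus a patch; a minimal sketch (simplified: the real Array.prototype.includes polyfill also handles NaN and sparse arrays):

```typescript
// Classic polyfill shape: detect the missing feature, then patch it in so
// every engine exposes the same API surface.
const arrayProto = Array.prototype as any;

if (typeof arrayProto.includes !== "function") {
  arrayProto.includes = function (this: unknown[], item: unknown): boolean {
    return this.indexOf(item) !== -1; // simplified: the real polyfill also handles NaN
  };
}

console.log(([1, 2, 3] as any).includes(2)); // true, whether native or patched
```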
But have you ever wondered why you've come to expect that no two browsers display the same image? PostScript met that bar, and it's just as old as the web. Why didn't the W3C base the web on device-independent coordinates instead of this confusing and unpredictable layering of partial scalars and "angle subtended by a pixel on a 96dpi surface at a nominal arm's length from the surface" crap? No one could have built a reference implementation off those requirements, much less a consistent verification & validation suite.
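To show what I mean by device independence: PostScript's default user space is 1/72 of an inch per unit, so mapping to any device is one multiplication by that device's resolution, while CSS pins its absolute units to a 96-per-inch reference pixel no matter what the screen actually is. A rough sketch of both conversions (the 96dpi anchor is the CSS definition; the device DPI is whatever hardware you assume):

```typescript
// PostScript user space: 1 unit = 1/72 inch, regardless of the device.
function postscriptPointsToDevicePixels(points: number, deviceDpi: number): number {
  return (points / 72) * deviceDpi;
}

// CSS absolute units: 1in is defined as 96 CSS px, so 1pt = 96/72 px; how many
// *device* pixels a CSS px maps to is left to the browser/OS scaling.
function cssPointsToCssPixels(points: number): number {
  return (points / 72) * 96;
}

console.log(postscriptPointsToDevicePixels(72, 300)); // 300 device pixels on a 300dpi printer
console.log(cssPointsToCssPixels(72));                // always 96 CSS px, whatever the screen
```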
And no offense to TBL, but HTTP didn't even survive first contact with Netscape's vision of shopping carts. Cookies? An elegant solution? Or simply a new hell of tunneling client/server state over a supposedly stateless protocol? HTTPS everywhere requires long-lived sessions as its basis?!? No wonder people are heading towards WebSockets, etc. Web apps are client/server apps -- HTTP was always grossly misapplied to them.
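A minimal sketch of that tunneling, using Node's built-in http module (the shopping-cart bit is made up, just to mirror the example): every response plants a session id in a cookie, and the server keeps per-client state in memory, a stateful session riding on a nominally stateless protocol.

```typescript
import * as http from "http";
import { randomUUID } from "crypto";

const carts = new Map<string, string[]>(); // session id -> items, held server-side

http.createServer((req, res) => {
  // Recover the session id the client tunnels back to us on every request...
  const match = /sid=([^;]+)/.exec(req.headers.cookie ?? "");
  const sid = match?.[1] ?? randomUUID();
  // ...and re-plant it, so the "stateless" protocol keeps carrying our state.
  res.setHeader("Set-Cookie", `sid=${sid}; HttpOnly`);

  const cart = carts.get(sid) ?? [];
  cart.push("widget"); // pretend every request adds an item
  carts.set(sid, cart);

  res.end(`cart ${sid} has ${cart.length} item(s)\n`);
}).listen(8080);
```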
Webdev is hard, not because I'm building beautiful bridges in the sky that are "good enough" poetic balances of constraints while coming in on time and on budget... Webdev is hard because of all the underlying assumptions I constantly have to check and recheck, because I can't rely on them the way an ME would (or hell, even a backend J2EE engineer would). This is why some of us lament that people don't know the stack all the way down: we have to in order to solve real problems. Every abstraction leaks, but hell, web abstractions are flipping sieves!
No, the thing that "works incredibly well" is not the web, but what's under it, which lets us make so very many mistakes and yet keep on trucking.
Regarding PostScript vs the web: I'm only talking about device independence, specifically in the case of pixel-perfect layouts. I am not talking about the layout constraint problem, which is extremely challenging no matter the technology. But layout constraints depend on a solid notion of a coordinate system, which the web lacks. Device independence gives you that in PostScript and SVG.
Besides, Windows faces a similar problem of multiple resolutions and devices. How do they do V&V? They set the resolutions the same for certain tests! Even if you do this for browsers, they can't pass the test. Yes, it would be nice if we could have device-independent layout constraints as well, but even the simplest, most constrained test not involving layouts fails. At least now it's close. Before the Acid tests it wasn't even close.
If webdevs can't even rely on their browser coordinate system in the most heavily constrained case, how can they hope to trust it when they try to solve challenging problems of dynamic layouts across multiple resolutions?
Two browsers cannot display the same image if they, say, use screens of different sizes and dimensions. This is the difficult problem that HTML and CSS try to solve, so that the same web page is actually readable both on a desktop screen and on a mobile device.
PostScript does not even attempt to solve this problem, so it would never work on the web. Unless you mandate that everyone have screens with the same dimensions and DPI.
>The web might have been created by amateurs, but it works incredibly well.
The fact that it was created by amateurs helped make it work incredibly well for amateurs.
The professional CS alternatives - Gopher and the like - were not successful in comparison.
There's a thing in CS where solutions become so clever they become stupid - because the goal stops being task-oriented usefulness, and becomes ideological and formal purity.
It's the process that turns a plain hammer into an atomic pile driver you can only control remotely from the moon by sending it messages using catapulted owls in space suits. It's better at hammering in some abstract sense, but maybe not so much for hitting nails.
Abstraction without contextual insight is one of the most powerful and destructive of all anti-patterns.
On the web the professionals took over from the amateurs, and now web technology is another example of design-by-committee.
It still works surprisingly well because interplanetary owls are kind of fun, maybe, for some people. But is it ever a mess of half-solved problems generating recursive epicycles of complication.
In any case, Tim Berners-Lee wasn't an amateur. He was a computer scientist who had experience with information systems before creating the Web. But his design was clever AND simple, and accessible for amateurs to use.
If I remember the context of the "created by amateurs" quote, it was really Alan Kay complaining that the web wasn't designed by OO principles. He wanted the web to consist of objects encapsulating their own presentation logic, rather than documents in declarative languages. So basically something like Java applets instead of web pages.
While OO is great for software design, I believe declarative documents have proven to be much better as a foundation for a decentralized information system. Think about how to implement Google, accessibility, readability.com and so on in a web of encapsulated objects. And it is not by accident that TBL chose declarative languages over objects; he actually thought about it: http://www.w3.org/2001/tag/doc/leastPower-2006-01-23.html
This is an example of the contextual insight you talk about, and which I believe Kay lacks in this case.
EDIT: The interview is here: http://www.drdobbs.com/article/print?articleId=240003442&sit... It is not totally clear what he is arguing, but it seems he is suggesting that the only job of the browser should be to execute arbitrary code safely, and that any actual features beyond this should be provided by the objects. So the browser should really be a VM or a mini operating system executing object code in a safe sandbox. This seems to be the philosophical opposite of TBL's principle of least power.
Honestly, it seems like Kay is ranting a lot in the interview. When something like the web is not designed the way he would have done it, the only reason he can imagine is that the designers must have been ignorant amateurs.
So I agree that TBL himself did a great job designing HTML for exactly what he conceived: distributed documentation. It was not, however, a system designed for web applications. Almost immediately after it gained popularity, people wanted to represent shopping carts. Even in places where Roy Fielding's thesis on REST is well understood and applied, it is very difficult to turn documents into applications without implicit client/server state.
Just because TBL is brilliant doesn't mean his work can't be misapplied. Of course, I also blame the people who thought they could deliver thousands of existing client/server applications for a fraction of the cost: things like shopping carts and online banking. True, it drove the web to what it is today, but at great cost.
Here is another thought: if the web is so great, why are so many companies creating their own tablet/mobile app experiences instead? It can't be because it requires less dev knowledge and effort?
And that applies not only to engineers, but to managers - i.e., task definers. Managers aren't interested in asking engineers to build something that will withstand the test of time unless they are sure that time will actually be needed - and with the limited information (and high speed of change) you have today, managers tend to think short term. So the whole civilization works on the principle of a dog chasing a rabbit running from left to right: instead of predicting where the rabbit will be in 10 seconds, let's run straight towards the rabbit and update the course half a second later. We win, since rabbits don't usually run predictably along a straight line.
It sounds like a good task definer (or engineer) should be able to know when thinking a bit further ahead matters and when it doesn't.
For instance, you probably need to take a bit more care with defining your core database structure, than you do with positioning a button 20px to the left or right.
> take a bit more care with defining your core database structure, than you do with positioning a button 20px to the left or right.
Not when you are trying to prototype the UX - and don't care about the backend. There's no rule of thumb - it's all very subjective and intuition-based. That's where experience comes in, and no amount of book studying will help you.
> Anyone can build a bridge that stands up, but only an engineer can build a bridge that barely stands up.
That's a nice quip but once you cross spans of 5 meters or so you'll find out that that is a lot harder than it seems, especially for non-trivial loads.
The joke, of course, refers to the fact that building any structure that has to be both safe and economical is hard. But please don't make it seem as if building bridges is easy; it's anything but.
Again, sounds nice, but an ME wouldn't be so cocky if he had to smelt his own materials and quality-grade them before even starting to build the bridge. MEs get the benefit of an older, more mature industry that surrounds them and enables them to make rational decisions. See how quickly that goes to hell when you get substandard parts from a sketchy supplier. Then you can watch the schedules and budgets go to hell too.
Multiply this by 100 and you are just about in the same situation as web developers. Maybe in 100 years an ecosystem will have grown up to support us? I can only hope. For now, I can't even rely on XML being marshalled the same way unless I control both the client and the server.
Good Enough is our main design goal. Anything more than good enough means you're wasting resources.