Engineering hints you'll rarely hear (ibm.com)
128 points by wickedchicken on June 16, 2011 | 25 comments



>Almost an incidental issue, but the technician's training was obviously somewhat lacking.

Maybe, but if she could do that, then so could an end user. Better that it be caught as early as possible. Some places actually have testers who try to mess up assembling prototypes, seeing what could be put together wrong before it is sent to production, and again with final prototypes to see what still needs work before actual users get their hands on it.


Tactical information is of little or no value to engineering, because product development is so slow (in most hi-tech corporations, typically between one and two years) that it is inherently strategic.

We're not quite that slow, but this is true for my company, and I've never been able to put it into words. I think this might be a helpful formulation for convincing people in my organization to stop taking development direction so lightly.


it is impossible to design an idiot-proof device, because nature continues to develop better idiots

I think this is the most insightful thing I've heard in weeks. I am having a terribly difficult time adjusting to this truth. Before I hand in projects, I now sit down for an hour to think about how stupidity can ruin what I've done. If I think of anything at all, I write it down and restart the hour.


I've heard the aphorism before, but the reason I like this interpretation is that it mentions that "even technical people" make mistakes. I usually take this quote to imply that the people using your products are getting dumber, but here the implication is that as people are getting smarter they'll test your products in ways that you weren't smart enough to think of first. The products need to be safe from the smartest users, not just the dumbest users.


I prefer something half-full: As the knowledge island expands, so does the shore of ignorance.

I don't expect astrophysicist PhDs to understand finance, politics, sewing, etc.



Not sure when that quote was made, as I can't find the date on Google, but Douglas Adams had a similar one that may predate it: http://www.quotes.net/quote/34668


My own version of this, developed back in the 1980s, is: "Nothing can be made fool-proof, because fools can screw things up in ways you will never be able to imagine."


Best quote (IMO):

-snip-

Remember the movie Gremlins, featuring cute fluffy creatures that turn into evil demonic beasts if you feed them after midnight? One of the rules for owning a gremlin (in its fuzzy, lovable form) was "But the most important thing, the thing you must never forget... no matter how much they cry, no matter how much they beg, never, never feed them after midnight!"

Marketing and Sales personnel can be very similar to this. ...

-snip-


I beg to differ, here's the best quote:

-snip-

The BGA part in question lay right on the flexure line and had mechanical stress transmitted directly to its balls -- leading to premature failure.

-snip-

Okay, I guess that makes me a little immature.


See the "jellybean" definition in Tip 4 for a possible inspiration for "JavaBean".


Enterprise JavaBean, yes.


The title of this reminds me of the awful "5 Simple Rules to Reduce Belly Fat" banner ads.

That said, it's an interesting read. Especially for someone who lives in the instant-gratification world of web apps.


I trimmed the title from "The top five engineering tips..." :)


It's so refreshing to hear about fixed requirements as a separate stage. I know it's waterfall, but too many ill-defined, changing requirements lead to dread-invoking code. But we don't need to have separate requirements with software, because it can be changed much more easily and cheaply than physical products. There are other advantages to requirements, but they seem to be on the wane.

Reminds me of compiled, statically-typed languages: they used to be necessary for speed. These days, hardware is so fast that non-compiled, non-statically-typed languages like Ruby and Python are fast enough. There are other advantages to static typing, but those languages seem to be on the wane... (is that true, or just an artifact of the preferences of HN/startups?)


These are still hotly debated, and one paradigm is hardly 'waning' compared to the other. I think when you say "non-compiled, non-statically-typed" you actually mean "dynamically-dispatched." This page: http://madhadron.com/?p=191 popped up on HN a while ago and talks about this; the author describes how dynamic dispatch lets you create and mold the language as you see fit -- "language as medium."

It's not a better/worse comparison as much as it is a design decision: non dynamically-dispatched languages (C, Go) don't have their semantics modified at runtime and behave in a more "predictable" manner (and are easier to analyze and prove, which compilers take advantage of for speed). On the other hand, dynamically-dispatched languages (Smalltalk, Ruby and Lisp) let you modify the semantics of the program as it's running (see Ruby's method_missing and define_method for an example). This gives you tremendous power to craft really amazing things (Rails magic), but it also creates tremendous complexity and makes things less predictable. A side-effect of this is that a JIT is needed for programmatic analysis/optimization.
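The runtime flexibility described above (Ruby's method_missing and define_method) can be sketched with Python's closest analogues, __getattr__ and setattr; this is an illustrative toy, not code from the thread:

```python
# A rough Python analogue of Ruby's method_missing / define_method:
# __getattr__ intercepts calls to methods that don't exist, and setattr
# adds a real method to the class while the program is running.

class Recorder:
    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        # Invoked only when normal attribute lookup fails -- the
        # "missing method" hook.
        def handler(*args):
            self.calls.append((name, args))
            return name
        return handler

r = Recorder()
r.anything_at_all(1, 2)        # no such method, yet the call succeeds
print(r.calls)                 # [('anything_at_all', (1, 2))]

# define_method analogue: attach a new method at runtime.
setattr(Recorder, "greet", lambda self: "hello")
print(Recorder().greet())      # hello
```

This is exactly the kind of semantics-modified-at-runtime behavior that makes such programs harder to analyze statically.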

I tend to consider dynamic dispatch the transfinite numbers of computing -- staying within the realm of regular integers "frees your mind" from mind-blowing concepts; you're limited in what you can make but you have more brainpower focused on making that "perfect." On the other hand, full dynamic dispatch opens up so many possibilities that sometimes you worry too much about making "the perfect abstraction" than actually writing code. You may be inexperienced in working in such complexity and (like Cantor) slowly go insane :). One may look down upon the other but I tend to think they're just two different ways to solve the problem.


I'm not sure I understand your use of the term dynamic dispatch. Single dispatch is pretty run of the mill, basically everyone does it if they do OOP. Multiple dispatch used to be a lot less common but these days most interesting languages can attain equivalent functionality e.g. even C# can now achieve some semblance of multiple dispatch using 'dynamic'.

The really interesting mode of dispatch is predicate dispatch, since it generalizes pattern matching and all forms of dynamic dispatch. I haven't seen it done fully or with wide use yet. Clojure, Lisp, Haskell (views) and F# (active patterns) all have close approximations of it. But I don't think that is what you meant.
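A minimal sketch of the idea in Python (names and structure are my own invention): handlers are chosen by arbitrary boolean predicates on the argument, rather than by its type alone.

```python
# Toy predicate dispatch: try each registered (predicate, handler) pair
# in order, falling back to the decorated default.

def predicate_dispatch(default):
    registry = []

    def dispatcher(arg):
        for predicate, handler in registry:
            if predicate(arg):
                return handler(arg)
        return default(arg)

    def when(predicate):
        def register(handler):
            registry.append((predicate, handler))
            return handler
        return register

    dispatcher.when = when
    return dispatcher

@predicate_dispatch
def describe(n):
    return "just a number"

@describe.when(lambda n: n < 0)
def _(n):
    return "negative"

@describe.when(lambda n: n % 2 == 0)
def _(n):
    return "even"

print(describe(-3))   # negative
print(describe(4))    # even
print(describe(7))    # just a number
```

Dispatch-on-type is the special case where every predicate is an isinstance check; a real implementation would also need a story for ordering overlapping predicates.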

Best I can figure, based on your naming Smalltalk, Ruby, and Lisp, is that you mean powerful reflection and metaprogramming abilities in the language.

-----------------------------

You know, the fact that you mention dynamic dispatch and transfinite numbers in the same post makes you a really cool person in my book, but it's not fair to Cantor to perpetuate the myth that he went crazy trying to grapple with infinity. He struggled with depression throughout his life.

Note also that you don't even have to invoke the transfinite numbers to get some craziness. I am sure you know that the reals are pretty weird themselves - really more an indictment of nonconstructive mathematics.

...Pick a real at random, and the probability is zero that it's accessible - the probability is zero that it will ever be accessible to us as an individual mathematical object...

http://www-history.mcs.st-and.ac.uk/HistTopics/Real_numbers_...

-----------------------------

p.s. if you like transfinite numbers then you may be interested in reading about Jaina mathematics, which had a notion of sets and of mathematics on infinite numbers nearly 1000 years before Cantor.


In large companies, where your code might be worked on by numerous other people and it's not guaranteed that the next person to change it will have ever seen it before, statically typed languages are still prevalent, because they mean the stupidest errors are spotted as early as possible.
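The point above in miniature, as a toy Python sketch (mypy is assumed as the static checker; it is not part of the thread): a dynamic language only discovers a type mistake when the bad call actually runs, whereas a static checker would flag it before the program ever started.

```python
# A function annotated with the types it expects.
def total(prices: list[float]) -> float:
    return sum(prices)

print(total([1.5, 2.5]))        # fine: 4.0

try:
    total(["1.50", "2.50"])     # wrong type; mypy flags this statically
except TypeError:
    print("plain Python only notices at runtime")
```

In a codebase touched by many strangers, the difference between "flagged in CI before merge" and "blows up in production on a rare code path" is exactly the argument the comment is making.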


In the wider world most people are still using C, C++, Java and C#. And among the more interesting statically typed languages, the most used are Haskell, Scala, F# and OCaml. So no, I don't think static languages are on the wane.


A nice article, but obviously more than a little hardware-oriented. You could apply some of the same lessons to the software side, but the translation is imperfect.


I wouldn't say the advice is more hardware-oriented (although the examples certainly are). Most of the advice can be translated pretty well to software. However, it's definitely geared towards organizations where production and development do have a clean separation; those obviously tend to be larger and older (> 2yrs).


I would agree it's not so much strictly hardware as strictly Big Co.

But I don't think a clean separation between development and production necessarily leads to a slow development cycle. I've worked for both federally regulated biotech startups and huge biotech companies, and you can definitely be agile and fast even in that space, as long as you're small.


I've worked on numerous projects that defy several of these.


If we were only allowed to post stuff that was universally true with no exceptions, HN would just be an "iPhone news" outlet. ;-)


I'm very confused as to what exactly happened in the first story (what was or wasn't changed vs. what was or wasn't supposed to be changed).




