If we invented the car, but there was no reverse and no left turn, we could say that the problems were due to poor drivers and poor planning, but the problem would clearly be that the car is not sufficiently wieldy.
You can say that software sucks because of poor programmers and poor project management, but the truth is that the code is not sufficiently wieldy. There's no way to manage the code. I can't query all places in the code where the UI interacts with a database column. Accounting systems can give you a variety of reports based on abstractions at a variety of layers, slicing the information in different ways (horizontally, vertically, etc). Software systems? Go fish.
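To make the "query the code" complaint concrete: the best generic answer today is usually a text search. A minimal sketch (the column name and directory are hypothetical) that finds strings, not actual UI-to-database relationships, which is exactly the tooling gap being described:

    import pathlib
    import re

    COLUMN = "customer_email"      # hypothetical database column
    UI_DIR = pathlib.Path("ui")    # hypothetical front-end source tree

    # Crude text search: roughly the state of the art for "where does the UI
    # touch this column?" unless you build framework-specific static analysis.
    pattern = re.compile(rf"\b{re.escape(COLUMN)}\b")
    for path in UI_DIR.rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.strip()}")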
Software sucks because at some point we got too excited about what we were doing and stopped (sufficiently) caring about how we were doing it.
There is one thing that makes software suck, and that's how far apart its developers and users are. I dream of going to SAP's offices in Germany and seeing how they book their own vacation and submit their own expenses. I can't believe they do it with their own product; if they did, it would be slick...
the users are not the target group for enterprise software.
expenses is a great example, it needs to accommodate so many different regulations, compliance issues, legacy compatibility needs within an implementation, that it can't be more than a generic shell in vanilla SAP. so you need consultants to set it up in the first place. what do they do? ask the client's project team to tell them what they want... and this is why all enterprise software implementations produce shit outcomes. non-software people having a direct say on UI/UX.
There certainly can be a disconnect which impacts a developer's ability to see what a user wants, but even if there is no disconnect, bad software can still result.
Consider internal software - there is literally no disconnect whatsoever, but very often the software is terrible. This is relevant because it raises the issue that the business simply may not care about quality, and thus putting in the work required to make the thing good might simply not be possible.
There are many, many factors involved in determining whether software sucks or not, and key among them is the understanding, or lack thereof, of what the users want and need, agreed. But keep in mind, there is a big difference between seeing what the users need superficially vs. understanding clearly and deeply what the users need and implementing it well.
At the risk of sounding like a broken record: Paper prototyping. Even if it's for yourself, it never feels the same to interact with a product as you think it will. This holds even more true when it's for other people.
For internal projects, investment is obviously a huge problem, and once you've built the wrong thing you're unlikely to get more money to do it right. But for a given price point, there's a huge range of quality you can obtain. Paper prototyping helps you maximize that at very low expense.
absolutely. I worked for a few years at a place that used SAP for everything HR related. I still recall the pain of trying to record a sick day or book a day off (for extra points, try a half day!)
we also used a ticket tracking package called, I think, CAMS. possibly a CA product. oh, and we used Harvest for source control. I still recall the joy in learning CAMS would be replaced, and the horror that followed as we came to grips with ManageNow. Never dreamt I'd wish for CAMS to return, but I did.
So, yes, I agree. I can't believe the companies that peddle this crap actually use it.
We were discussing this recently whilst trying to come up with a guideline about when the computer should "help" and when it should just leave you in peace. We looked at lots of software we hated and loved, and tried to pinpoint the why. One of the things that came up most about hated software was that it had a high frustration level.
Who's ever wished for a giant red "PLEASE STOP HELPING ME" button on their computer? Everyone, right? One of the causes of frustration we looked at was when the software helps you, but gets it wrong. Frustration is increased when the effort to fix the software's "help" is anything other than minor, and increased again when the computer repeats its efforts at helping. Throw in no obvious way to get it to stop and you're in for some blood-boiling times.
The guideline we came up with was to not concentrate so much on how cool it is when things go well, but think about what happens when the software guesses user intent incorrectly. Consider:
* How often is the software likely to get it wrong?
* What level of irritation is caused by getting it wrong?
* How much effort is required to undo the computer's help?
and weigh it up against how much effort you're saving the user when you get it right.
Thinking about this has caused us not to add a particularly cool feature we had planned, because even though (we thought) it was very cool, there was about a 30% chance of it being wrong, and the payoff wasn't worth it.
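For what it's worth, the weighing can be made explicit with a toy expected-value check. A rough sketch, where only the ~30% error rate comes from the comment above and the effort numbers are invented:

    # Units: seconds of user effort per interaction. All numbers except the
    # ~30% error rate are made up for illustration.
    p_wrong = 0.30            # how often the software guesses wrong
    saving_when_right = 5.0   # effort saved when the guess is right
    cost_when_wrong = 30.0    # effort to notice and undo the "help"

    expected_benefit = (1 - p_wrong) * saving_when_right - p_wrong * cost_when_wrong
    print(f"Expected benefit per interaction: {expected_benefit:+.1f} s")
    # -5.5 s with these numbers: a net loss, even though the feature looks
    # cool when it works.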
Software sucking is a direct outcome of future users' involvement in the design process.
as in: if you're writing software for a client and the client drives the specs directly, the outcome will suck balls.
evidence: all enterprise software installations ever done. from ERP to CRM, the vanilla software packages might even have good UX/UI, but once implementation with all its "critical" customizations is done, you'll end up with a turd.
I tend to think that splitting development into API and front end solves some of this.
The main problem with "enterprise" software is that it's done cheaply, and violates levels of abstraction. Thus why it frequently requires specific versions of antiquated technology ("Our CRM requires IE6", "We can't upgrade past Windows 2000").
If it was designed properly, you could easily rip out both parts and iteratively redesign them to meet business and technical goals.
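As a sketch of what "designed properly" could mean here: keep the back end as a plain JSON-over-HTTP API so the front end can be ripped out and redesigned independently. Flask and the /api/vacations route are illustrative choices, not anything from an actual product.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Stand-in for the real business layer / database.
    VACATIONS = [{"id": 1, "employee": "jdoe", "days": 3}]

    @app.get("/api/vacations")
    def list_vacations():
        # Any front end (web, mobile, a future rewrite) consumes the same JSON.
        return jsonify(VACATIONS)

    @app.post("/api/vacations")
    def book_vacation():
        booking = request.get_json(force=True)
        booking["id"] = len(VACATIONS) + 1
        VACATIONS.append(booking)
        return jsonify(booking), 201

    if __name__ == "__main__":
        app.run(port=5000)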
Seconded. The university I'm attending uses PeopleSoft (acquired by Oracle) as their ERP system, and it's horrendous. Not a week goes by where I don't hear students and faculty members openly complain about it.
I've got a live PeopleSoft installation running on a server in my apartment that I'm outfitting with code to do exactly what you mentioned - split out the back-end from the front-end. So far it's going brilliantly - for some sadistic reason, I enjoy trying to reduce the complexities of these applications.
You know, I wanted to do something very similar with my university's installation of PeopleSoft - but my intent was focused more on improving the UI and frontend than the backend. But I have to ask: how did you get a copy of PeopleSoft?
Actually, what I'm working on involves both front-end and back-end. I've got a UI that trades data back and forth with a web service endpoint called Integration Broker within PeopleSoft. I'm focused on enrollment right now, and currently I've got a system that allows me to enroll in classes using the new UI on a live PeopleSoft install - all without touching/modifying the business logic in the delivered vanilla PeopleSoft implementation.
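A hypothetical sketch of that kind of UI-to-web-service handoff (not the commenter's actual code; the gateway URL, operation name, and payload shape are invented, since real Integration Broker service operations are defined per installation):

    import requests

    GATEWAY = "https://ps.example.edu/PSIGW"   # hypothetical gateway host
    OPERATION = "ENROLLMENT_REQUEST"           # hypothetical service operation

    def enroll(emplid: str, class_nbr: int, term: str) -> dict:
        """Send an enrollment request to the back end; no business logic here."""
        payload = {"emplid": emplid, "class_nbr": class_nbr, "term": term}
        resp = requests.post(f"{GATEWAY}/{OPERATION}", json=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()

    # The delivered business logic stays untouched; the new UI is just a client.
    print(enroll("1234567", 10432, "2024F"))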
Re. the PeopleSoft copy - Oracle provides all of their software (and master license codes) for download for evaluation purposes through a portal called eDelivery. I had to read a few hundred pages of documentation, but after a month I was able to get all the components to talk together. I'm trying to convince Oracle to give me a non-support license so I can cover myself legally, but I'm getting the silent treatment since it's just me and I don't have the budget of a CTO lol.
A little while ago I overhead a conversation between some friends of mine, one of whom is an interior designer who does residential remodeling:
a: "We always tear out everything, down to the rafters."
b: "That sounds expensive."
a: "It can be, especially if you find out the wiring or something isn't up to code."
b: "What if someone wants only---"
a: "No, there's no 'only'."
In my experience, if programmers always had a complete design specification before starting, and if we always "tore up" the old mess (down to something unquestionably stable) before making any improvements, our software would be much, much better. It'd probably also cost more.
The biggest cause of this that I see is that the people paying for it are not the people using it. In the consumer market we are starting to see that good design does sell (I think it was Gropius who predicted that the cost of design is amortized). But with enterprise software, the people paying for it are rarely the ones using it, so they're perfectly willing to buy a lousy program if it means saving cash in the short term.
> In my experience, if programmers always had a complete design specification before starting, and if we always "tore up" the old mess (down to something unquestionably stable) before making any improvements, our software would be much, much better. It'd probably also cost more.
And by the time the software was done, it'd be useless. A house doesn't get obsolete in ten years, but software - which is much more complex than a house - can be obsolete in ten months.
Not to mention that I'd rather not be reinventing the wheel every six months.
I often hear that, but it is not true in my experience. Maybe your run-of-the-mill flavour-of-the-month hipster web 2.0 app has a limited shelf life, but most enterprise apps stay deployed for years.
But is it because the needs haven't changed a bit, or because it's good enough and it'd cost too much to replace it? Because if software was always torn out, that would only accentuate the latter problem.
“No one makes bad software on purpose” is not strictly true: many esoteric programming languages are designed to be “bad” in interesting ways. That extends to any subversion of highly-regarded tropes in games and other media, even the non-interactive kind. It’s also not true that “in order for people to say ‘this sucks’ they have to care enough about the thing you’ve made to spend time with it and recognize how bad it is”. People routinely criticise programming languages and other software they’ve never used before, even if said software doesn’t actually suck much at all.
Software tends to suck when either you build the wrong thing or you bit off more than you could chew and the implementation of a good thing ends in disaster. Or both.
Generally the "lean startup" credo holds true here: Build an MVP fast, get feedback and iterate.
However, there is one technique which doesn't get enough airtime in my view and that is paper prototyping. It gives you a surprising amount of quality feedback with a fraction of the effort of coding something up (and no bugs!), and allows you to iterate there and then.
Very amusing that you think that as that approach is what got us into this mess. MS (not to single anyone out in particular) has always been notorious for rushing a buggy 1.0 to market then iterating it. That's why you need a quad-core 3Ghz PC with 8G RAM to write a single page of A4 now.
I can't agree that a buggy and slow version of a product is an MVP.
There are a lot of things that you can cut from a product to make it minimal. I don't believe quality is one of them.
However, I also believe that user feedback cannot be the only factor to build something truly great.
Point taken, though I meant minimal feature set, not buggy. Also, 2000 and later Office and Windows are hardly the worst offenders in terms of "software that sucks"
One reason for software sucking that I don't think gets mentioned enough: developers don't really use the product enough to optimize it (they don't have time). Especially for complex applications. If there isn't an effective feedback loop from users about pain points, and an institutional drive to address them, major problems in software persist version after version.
Software sucks because too often developers fail to learn what the software is supposed to do.
Developers trivialize things and interpret them in far "superior" ways that lead to huge gaps of "they never told us".
Development has to move from just coding, to learning to first understand what details are being managed, and how those details interact in a system in all the stages the details/data exist.
If we believe every business is becoming a software business, then the reverse is also true: all software developers must understand the business more and continually develop the skills to be the bridge between business goals and technology.
How to do that?
Shut the hell up and learn.
Ask (and learn) why things are done a certain way. Uncover any competitive advantages the business has from doing things a certain way before getting on the high horse and deciding to improve the world because it's so obvious. Developers may be surrounded by non-techs, but they certainly might be surprised to see the organization itself does have processes and competitive advantages that have to be maintained for the business to survive.
The classic SAP-esque kiss of death, where you do it the SAP way, wipe out the competitive advantage (seen it first hand), and then spend tons of money customizing and automating the ERP to get back to what you had before (and more), seems silly, but I can't say the 70% of failed software projects fare much better.
So, before we think we understand something, shut up.
Before we think we know better, shut up.
Before we think we can simplify things, shut up.
Before we think we can make things more efficient, shut up.
Shut up, listen to the people using the current systems and processes and learn what is working for them, or not, first.
Shut up and learn. Don't finish people's sentences. Don't tune out. Don't think things are beneath you. Don't think you've seen it before, or built it before.
At each step ask them if you understand their process correctly before going off to formulate a faster way of doing things for their confirmation.
Software has the power to uplift the lives of people and help them get more done with less effort. If you don't value this, don't make the rest of us look bad for your laziness and inability to continually develop your own skills.
Once you have learnt why the business does what it does, the way it does, it's fair to ask the question "How should it be?", and see what differs. That is the beginning of what you should start thinking about.
Having integrated custom systems and built new ones to replace existing ones since '99, this is the single worst thing I see. Too many developers simply don't have a healthy paranoia about their own understanding. Knowing a little bit about something can make developers just as dangerous as the "business" folks they judge for doing the same. It all comes out in the wash with the 70% software failure rate.
Failing to understand the data of the business (how it interacts, how it exists in different stages, how it needs to be input/output, and why), amongst other things, is the leading contributor to software failure.
It's as much "the customer didn't know what they wanted" as "developers failed to understand their job is to go learn the business first and then design something to build".
To be clear, this can mean working in people's positions first hand to see what they're going through / facing that they can't explain to you.
It can mean seeing what state a business is in, infancy (no systems or processes), adolescence (some systems or processes), or maturity (a mature system and process, even if it's all manual).
Do those three scenarios equal one approach to all of them? Hell no.
There are, though, a few common things to keep in mind:
- Ask customers to teach you the business as they know it first. Pretend you're the next apprentice, or the owner's right hand man. Ask to be taught not just how to do everything, but why it's done that way.
- Your goal is to get more done with less effort. The software you design and build should not simply make less work for some, and more for others. It should free people from BEING the tools and systems, to USING the tools and systems. The people of an organization should do what they know best, instead of being computers, they should be interacting with each other, and customers.
I could go on a long time about this. But it's Sunday and I hope the positive wishes come through.
So in summary: When you have a great idea, translate your intended statement of "we can improve things by doing X instead of Y" to the question "why are you doing Y?".
And again: Paper prototyping. Sticking an interactive bit of paper in front of a user is an effective and inexpensive way to get him to explain the holes in your great idea, and you can adapt it there and then and maybe come up with something that fixes the problem that you were hoping to solve, without creating a ton of other issues.
A question I like asking is, "Teach me why this needs to be improved".
So when something like "this takes us 8 hours a week, every week" comes up, you can say: let me work on this for 40 hours and we'll save 416 man-hours a year.
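The arithmetic, as a trivial sketch (the numbers are the ones from the comment above):

    hours_per_week = 8
    weeks_per_year = 52
    build_cost_hours = 40

    annual_savings = hours_per_week * weeks_per_year     # 416 man-hours/year
    payback_weeks = build_cost_hours / hours_per_week    # break even in 5 weeks

    print(f"Annual savings: {annual_savings} hours")
    print(f"Break-even after: {payback_weeks:.0f} weeks of use")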
I'm not sure whether or not it counts as paper prototyping. I use a lot of different tools, the most important of which are whatever helps me whittle an idea down to its essence. For me that magic happens on whiteboards, and on paper, first.
I've recently started doing it using a stylus on an iPad and a Galaxy Note with increasing success. I almost prefer the iPad or Galaxy Note because I can keep erasing and refining to get the perfect layout/design.
Surely the banner of user experience, and its component disciplines, try to directly tackle this problem? There is a disconnection between the motivation of developers and the needs of users, but areas like IxD, IA, usability research, user experience planning and HCI all help to bridge the gap.
The human race excels at engineering shovels and hammers, knives and other primitive tools; for anything more complex than that our capabilities are still pretty much infantile.
Give it a few thousand years. If we can manage to survive it, the miserable suckiness of our software will taper off.
Software is too complicated. Now that we have a generation growing up with computers software engineers are happily brain-damaging them into expecting hugely complex software that doesn't really work. The browser is the perfect example. Why can the user manually reload a page? Shouldn't it all update automatically? The answer is: it's that way because it was convenient for lazy designers a long time ago and nobody bothered fixing it. And it will be rationalised as a good decision now that people have been damaged into thinking it's a good thing. What's there has blinded people to what is possible. Don't let people fool you into thinking that there are good reasons why we ended up with, say, C instead of Forth. It's all rubbish. We've become a field of charlatans.
This isn't rationalizing, but... no, the absence of automatic refreshing isn't 'laziness'. They're called browsers because they were meant to let people browse, not consume like a TV watcher.
If I was reading a book, and the author made an update, I wouldn't necessarily want the new update from the author. Notify me there's an update? Sure! Automatically replace the copy I'm reading with the update, simultaneously removing my ability to ever get back to the version I was reading? No thanks.
"What's there has blinded people to what is possible." As someone who has a good idea of what's possible, I tend to agree, and think of browser 'cookie' support as a good example. 'cookies' were lambasted as the embodiment of evil in the mid-late 90's. The hysteria around them was insane. Why? We had little control over the visibility of what a cookie was, what it said, and how to manage them. Even today, in 2012, cookies are little understood, and to 'manager' them requires multiple clicks in to layers of menus often marked 'advanced' or 'privacy' (both are silly labels for cookie data, imo). We could have given people easier access to view/manage/understand cookie data, and probably avoided national legislation like we see in some countries now, but that would have meant breaking the status quo. Far easier to focus on browser market share and "standards compliance" as the primary measurements of browser utility and acceptance.
> Automatically replace the copy I'm reading with the update, simultaneously removing my ability to ever get back to the version I was reading?
This is an asinine example. Why would the author do this except to correct typos or update the kind of information you want updated? Who are these nefarious authors trying to destroy your reading experience? They don't exist, and this kind of lazy thinking is why it took until very recently for browsers to even have the possibility of auto-update without polling (hello 1970).
Your first sentence is in conflict with your example. The UI complication of the refresh button is minuscule in comparison to the technical complexity of implementing universally canonical web page views. But even assuming you could make that work well enough it's impossible to make software that anticipates all needs correctly. A good example was brought up on 5by5 a few weeks ago: should the iPhone alarm be silenced by silent mode. People will expect it to work one way or another and they will be burned badly when it doesn't do what they expect (think setting an alarm to catch an early flight vs being in an opera).
It's easy to sit on a mountain top and spout off declarations about how things should be, but in no place is the old adage "the devil is in the details" more apropos than in software development. Even the UI designer runs into these fundamental conflicts, but the low-level programmer is inundated with them. And worse yet, software stacks are so deep and CPUs so powerful that the scope of what software actually does is increasingly difficult to mentally model. The only way out of this is to ruthlessly narrow the scope and have an organization-wide emphasis on UX. This works for Apple, but consider that in order to do this they basically punt on all the hairiest business requirements because it just doesn't serve their vision. Nevertheless those requirements are real and unignorable which is why SAP et al thrive.
The reason most software is bad is not for lack of effort, it's just damn hard to do right. It's good to ask big questions, but then go try to solve some of them rather than admonish those of us who pour our lives into this.
Stop making stupid excuses. The fact that some people might get confused by some detail (like your iPhone alarm example) does not change the fact that some other things are idiotic (like the refresh button). And your take on the refresh problem shows why this whole thing is a complete mess: I'm not suggesting "universal" auto-refresh. This is why the browser is such a failure technologically: people want to try to solve all the problems in the platform instead of just providing simple, powerful primitives. As you recognise we need to restrict scope. The only software that actually works is simple and small. And I'm not just "admonishing others". I dropped out of academia and industry and moved into a cheaper neighbourhood to be independent so I can write the simplest software possible. To me that's better than empty talk about "UX" and other modernistic double-talk about how hard we've got it. And having an independent perspective, I can see that it's rationalisations all the way down. We will have botnets etc. forever until people tackle the problems at a basic level by simple designs that eliminate the problems. But people can't even conceive of putting ethics before their own financial well-being because as an industry we have no ethics. Just pride in spreading our bullshit around as much as possible.
If software engineers built bridges we'd make it drive up and down the river, put in a few houses to be "efficient". And then when the thing breaks down we'd complain that it's "too hard". No shit it's too hard: because we promise more than we can deliver, hiding the risks and problems from the user. Firefox has as a feature in every version the ability for criminals to install rogue programs on your computer. And yet you won't see this in their marketing material. This is a solved problem. We know how to eliminate memory-safety problems, but Mozilla would rather take N years to do their rust rewrite (if it even happens) because it's in their self interest. If we had any ethics as an industry Firefox would come with a warning that it makes it possible to install rogue programs on your computer. But it won't.
You're off the deep end man. Sure, things could be immeasurably better if you could get everyone to swim in the same direction to solve the big problems. That's not how things work though, we're all in the same boat you are, trying to do the best we can in an imperfect world. This is not a failure of the profession, it's a universal human failing, maybe even more universal than that.
I only seem off the deep end because the industry has brainwashed itself. The computer no longer exclusively serves humans. Rather, humans now look to the computer to know what to do. Talk about "imperfect worlds" is the same thing irresponsible scientists say about "imperfect information". People think spreading harmful theories is fine because they have "imperfect information". It is the same with software. People have lost touch with the possibility of doing nothing instead of engaging in harmful action. If the economy was in a downturn would you see that as an excuse to turn to the tobacco business? The basic misconception here is that the production of more software overrides problems such as increasingly irreversible dependency on complex systems. I keep hearing the same thing I heard when I was in academia: admissions that people were creating problems along with endless excuses and expressions of career-motivated cowardice. Anyone who criticises the bank bailouts as moral hazard and who is also working to build complex systems today is a hypocrite. Because it is the same heroin addict thinking: that we can fix an addiction by increasing the dosage.
Refresh does have a bit of an "implementation model of yesteryear" feel to it. On the other hand I've never observed it to be a massive pain point for users and removing it would need to be done very carefully so as not to create more problems than it solves.
A warning from experience if you are thinking of implementing Alan Cooper's theories on the topic of legacy UI: Be very careful when removing conventional things because they seem archaic and make sure that they respond to a genuine pain point and your dev team has the chops to fully implement the solution you propose. Otherwise you're going to end up with a half-arsed unconventional UI which will just confuse everyone.
I might agree with this statement if the web had produced anything like a "conventional UI" that is consistent. Every website is completely different. Just look at Facebook. This is a horribly confusing website and most of the features are the kind of self-justifying nonsense I'm talking about. Nobody wanted Facebook until Facebook came along and convinced people to turn photo/comment sharing into a sterile game.
Err, what are you supposed to do when you edit a file on your webserver and want to see how it looks? HTTP is not NFS!
But you are mostly right, software has far, far too many pointless layers of abstraction now, requiring vast resources just to do trivial tasks. There's nothing 99% of people use a wordprocessor or a spreadsheet for that you couldn't do on an 8-bit micro in the 80s. Games these days are just not fun, whereas the 8-bit days were a golden era of creativity. We need to take it back to the old school.
See guidelines 97 and 98 from Jakob Nielsen and Marie Tahir's book Homepage Usability: 50 Websites Deconstructed [1] for the primary reason why there has been little interest in removing the "refresh" button from browsers.
Technical limitations don't really exist (and where they do exist, they'd be fairly easy to solve). Server-sent events[2] and WebSocket[3] are already implemented in the latest versions of popular browsers. Modules or implementations within popular HTTP servers already exist for doing HTTP push (they tend to use older AJAX-like techniques though).
If usability was no concern (or very carefully handled) it'd be fairly easy to write your own nginx module or "WebSocket server"[4] that uses inotify to check for file system changes. For each change that impacts an open WebSocket connection, a "refresh this page" notification can be sent to the browser (which then uses JavaScript to force a page refresh). There is a potential for smarter refresh mechanisms in browsers that maintain the current scroll state, field values, etc but you'd still be frustrating the user with severe usability problems.
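A minimal sketch of that push-a-refresh idea, using the third-party watchdog and websockets packages (Python 3.8+, websockets >= 10) rather than a custom nginx module; the watched path, port, and message shape are arbitrary. The page-side counterpart would be a few lines of JavaScript that call location.reload() when the message arrives.

    import asyncio
    import json

    import websockets
    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    CLIENTS = set()          # currently connected browser sockets
    WATCHED_DIR = "./site"   # hypothetical document root

    async def broadcast(message):
        # Push the notification to every connected browser.
        for ws in list(CLIENTS):
            try:
                await ws.send(message)
            except websockets.ConnectionClosed:
                CLIENTS.discard(ws)

    class NotifyOnChange(FileSystemEventHandler):
        def __init__(self, loop):
            self.loop = loop

        def on_modified(self, event):
            # watchdog calls this from its own thread; hop back onto the loop.
            msg = json.dumps({"action": "refresh", "path": event.src_path})
            asyncio.run_coroutine_threadsafe(broadcast(msg), self.loop)

    async def handler(ws):
        CLIENTS.add(ws)
        try:
            await ws.wait_closed()   # we only push; nothing to read
        finally:
            CLIENTS.discard(ws)

    async def main():
        observer = Observer()
        observer.schedule(NotifyOnChange(asyncio.get_running_loop()),
                          WATCHED_DIR, recursive=True)
        observer.start()
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()   # run until killed

    asyncio.run(main())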
Yes, but none of these things existed in 1993. You can call it "lazy", I suppose, that TBL didn't implement all the features you take for granted 20 years later before releasing the first browser(!)
Interrupts are stateful on the server side; that creates problems in terms of scalability, both due to increased memory usage and by being less flexible (either you have each user "locked" to a single process, or you have to implement state sharing, which adds overhead).
It can also be extremely wasteful - if I leave a tab open for hours or days, you'll have to waste your resources and mine to keep pushing me stuff I won't see, while now I just hit refresh when I want to see new content.
In terms of usability, it's often jarring to watch content change when you're interacting with it - that's why even sites that implement real time notifications often have a link or button that you have to press to update the UI. In many cases, doing that completely negates the benefits of pushing.
Polling is a good default for the web; it fits most use cases (content that rarely changes) in a simple and economical way. WebSockets are useful for the exceptions.
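The economical version of polling is a conditional GET, so an unchanged page costs one small 304 response per poll. A rough sketch with the requests package; the URL and interval are arbitrary:

    import time
    import requests

    URL = "https://example.com/feed"   # hypothetical resource
    etag = None

    while True:
        headers = {"If-None-Match": etag} if etag else {}
        resp = requests.get(URL, headers=headers, timeout=10)
        if resp.status_code == 304:
            pass                       # nothing changed; nearly free
        elif resp.ok:
            etag = resp.headers.get("ETag")
            print("new content:", len(resp.content), "bytes")
        time.sleep(60)                 # poll once a minute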
It fits most cases.. except that web applications are _constantly_ polling for information. Which may be fine, but you can implement polling with interrupts. What part of that do you not understand? Are you aware that the OS is using an interrupt-driven system to poll the server? It's just hidden from you, so you have no choice. This is just dumb engineering. And to say that it's good for "most things" is just lazy thinking. "Most things" are that way precisely because of the crappy architecture. Rationalisations.
You haven't made any technical points. You just waffled nonsense about scalability. I see you're bowing out of the argument because you have no case. Once you have interrupts on a platform, you have polling. If you're stuck with polling you can't recover interrupts. Which is why we have websockets decades late. The truth is there is no tension between interrupts and polling. Interrupts are plainly superior, since you can opt out of them trivially. But the platform should not opt out ahead of the developer. This turns out to be inadequate, so we get polling implemented inside interrupts and then a separate mechanism for interrupts. And the interrupt-driven stuff has stupid reload buttons and so forth on the GUI. It's dumb engineering because it increases complexity and leads to bad results for the user. You have no case. Polling is dumb. And if you think software engineers ever needed "good reasons" to add a bunch of pointless complexity to things, then you simply have no clue on the history of software development.
I had not downvoted you, but I thought about it - because you are making ranty, ad hominem attacks ("lazy thinking", "dumb excuses"), and you appear to be ignoring or handwaving away all the actual points anyone makes. I have no interest in engaging you in any discussion for these reasons, and I expect that other people feel the same way: hence, downvotes and lack of replies. Feel free to do what you wish with this information.
"HTTP is not NFS" is a technical challenge that is potentially solvable. With single page apps the reload button is becoming increasingly irrelevant anyway though.
It's only pointless because we are still using the "Desktop" model. I'm not advocating one over the other, but as we see Mozilla trying Boot to Gecko and Google trying Chrome OS, we are seeing the early stages of the "webtopbrowserthing" being refined. We are in that transition state where we are trying to figure out which platform is where the money is.
It just goes to show that market forces do not care about technical merit and ultimate purpose in the least. NeWS never got traction, yet the advance of the web is inexorable. Why?
Well the web is supported everywhere because it was simple. The fact that it is available everywhere (which is actually pretty amazing and unique in the entire history of media if you stop and think about it) meant people kept pushing the boundaries. The limitations caused standards to be pushed forward. Wide use of the standards means no platform can break free from supporting it (Microsoft did their best for almost a decade!).
Obviously it looks like a Frankenstein monster from an engineering perspective, but it doesn't matter because that's where the investment is going. The good news is it will evolve and improve, and soon enough the grey beards who remember that there could have been a better way will all be dead; a lot like UNIX actually ;)
edit: I'd appreciate if bitter GUI developers would respond rather than misdirect their anger at the down vote button.
Well. You're getting downvoted for wishing death on people you've never met.
>The good news is it will evolve and improve, and soon enough the grey beards who remember that there could have been a better way will all be dead; a lot like UNIX actually ;)
That sounds like some sort of evil plot. For that matter my beard isn't anywhere close to being gray and I know that things could have been better. And if you didn't have +8000 karma I would think you were trolling.
>The good news is...soon enough the grey beards...will all be dead... ;)
You're the one who asked. If that's what I got out of it, it's probably what the people downvoting got out of it too.
For what it's worth maybe the intended message would have come through better if you were saying it in your voice with tone and inflection rather than the way I read it as words on the screen.
Dev time, you say. Well let me tell you a story. I am quite into 8-bit, I like buying micros from the 80s as junk and fixing them up. Recently, my old lady and I wanted a spreadsheet to track days off, vacations and so on. I could have done it on the quad-core, 8G Mac in Excel or online with Google Docs... But I actually did it in ViewSheet on a BBC Micro.
So dev time, well the last THIRTY YEARS of dev time haven't gotten me anything I didn't have already.
And today I walked a mile from my grandmother's to my house. The last HUNDRED YEARS of car manufacturing haven't gotten me anything I didn't have already.
A car could get you there faster, or dryer in the rain, or carrying more stuff. But there's not much Excel can do for the vast majority of users who just want a table with a few dozen records, that ViewSheet (or the granddaddy of them all VisiCalc) can't. Same with wordprocessing.
Except many of those users don't want just a table with a few dozen records - they want that plus a couple of small features. And the real problem is that they don't want the same couple of features. Hence, you end up with huge beasts to support each combination.
For example, what if you want to use your spreadsheet when you're not home? Never happened to you? Well, it happened to others.
Yes, as long as you can code; but in that case, so can raw assembly. Most people can't code, so that doesn't help them if they want something more. Excel, on the other hand, has features they can actually use.
Or to use my previous analogy, you're essentially saying that because you can build your own custom designed car from parts, pre-built cars are useless.
People should simply design something small that satisfies a niche. If it doesn't have enough features for some person, rewrite it. When software is small you can rewrite as much as you want. Software engineers labour in the delusion that their creations are so fantastic and amazing that it can only accomplished through huge complexity.
1. Software that is built to deadline will decay, even if the developers are good. That doesn't mean that an occasional deadline is the end, but if a long-term "deadline culture" sets in, get out. A long-standing deadline-oriented culture means you should be looking to jump to another project or company before the maintenance phase starts, because (1) the maintainers will be underappreciated (that's typical deadline culture) and (2) once the original architects get promoted it will be politically impossible to point out the real reason maintainers are unable to deliver in a timely fashion, and the slowest one to run away will get eaten by the bear. It means that technical debt will never be paid off; management will never budget time, and engineers will be too busy to clean up the code. Software engineers generally lack both the political pull and the broad-based knowledge to push back on deadlines and tease out which ones actually matter and which don't.
2. Entropy. Good software is less stable than bad software. Think of this as akin to the "broken windows" theory. Once software reaches a certain state of degradation, each change, although it might fix a bug or add a feature, will make the state of the software worse. There are creeping kinds of badness that can't be caught in incremental code reviews, such as adding 10 lines to a long for-loop or a "necessary" boolean parameter to a method that over time ends up with 15 boolean parameters. (A small caricature of this parameter creep is sketched just after point 5 below.) Often the managerial solution (once it's far past too late) is to put maintenance of this bad system on the calendar and make it someone's full-time job (instead of a shared responsibility), but no one wants that job and often that work is allocated to marginally skilled junior programmers with no clout. Then you get adverse selection: the more skilled people in that set will leave the project (or company) before they put in enough time to become decent at it.
3. "Pay as you go" maintenance, which includes periodic fixit spells, is always better than after-shit-breaks maintenance. That said, existing tools don't make it easy to revert quality degradation. IDEs really don't perform this function as commonly used. (I'm sure IDEs can be really powerful if well-learned, but people who are dedicated enough to master IDEs are also dedicated enough to jump wholesale to better languages for which IDEs are unnecessary and often poorly-supported. IDEs, in large part, exist to compensate for weak languages.) Code can rot in any language, but one advantage that languages like Scala and Python have is that, because they have REPLs, which are far more useful than any IDE, people can interact with the software at a code-level and fix things while the code is in that "moderately bad" state before it is too late. In 2012, I wouldn't start anything important in a REPL-less language. (C is not "REPL-less" because Unix is the C programming environment. C++ is, not on account of language intrinsics but because it has departed from the small-program Unix philosophy and is used for large-object programming which requires interactivity at a code level.) At least some programmers will have enough of a sense of ownership and citizenry to clean up failing code as they work with it, but if you deprive them of the REPL, the one tool that any good programmer will recognize as essential, they won't put in the work.
4. The transition from being a mediocre to a good and then to a great engineer is about moving away from being an "adder" (someone who increases codebase complexity and functionality, thereby having an additive business value-- ignoring long-term costs of complexity, which may or may not offset that additive value) to a "multiplier" (someone with broad-based positive effects that make the whole team more productive). Contemporary tools and programming environments (Java, C++, IDEs, IOC, dependency injection frameworks) are about helping more mediocre engineers become solid adders at the expense of the really great engineers, whose creativity is constrained by less powerful languages and tools. One of the goals behind Microsoft's professional certifications, the design of VB (and later, the hijacking of Java), and the attempted ghettoization of the command-line (which good engineers like) was to make it possible for huge teams of "commodity" programmers to be productive as adders, with the hope that "someone" would have the patience to staple together the zillion classes they cranked out. From an MBA perspective, this is a win, because 2-4 times more people are eligible to be adders, but it also holds people back from becoming multipliers. The long-term problem is that a team without any multipliers will accumulate complexity and the emergent design (because you want a solid engineer doing your design work, and you can't get them in commodity-programmer environments, "design" coming out of a commodity shop will be ad hoc) will be disastrous.
5. With a few exceptions, the real fuckups in software don't seem to be blameable on a single person. They usually emerge either from jobs no one does (because the people who care about them being done aren't in power) or that too many people do (once code has been passed over by too many hands, it turns to shit).
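To make point 2's "boolean parameter creep" concrete, a caricature (not from any real codebase); each flag was a small, reasonable change at the time, and removing any of them is never budgeted:

    # Year one:
    def export_report(data, as_pdf=False):
        ...

    # A few releases later, same function, one flag per deadline:
    def export_report(data, as_pdf=False, landscape=False, include_totals=True,
                      legacy_dates=False, skip_empty_rows=False, force_utf8=True,
                      audit_mode=False):
        ...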
I'd be curious to hear why you're so anti-IDE: do you have extensive experience working in Java with a good IDE like IntelliJ? I've heard this "IDE's are to help mediocre programmers be mediocre" argument before, but it's so alien to my experience with IntelliJ (which I've used every day for about 10 years now) that there's nearly no way to reconcile that argument with my personal experience. When I'm writing C or Javascript, I'm hesitant to, say, rename a method, because finding and fixing all references is a pain. In IntelliJ, it's trivial. The end result is that I refactor my Java code much more aggressively than code in a language where I don't have a (good) IDE. Similarly, while you use the REPL to explore libraries, I use the IDE: exploring source in a Java project is trivial because every class and method is instantly cross-linked, and my IDE knows where all the code is (including for my libraries). It's not exactly the same as a REPL (I can't call the method right then, of course, but I'll get to that in a minute), but it serves a different purpose, and your argument in that linked post about how IDEs aren't made to read code is, honestly, laughable: it's way, way easier to read and explore a Java code base within an IDE than it would be in a text editor and a REPL. Now, you can argue that Java itself is verbose enough that reading it is painful because of all the boilerplate: sure, that's a fair point, but it has nothing to do with an IDE. If you had a language with cleaner syntax and an IDE, that would be better than just a language with cleaner syntax and a REPL when it came to reading code.
In addition, it's worth pointing out that many Java programmers use unit tests as a poor-man's REPL; it's not the same, but it serves a similar purpose: I want to write some code, then execute it to make sure it does what I think. It's less dynamic, but it has the advantage of leaving you with regression tests, and it does let you explore and quickly iterate your code. If I'm not sure how to use a library, I'll do exactly what you'd do with a REPL: I'll write some code to use the library, then write a simple test that executes that code, and then I'll iterate my way to a correct solution. Again, the integration of the IDE with the unit tests makes running, debugging, and bouncing between the test and the code much easier than it would be in, say, vim/emacs and a terminal.
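As a sketch of that scratch-test-as-REPL workflow, shown in Python/pytest for brevity (the comment is about Java and JUnit): write down what you think an unfamiliar library does, run it, and iterate until the tests pass, leaving the tests behind as documentation.

    import json
    import math

    import pytest

    def test_json_roundtrip_keeps_key_order():
        # Hypothesis being explored: does a dump/load round trip keep key order?
        original = {"b": 1, "a": 2}
        restored = json.loads(json.dumps(original))
        assert list(restored) == ["b", "a"]

    def test_json_rejects_nan_when_asked():
        # Second hypothesis: allow_nan=False should raise on NaN.
        with pytest.raises(ValueError):
            json.dumps(math.nan, allow_nan=False)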
My point is that good programmers in any language find a way to do the sort of iterative evolution and exploration of code that you act like is only possible with a REPL, allowing them to fix errors early.
Many of your other points here are good, I just really feel like your "the REPL is essential" argument is pretty misguided.
Regarding IDEs: I completely agree with you. And I say this as someone who's used Emacs and REPL for 10 years.
I find that I am generally faster in development (at least with new libraries) in Java, than I used to be in Ruby and Python. This is all thanks to the "discovery" ability of IDEs. (I admittedly never tried a Python or Ruby IDE.)
And I have the same feeling regarding refactoring. It is not merely limited to renaming a method. I find myself very often making major structural changes to my code. Moving packages around, introducing interfaces, changing type signatures. In this regard working with an IDE makes me feel like a "software architect", I get a big-picture of the project in a much faster and better way than I used to with purely a text editor.
I also feel I waste no time on boiler-plate code (which admittedly Java has a lot of). In Netbeans (I am sure it's the same in Eclipse and IDEA) the code generation abilities are terrific. For instance, I can just write "class C implements Interface", press Alt+Enter+Enter and see all interface methods written out and ready for me to fill in the implementation.
Regarding the REPL: I've found that with strict typing, I just end up knocking out the code that _I think_ should work, and then I test if it works. I can sometimes type for 200-300 lines without running the code, then test it and see that it actually works. Of course, sometimes it fails too: luckily Java debugging is easy and incredibly capable.
However, if you really want a Java REPL, you can have something a little bit similar with BeanShell (you can even embed it into NetBeans).
How do you measure your productivity? One would naturally tend to count the number of lines changed, added, maybe removed. But they are a poor proxy. What really matters is the value brought to the customer, and the cost of this value to your company. These are obviously very hard to measure at the programmer level.
I think the important question here is, does IntelliJ help you simplify existing code? Do you routinely simplify existing code? Does your team routinely simplify code? Or even better, does IntelliJ help you write simpler code in the first place? Meaning, is code written with IntelliJ routinely simpler than code written with Eclipse or Emacs?
My experience with IDEs overall is that they are of tremendous help for navigating complexity. On the other hand, they are of very little help for reducing complexity. (You cite the method renaming as an example, but compiler errors keep track of broken references just fine)
Regarding the REPL, it's not the REPL itself which is essential. It's the tight feedback loop. There are other ways to provide such a loop. Some of them are much, much better than REPLs: http://vimeo.com/36579366
> I'd be curious to hear why you're so anti-IDE: do you have extensive experience working in Java with a good IDE like IntelliJ?
My objection has more to do with IDE-dependence. Also, Java is a language in which it's way too painful not to use an IDE-- I certainly use an IDE when I'm in Java-- and IDEs tend not to play nice with outside-of-IDE actors such as version control, so there tends to be a Mafia (once you're in, you can't get out) nature to them.
I don't dislike IDEs themselves. I dislike the fact that people use them to make insufficient languages less bad in lieu of using a better language, and worse yet, that business types end up with the impression that other languages are less mature/usable because they lack IDE support. When you have an expressive language, you don't need an IDE (or, at least, I've never found myself missing one).
That said, it may be that an IDE is pleasant to use with a better language, and makes it even better. A lot of what IDEs offer is useful. That said, I'd rather have a good language like Ocaml and no IDE than Java and the best IDE on the market. Java development is just not very "flow"-ful in my experience, and the productivity benefits conferred by an IDE are small compared to the astronomical bump conferred by an expressive language.
> It's not exactly the same as a REPL (I can't call the method right then, of course, but I'll get to that in a minute), but it serves a different purpose, and your argument in that linked post about how IDEs aren't made to read code is, honestly, laughable: it's way, way easier to read and explore a Java code base within an IDE than it would be in a text editor and a REPL.
Ok, I see where you're coming from. I agree that reading Java code pretty much requires an IDE.
> Now, you can argue that Java itself is verbose enough that reading it is painful because of all the boilerplate: sure, that's a fair point, but it has nothing to do with an IDE.
Not directly, but I think there's a cultural problem that might be enabled by the IDE. Just as it's said that 4-wheel drive helps a person get stuck in an even more inaccessible place, I feel like IDEs enable people to program who shouldn't be programming, inappropriate languages to just kinda work, and bad practices not to totally fall flat on people who use them. I can't prove this, but it seems like this is the case, taking an industry-wide perspective.
> If you had a language with cleaner syntax and an IDE, that would be better than just a language with cleaner syntax and a REPL when it came to reading code.
I'd like to try this experiment. You could easily be right. My experience with IDEs is in weak languages and they seem not to be used in strong languages (Scala being an exception, although I haven't tried its IDE support).
> In addition, it's worth pointing out that many Java programmers use unit tests as a poor-man's REPL; it's not the same, but it serves a similar purpose
Interactivity and unit tests serve different purposes. I don't think either is an acceptable substitute for the other (and yes, I've seen people attempt both substitutions).
For me, interactivity is the only thing that keeps me in a state of flow (instead of boredom) when I have to read code, especially because there's at least one library for which I really want a REPL so I can see what the calls do.
> I just really feel like your "the REPL is essential" argument is pretty misguided.
I guess I should be saying "interactivity is essential". C technically doesn't have a REPL, but it succeeded as a language because the C/Unix philosophy encouraged small programs that could be used and explored at the command line, which keeps the C environment engaging and tractable as long as people aren't writing huge programs. I haven't seen interactivity superior to what the REPL provides, but I haven't seen enough to rule it out either.
I did not appreciate how useful IronPython is when developing with C# at work until I read this and thought about what the alternative would be like. To test our wafer handling robot, I can just instantiate its individual axes at the REPL and move them around. Or I can create the whole device object and move them around in a coordinated manner. Any level of the system can be tested or cycled for reliability without having to write test fixtures in C#.
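A hypothetical flavour of that kind of session (the assembly and class names are invented, not the commenter's actual C# API; clr.AddReference is the standard IronPython way to load a .NET assembly):

    import clr
    clr.AddReference("RobotControl")        # hypothetical C# assembly
    from RobotControl import WaferHandler   # hypothetical device class

    robot = WaferHandler("COM3")            # build the whole device object...
    robot.Theta.MoveTo(90.0)                # ...or poke at a single axis
    robot.Z.Home()
    robot.PickWafer(2, 14)                  # coordinated move: cassette 2, slot 14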
I'd like to see some empirical evidence for some of your arguments. Not that I don't _want_ to believe you - I do, but my Occam's razor is starting to twitch a little.