My first real-world job in the industry was in a large-ish shop that worked as follows:
Create a full-detail schematic of the system in version-controlled UML.
At some point, "deploy" the UML by printing it into a 4 cm-thick binder of paper, then distribute these binders to the head architects.
Iterate on the UML until the architects are happy. (The architects spent many years trying to auto-generate code from the UML diagrams and have the results "fleshed out" by lowest-bidder consultants, though this never really worked. Their stated goal was to no longer have to write any code in house, but rather nothing but UML.)
Begin implementing the system in house with auto-generated code from the binder-of-UML as a baseline, after the lowest-bidder consultants had failed.
Quickly get into big fights between the coders-on-the-ground and management when it was found that the UML diagrams contained major architectural flaws and the UML-phase would not, actually, account for 80% of the project's duration and budget. Needless to say, more than half of the projects failed entirely.
This experience nearly made me leave the industry, before I discovered that there was plenty of software being written in a saner way. This was more than a decade ago, but to this day, just seeing UML diagrams turns my stomach.
"Their stated goal was to no longer have to write any code in house, but rather nothing but UML"
This goal was promoted heavily in the late '90s and early '00s. Some tools could supposedly "round-trip" between UML and code, but I never saw that actually work. As originally conceived, UML was supposed to foster communication and there are times when a UML diagram is the perfect means to communicate the intent. I tend to use sequence diagrams and ER diagrams the most (yes - I know ER diagrams aren't UML but they're roughly analogous to class diagrams and what we're usually trying to flesh out is the relationship between models).
As an aside, there were/are graphical programming languages. I wrote a couple of Windows and Mac utilities in ProGraph [1] during the '90s. The ProGraph environment lives on as Marten [2] which I recently played with on Linux. It's not nearly as clean as I remember it being.
Sequence diagrams are quite useful, static diagrams, not so much. I only find myself drawing them if I'm trying to do a necropsy to figure out hosed up object relationships.
The other useful diagram tfa doesn't cover is the finite state machine diagram. Handy, plus, they look cool.
As an aside, I find it's much, much easier to do with pen & paper than in any tool I've ever used (looking at you, OmniGraffle & Visio!)
> UML was supposed to foster communication
> could supposedly "round-trip" between UML and code, but I never saw that actually work
This is the root of the problem with UML-as-spec: it's used as a tool for one-way communication, for open-loop management that doesn't want to have feedback or dialogue stages in development.
I think this is the real problem with UML. It first kinda seems to work when a project is managed waterfall-style, but when actual development is done iteratively (when is it not?) it just adds confusion or work, either by providing an out-of-date spec or by introducing more stuff to keep in sync.
UML is a way to talk about things. Just having a standard-ish way to draw on whiteboards is pretty helpful, even if the diagrams are super temporary.
In practice in most industries, most folks can probably get away with just knowing the ISA/HASA type stuff and 1..N, * arity, etc. Swimlanes are also somewhat useful as a concept (but so are regular flowcharts).
There's a whole other ton of UML (see 5/7th of Rational Rose around the early 2000s, which is probably gone all deep and crazy now, etc) that is, IMHO, pretty skippable.
I've not even seen some of the "stuff you have to know" (ball and socket, etc) used.
I do think that designs get better when you can draw them, as when they get hard to draw, it becomes conceptually obvious there may be problems.
UML-generation tools run against 1300+ Java classes, producing some huge diagram covered in arrows, definitely don't work too well though :)
If you have a problem with out of date documentation, I think you might have a problem with the development team rather than the documentation.
If you don't think the documentation is useful, please get rid of it. But blaming UML for your documentation problems is like blaming English for forcing you to rewrite comments that no longer make sense.
My sense is that people associate bad processes and bad teams with the tools they were using. Crappy tools exist, and people have had some pretty dumb ideas about how to use UML, but it is useful as a way to express aspects of your design that may lend themselves to a more object-oriented approach. This is something for which no other widespread diagramming language exists.
Whether you're using an iterative or waterfall approach, at some point it is necessary to provide some context for your software units, like explaining how they are associated or what their areas of responsibility are. That's all the documentation does.
Do you have a link to an example of a "realistic"/real-world deployment diagram? In my courses they always looked quite silly and useless, but I wouldn't be surprised if that's due to bad examples...
It can be hard to convey the proper use of a technology/framework in academia when there is not a lot of background to work with. I felt the same about unit testing (shouldn't you know how your code works?) and UML diagrams (shouldn't you know the class/object relationships?) as well. They're all bigger picture than that.
The point at which I started to realize the usefulness of diagrams was when I started working on projects that were more than a few hours' worth of coding (my first internship). UML provides a way to visually represent how your code relates both to itself (classes, interfaces, objects) and to the overall project, for both the developer AND the business analysts who have no idea what inheritance or polymorphism mean. I've included an early draft of a piece of a rather large data model. It won't mean anything to you without the overall diagram, but you see some relationships between the customer's account, application and product: http://s18.postimg.org/gxfkjytft/diag.png
You probably had an example like grocery store checkout software, hotel booking software, or library cataloging software. A UML diagram can show everyone all the possible routes a customer can take, such as starting points, ending points, resume milestones, activities that are and are not allowed, transient and permanent data, and everything in between. This all applies to the academic scenarios listed above, but it's likely more in depth than an assignment you'll have when learning about UML.
Just keep in mind that a large project will have multiple developers of varying expertise with business people who likely cannot write "Hello World!" in any language. UML attempts to bridge the gap between you, your superiors, your subordinates, and the non-tech side of the project.
I was involved with one of those UML -> Code systems back in the early 2000's. It was a company called Nuevis. I had proposed a simple CMS to be written in ASP (or maybe Cold Fusion). We had a project plan, requirements and design documents, and developers ready to start. Some architect said we would be the perfect use case to test this new technology.
We met with them, licensed their system and began to make UML. After several weeks we started to generate code, but it wouldn't run. Several more weeks and we had running code, but were missing key functional requirements. We pulled the plug after 6 months. It turns out that UML class and state diagrams are just not robust enough to explain specific functional details. The system generated thousands of lines of code to do simple things.
We ended up writing it ourselves in ASP, it took us about 3 weeks to code everything.
Nuevis went out of business in the original dot-com bust.
I had a similar experience about 12 years ago - turned out the company in question simply used their UML based methodology as a sales tool. Projects never actually worked that way in reality - what they said they did and what they actually did were completely different. Once I realised that the development process was actually perfectly OK and nobody actually looked at the UML I was fine.... ;-)
I was developing SW in the 1990s (before XP, and then Agile, became popular), and we were supposed to use UML, Rational Rose, etc. However, that was how it was supposed to work, not how it actually worked.
The biggest relief in starting to use XP was that the methodology described how we actually worked, not how we were supposed to work.
I was on a big project in the early aughts (pure XP, pairing, tickets, stories, tests (40,000 of them at the end), etc.), on which we were contractually obliged to A. hire on at least 4 Accenture consultants, and B. purchase 4 Rational Rose seats.
We simply stuck the Accenture clowns alone in a room with Rational Rose, and we never heard from them again (nor did we ever see any UML). Best case scenario.
I used to do product reviews for a magazine, and Rose was the only review software I decided not to keep. And I'd have felt bad giving it to anyone else.
From that time on, any mention of Rose in a job description automatically disqualified it.
I worked on at least one project that used ObjecTime, a predecessor to UML. You were coding in it, in a very ship-in-a-bottle way. The article completely ignores how state machine patterns are done in UML, which ( IMO ) is where you start to see some gain. This being said, just doing state machines in a non-OO way isn't exactly rocket surgery.
It was tolerable. But changes were a bit expensive, and we kept a reference machine for builds, which was not very nice. The environment was hard to replicate - behind the scenes was a sandbox of Smalltalk and that was sort of ugly.
We did a port to Rose, but Rose was not, frankly, as capable.
What's funny-peculiar is - as described in your experience - the whole idea of "architects that can't code." At the very least there's apparently no market for tools to enable that.
In 2013 I encountered a major IC design company that still works in this way: spec -> complete English+diagrams description of all system components -> Verilog -> production.
Part of your story reminds me of the customer of the computer company I worked for that would, every week, print out sales and financial reports that ran for hundreds of pages and about as thick as you mention. The girl who ran the reports told me that the only people who ever looked at the reports were the sales guys who would flip to page 20 and look at the bottom line sales number. It was then filed away in the warehouse, never to be seen again.
Back then, I found it strange that people would cheerfully exchange stories such as yours, stick Dilbert strips all over their offices, swap dailywtf links, and just carry on. I was never able to compartmentalize enough to be happy while also admitting that I lived in some Kafkaesque version of Office Space. I remember being told by my seniors that I'd grow up and grow a thicker skin, and that I'd accept that this is just how things work.
Now, I'm happy to find out that that is indeed not how things HAVE to work, and I'm a lot happier and less conflicted. And now that we have many prominent examples of more functional and humane corporate and engineering cultures, I'm sad for all those people still toiling away in Dilbert-land.
Startups aren't Dilbert land. They literally can't afford it. They have their own pathologies, of course, but they aren't Dilbert pathologies. (And due to scale, if you are motivated, they are often, but not always, more amenable to fixing.)
You can also look around for medium-sized places where software is the primary product; they can typically afford some Dilbertiness, but not too much, or they get eaten by somebody else. Depends on your tolerance, and on your ability to carve your own defensible niche out.
But either way, the key is certainly to start looking. No, working in Dilbert-land is not inevitable, but you'll have to take positive action if you are stuck there now.
Absolutely. In addition to startups, I've worked with quite a few small-to-medium sized agencies, and as you say, they tend to have a small but manageable amount of Dilbertiness.
As I see it, the crucial difference between Dilbert and non-Dilbert is responsiveness to unhappiness and the ability to learn from failure.
In Dilbert-land, an entire engineering department can grow to be silently miserable. Outside of Dilbert-land, many people would raise their voices and with enough protest, major change would happen. That's because management knows that bad morale is deadly, and can't afford to restaff after mass departures. And outside Dilbert-land, people tend to not see themselves as trapped, and really will, indeed, quit.
Also, medium-sized agencies simply cannot afford to have more than 10% of their projects utterly fail - their cash-flow is too limited, and they live on their reputation and good relationships with clients. Unlike the Dilberts, they simply can't walk straight into failure over and over.
As someone in the orbit of Berlin startup culture, I'd say the answer is simple - you have to overcome arrogance and ageism.
Let's be honest, those of us who have built careers on the "modern-web-stack running in AWS" tend to tilt in the bearded-hipster direction. I've interviewed people coming out of corporate behemoths, and they're typically older, have families, have a more conservative demeanor, and are a few years behind the HN curve, tech-wise. It takes some effort to cross that cultural gap and recognize the decades of engineering and inter-personal experience that some of these people have.
One thing that helped me was to move. That's obviously a big step. I was unmarried at the time, no kids, and knew I wanted something different, so I found a job in a new city and just packed up and moved.
If that's not in the cards, maybe you could look slightly farther afield in your geographical area than you would normally, or look farther afield in your technical area than you normally would.
It might also help to work on some either open source or other public-facing projects (assuming you can and they don't conflict with your current employment, etc.). Then you show that you have experience other than just what you might have from your day job.
I'm not sure what you're getting at. If you're implying that I casually leaped to the conclusion that it's a bad thing, I didn't; I was giving guidelines to someone who seemed to be implying (s)he wanted out. If you don't want out, more power to you; I'm not really a big "you have to be passionate about your job" sort myself. (It's nice, but it's a bonus in most cases rather than a necessity.)
I think I miscommunicated my point. I was speaking of bias against people who have been stuck in Dilbert Land for long periods of time and are trying to get out, but being ignored by companies outside Dilbert Land because they are inside it.
Ah, I see. My apologies for misreading. But, alas, I have no magic answers. Except for the fact, I suppose, that there are no magic answers in general, and you may have to do something a bit spectacular if you want out, to prove to those outside that you can function outside. This is where programmers have it easy; it's really easy to get involved in an open source community or something, compared to a lot of the other people who can get stuck in Dilbert-land.
The mid/late 90s were this weird time when people were printing things out like crazy. It was after the first cheap inkjet and laser printers came out. I must have printed out hundreds of Yahoo Maps directions and other things at the time. I remember possessing lots of big three-ring binders filled with various manuals for programming libraries. At the time, printing was almost free, and email and the web were still somewhat lacking in user friendliness for reading docs in the modem days, before PDFs were ubiquitous. These two factors led to this brief printing-things-out trend.
The printing thing stopped when online help got better and more ubiquitous. Now, I usually read docs on the web and only buy a book when I want to majorly invest in learning a new technology.
I remember in the late 90's I'd download the full set of javadocs to have them locally on my computer so I didn't have to wait to browse them over the network.
In fact I had burned them to a cd-rom so I could pass them out easily to co-workers...
Better than printing them out, but still unthinkable now.
No, all the UML you need to know is this: (1) draw and label a box for each class; (2) draw an arrow from one class to another to show dependency; (3) draw a different kind of arrow from one class to another to show inheritance; (4) [bonus material, for super-geniuses like you] use regex-style symbols (*, +, 1, and suchlike) to mark the ends of dependency arrows in order to indicate when you have one-to-one or one-to-many relationships and so forth.
There. My 20+ years of experience in software architecture in various fields from games to networking tells me that you now know enough to work out the classes and their relationships in a large software system.
Don't fuss around with "aggregation" or "composition" or whatever. Don't spell out functions (though occasionally I'll jot one below a line to remind myself what the salient feature of the dependency is). And by no means write the class properties, their types, or their access specifiers (public, protected...)—this is way too much detail. A UML diagram is useful in modeling broad object relationships in a system. If you want to work out what properties a class should have, write the damn class. Any software developer worth his salt can figure out the code from a high-level diagram; don't write the code for him. Or do, but then don't call it an architectural diagram.
I know there's a whole culture of software development where architects design code but don't dirty their hands with writing it, then hand it off to underlings who type it up for them, and so on down some kind of techno-bureaucracy Great Chain of Being. Rubbish. Code architecture is a thing and some kind of diagramming is helpful, but UML as such is the sort of busywork and IRS-style hierarchism that marks bloated government jobs, not real productivity or real teamwork.
Give UML a miss and use something very, very simple.
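To make that concrete, here is a minimal sketch in Python (hypothetical classes, not taken from any project discussed here) of the kind of code such a stripped-down diagram describes: one box per class, one arrow style for dependency, another for inheritance, and a * for one-to-many.

    # Hypothetical classes: the code a box-and-two-arrows diagram would describe.
    from dataclasses import dataclass, field
    from typing import List


    class Document:
        """Base class; Invoice points here with the inheritance-style arrow."""


    @dataclass
    class Customer:
        name: str


    @dataclass
    class LineItem:
        description: str
        amount: float


    @dataclass
    class Invoice(Document):
        # Dependency arrow: Invoice -> Customer.
        customer: Customer
        # One-to-many: the LineItem end of the arrow gets the "*" mark.
        items: List[LineItem] = field(default_factory=list)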
I've generally found UML to be a complete waste of time.
I'd rather outline the major components of a system by drawing (on real paper) simple boxes and lines, or write the code that implements the system.
Not sure what code-as-picture achieves - it generally has worse tooling (less editable, less versionable, etc.) and tends to be used by 'architects' who don't write code, only for that UML to be essentially ignored by the coders on the ground.
As long as the "simple boxes and lines" describe the functionality of your design at the level you wish to communicate, then you're doing good. You can stop reading here.
The problem is that complex systems need complex design. As your design gets more sophisticated, you need more sophisticated ways to communicate the design. Concepts like dependency, multiplicity, inheritance, inclusion, asynchronous message, exception, timed event are all things you can represent with boxes and lines, but unless you adorn the diagram with a huge amount of text you're not going to capture the distinctions between those different types of relationship.
It's not unlike any other language: I can use the word thought to describe something in my head that isn't physically manifested, but words like lie, memory, hypothesis, idea, belief, image, nostalgia, forecast, imagination are all used to distinguish between types of thoughts. If I used only simple words like "thought" then I wouldn't be able to express anything more sophisticated while still being concise.
In short, if you want to graphically describe something complex, you can either use a sophisticated visual language to do it concisely and unambiguously, or you can use a simple visual language and write tonnes of documentation to support it, with all the problems of attention, ambiguity, and rot that go along with it.
> The problem is that complex systems need complex design. As your design gets more sophisticated, you need more sophisticated ways to communicate the design
I've never found any need for a third, intermediate level of sophistication that sits between code and natural language + informal diagrams.
Really? The device you're reading this on probably contains more complexity than it's possible for one person to understand in a lifetime, and that's one device.
It's incredibly naive (or at best disingenuous) to assume that complex systems can be reduced by thinking and refactoring. Perhaps some can, but when you consider many of the software artefacts we use daily (seeing as that's the context of this subthread), such as operating systems, virtualization platforms, or even good ol' clunky enterprise applications, you can't just wish the complexity away with a bit of hard work.
I don't think anyone is arguing that complex systems don't need some kind of up front high-level design - what we are arguing about is whether UML is an effective format for design artifacts.
I'd be pretty surprised if any of the code that I am using to read this was designed with detailed UML diagrams.
>Really? The device you're reading this on probably contains more complexity than it's possible for one person to understand in a lifetime, and that's one device.
It wouldn't have been possible at all if the pieces that make it up weren't loosely coupled.
>It's incredibly naive (or at best disingenuous) to assume that complex systems can be reduced by thinking and refactoring.
It's naive to assume that you can't. Loose coupling pretty much only comes as a result of extensive refactoring.
...and one of the ways you can communicate such coupling is by diagramming the (complex) system's design.
Google for "linux kernel diagram"[1]. It's quite well designed, displays loose coupling, high cohesion, and all the other things we like about good complex systems. If I want to dig in and start working on a module, I'd still have to understand the relationship between all the bits I work directly with (and probably some that are further away).
Exactly - I think the more complex the underlying system, the more effort has to go into finding suitably high-level abstractions - you have to step away from the detail, and that's something that UML seems to have a really hard time doing.
We already have a perfectly good way of representing the details of a system - it's called source code.
> We already have a perfectly good way of representing the details of a system - it's called source code.
Interestingly enough, that's a bit of a false dichotomy.
Imagine it's your first day on the team, and there's thirty classes (or thirty functions) in this software package, and you want to understand their inheritance/composition, what's a good way to take a step back and just see how they relate to each other? You can browse some interface files or some source files, accumulating a model in your head. But the visual throughput into your brain is just so much better when you can see it in a diagram.
Regardless of whether a human draws the UML representation of the design in advance of the implementation or a machine deduces this from the existing source code, you can still benefit from a class diagram that will cut out the noise of the implementation itself. As long as we're drawing a diagram, why not have a common way to represent the concepts we use in software?
Nothing about UML specifies its use in relation to when or how you wrote the code. Like I said, it's a false dichotomy to suggest that we can have source code or UML but not both. Yes, UML diagrams or any other design docs rot quickly. But like you said, we can generate diagrams after the fact too (or in lieu of ever writing UML diagrams before coding).
> perfectly good way of representing the details of a system - it's called source code.
...
> simply use a tool to generate a class diagram from the code
Once you resolve to generate that class diagram you have a choice in how to represent it. UML is just one way, proposed as a standard way, to represent that visually.
Except when you are selling a closed-source binary which must be specified down to very small details (e.g. accounting systems from an external vendor; sometimes they are not buying the source code - and you don't want to sell it either, since they are going to replace you...).
Your statement is only somewhat correct. What complex systems need isn't a single, complex design document but rather levels of simple, intuitive design. It's like OOP--the basic principle of OOP is that you can manage (but not eliminate!) complexity by sharding it into small-ish objects that only need to know about the objects they interact with. Similarly, in a large system, you'd want a very abstract, high-level design, and as you peer in closer to the code, you want increasingly concrete design.
The problem of UML is that it tries to be the language of all these levels of design (and even different stages of design!), to the point that the highest-level design document is trying to be something so completely specified that there's no abstraction.
> I'd rather outline the major components of a system by drawing (on real paper) simple boxes and lines,
One of the most constructive things you can do when designing software as a part of a team is to review these diagrams with the team. Inevitably, if you create boxes and lines, you'll need some sort of legend that identifies the relationships established by the visual style of boxes and lines. UML is intended to be a common legend, like the schematic representation of circuits (resistors, capacitors, diodes, grounds, etc).
> Not sure what code-as-picture achieves - it generally has worse tooling (less editable, less versionable, etc.) and tends to be used by 'architects' who don't write code, only for that UML to be essentially ignored by the coders on the ground.
An error that happens more often than we'd like to admit is when we totally mis-assess the requirements or omit big portions of an element's design. Sketching a quick diagram that shows "the UserAccount has a user_name, user_id, and a password_hash element" allows the team to sanity check the design approach -- "Oh, but a UserAccount also needs an email_addr element." "Are you kidding? How would we email them, there's no interface for that on this system." It's sad, but sometimes entire (enormous) features like this don't get caught until much later in the design cycle.
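For illustration only, a hypothetical Python stub of the UserAccount element described above, with the gap that kind of review is meant to catch:

    # Hypothetical stub of the UserAccount element from the example above.
    from dataclasses import dataclass


    @dataclass
    class UserAccount:
        user_name: str
        user_id: int
        password_hash: str
        # The review catch: there is no email_addr here, so "just email them"
        # isn't possible until the design grows one.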
Having conversations about what the system being architected/designed/planned is going to do can be very useful.
There are definitely better techniques than waiting until someone quibbles over a field in a lines-and-boxes design session to pull that kind of insight out, though. Just sitting around and telling stories about what the user will be able to do, and how they will be able to do it, can work wonders.
UML is best when used informally. I find that some bits and pieces, like class and sequence diagrams, are useful for sketching ideas and communicating them.
But of course, that is exactly the opposite of what UML set out to achieve: a system so formal and complete that it could be used to generate code automatically.
I totally agree with picking bits and pieces. My favs were the state diagrams. I remember a couple of times when they saved the day. It was a matter of communication with the customers, a way to show them that their initial description was light years from a real specification and to make them commit to the agreed diagram.
Believe it or not, the customers understood it perfectly after some minutes of drawing and deleting lines. There was never a single complaint about that module, and only minor bugs, none affecting the functionality.
I hated use-case diagrams, only useful for CYA purposes. The class diagrams that I've seen in the wild were E/R models translated to UML, but they were most of the documentation for some projects, so it was better than nothing.
The traceability systems from requirements to complete UML diagrams to code were a bunch of good intentions that fortunately never materialized.
UML cannot be appreciated out of its context, both socially and technologically. It was created out of big projects at a time when big things were the norm, and then became a product with its own dose of fad, almost dogma (let's do everything in UML, ontologies, sequences, deployments; precompile it, compile it, run it...).
I found it to be a pain as a student (it clearly was a fad at that point), but every time I go back to shipped code in mainstream paradigms, I understand why people went this way. It's as boring as it is intractable. We don't want to code at that level, we want abstractions, but these paradigms didn't help, so we needed diagrams. And it's mostly an offspring of OOP, which helped reduce C++-like complexity, but not for free.
IMO UML boils down to a visual type checker. We can lay out things more efficiently than with linear text, and can appreciate the relationships between parts. But in a way, that's what a type checker does. I remember never needing a diagram (for medium-sized single-person programs, that is) in Eclipse, since it could look at the whole system for me and show contradictions or suggest options.
I'm now very regularly tempted to do things with a bit of empty interface logic checking, FP-ish minded. You can still write this as text, and as long as the types flow logically it's OK.
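A rough sketch of what that might look like in practice, assuming Python type hints and a checker like mypy stand in for the diagram (the names are hypothetical):

    # Rough sketch (hypothetical names): interfaces with no implementation,
    # verified by a type checker instead of reviewed on a diagram.
    from typing import Protocol


    class Readable(Protocol):
        def read(self) -> bytes: ...


    class Writable(Protocol):
        def write(self, data: bytes) -> None: ...


    def copy(src: Readable, dst: Writable) -> None:
        # The annotations are the "arrows"; wiring a writer into the src
        # slot is a contradiction the checker reports, no picture needed.
        dst.write(src.read())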
UML emerged out of the whole CASE (Computer Aided Software Engineering) movement. (Now there's a term you don't hear much any more!) Things were never well standardized and UML was an attempt to bring some sanity to the whole undertaking.
I used to work on a CASE tool early in my career. I was always struck by the fact that we never ate our own dog food; that is, we never employed the tool we were working on to guide its own construction. This was the first time my eyes were opened to the fact that perhaps there was a lot of junk being built :)
CASE itself I believe was an attempt to replicate for assembling bits some of the benefits CAD/CAM was bringing to assembling atoms.
Yes, CASE grew out of CAD/CAM. There is probably a whole field of anthropological work to be done as to how this came about and why nobody talks about CASE any more.
Although the latter is partly because the idea of doing 'SE' without the 'CA' is now inconceivable, even to senior management.
I never connected CASE with CAD, I stopped at Booch's history.
Same about dogfooding. When I realized RSA wasn't (at least when I was working on it) developed with RSA or any UML I felt weird... Doubly weird since other tools like IBM Jazz (team collaboration) used themselves to ensure a certain dose of value.
UML, if you don't want to generate code from it, is nothing more than boxes and lines. You just use a common set of lines and boxes, so that anybody else who knows about UML understands your doodles.
Nobody forces you to use software to draw UML diagrams and nobody forces you to include every aspect of your objects in the UML diagram. The basic idea of UML is pretty sane, many people just overdo it.
Right. The problem is that formal UML diagrams tend to be part of formal documentation - i.e., outdated from the moment they are saved to disk (unless they are used for reproducible code generation, in which case.. good luck).
Informal UML diagrams - dynamic box-and-line relationship diagrams scrawled on a whiteboard where one arrow has been drawn in extra thick to emphasize a point; sequence diagrams where one colleague has drawn the sequence arrows as they work today and another has scribbled over the top the arrows as they should work in a different dry-wipe color; those are an appropriate use for "sort-of-UML" - as tools in rapid communication of ideas. My phone's full of photos of that kind of UML. But at that level of formality, nobody should mind if you get the open or filled diamond arrowheads wrong, or you use arrows for one to many relationships instead of inheritance, so long as everybody working on the diagram understands what is meant in that moment.
UML is a formal standard for back-of-the-envelope doodles, and that is exactly as ridiculous as it sounds.
I found class diagrams quite wasteful as well; you need loads of time just to fill in redundant details, and any issue you hit in development (e.g. you got the initialization order wrong on a not-so-well-documented third-party lib) is a mess to backtrack.
However, I found that a good sequence diagram here and there and a deployment diagram could really ease communication when comparing the critical points of two or more different solutions - if done at the right level of detail.
I get your opinion from a programmer's point of view. However, from the big-picture point of view, either the architect's or the client's (if she (even partially) gets UML), it could be really useful.
E.g. let's consider working for a large financial institution, designing and/or implementing very difficult workflows/algorithms: what would you use instead? In my experience most clients don't take the time to read half (if any) of the agreed spec, let alone understand it. In that case an activity/state diagram really helped a lot.
And on "not versionable": you are right only to an extent (e.g. Visio diagrams exported to JPG...). However, there are tools/methods which can be versioned (e.g. generating UML via Graphviz/dot, and AFAIK Enterprise Architect supports versioning too).
"..I'd rather outline the major components of a system by drawing (on real paper) simple boxes and lines, or write the code that implements the system."
I understand you, but the problem with simple drawings is that they are not a common, broadly understood language.
I've found that diagrams generated from code or APIs can be useful sometimes - Graphviz is great for that. But I never really go beyond "simple boxes and lines" myself when sketching things.
When I left university I was convinced that software would be better if more projects used UML. Then I learned that UML didn't help in many situations, and I more or less stopped using it. But from time to time UML is a very useful tool. Recently I worked on a system with a very complex data structure, and UML really helped us to order the mess and to communicate with other parts of the team.
Another good use case for UML is the visualization of complex flows in frameworks.
I gave up on UML for the exact same reason about 6-7 years ago and have written over half-a-million lines of code in three different languages since then. Never missed it.
++ E-R (entity-relationship) diagrams. I find it easier to look at boxes for each table and follow the lines signifying relationships to other boxes. The "crow's feet" can signify 1-to-many. The diagram is easier than reading a sequential list of SQL CREATE TABLE statements, making a mental note of "FOREIGN KEY" strings, and mentally backtracking to the parent table.
++ swim lanes to show how the "state" of a system is supposed to change with "time". This can succinctly show how data "flows" without having to actually run a program and watch variables in a debugger to manually recreate the "swim lane" in your head.
++ truth tables to summarize combinations of valid/invalid business rules and associated side effects. A grid is easier than parsing a long (and often nested) list of if/then/else/switch statements.
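A tiny hypothetical sketch of that last point, with the rule grid living in data rather than in nested conditionals (Python, illustrative names only):

    # Hypothetical business-rule grid: each row of the "truth table" maps a
    # combination of conditions to an outcome, instead of nested if/else.
    RULES = {
        # (is_member, over_limit, flagged): order allowed?
        (True,  False, False): True,
        (True,  True,  False): False,
        (True,  False, True):  False,
        (False, False, False): False,
    }


    def order_allowed(is_member: bool, over_limit: bool, flagged: bool) -> bool:
        # Combinations not listed in the grid default to "not allowed".
        return RULES.get((is_member, over_limit, flagged), False)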
As for UML, the notation never seems to be that succinct or helpful to me. On the surface level, it seems that UML (for code) should have the same return-on-investment as E-R (for db tables) but it doesn't in my experience.
I also wonder if there is a cultural component to UML usage. It doesn't seem like tech companies (such as Microsoft/Google/Amazon/Ubisoft/etc) internally use UML as a prerequisite step for building systems. On the other hand, I could see more UML usage at non-tech companies (such as banks/manufacturing/government) building line-of-business CRUD apps. Grady Booch (Booch & UML notation) did consulting about software methodology at non-tech companies so that may have been a factor.
I love UML. UML is overused. More people need to know UML. The less UML you do, the better.
I believe all of these statements to be true.
I had a contract many years ago with a large insurer. Their development process basically consisted of drawing really complex UML diagrams, then hitting the Big Red Button and having the modeling tool generate 40,000 lines of framework code. The chief architect explained to me that really the only work required was just a tiny bit of business logic in the appropriate places.
Fortunately I was not part of the main dev team, which for some strange reason (at least in the lead architect's mind) had the damnedest time with this system. My job was to create an internal permissions system. Given app X, user Y, and action Z, was the action allowed or not.
I looked at the problem for a while, and no matter how I thought about it, to me I had three lookup tables and one method. Boom, I'm done.
The lead architect wanted me to still draw a diagram with one class, push the button, and get the 40,000 lines of code. For some reason, this did not appeal to me.
Took me about 3 weeks to convince him that really 20 lines or so of code was all we needed. I still had to draw the diagram, though.
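For context, the "three lookup tables and one method" shape is roughly the following, a hypothetical Python sketch rather than the actual system:

    # Hypothetical sketch of "three lookup tables and one method":
    # given app X, user Y, and action Z, is the action allowed?
    ROLES = {"alice": "admin", "bob": "analyst"}                # user -> role
    APP_ACTIONS = {("billing", "export"), ("billing", "view")}  # valid (app, action) pairs
    GRANTS = {("admin", "billing", "export"),                   # role -> permitted (app, action)
              ("analyst", "billing", "view")}


    def is_allowed(app: str, user: str, action: str) -> bool:
        role = ROLES.get(user)
        return (app, action) in APP_ACTIONS and (role, app, action) in GRANTS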
That's the horror story -- one among dozens I have. But on the flip side, I've been with teams that interviewed the customer while sketching out a domain model. Since we all understood UML, a quick and lightweight sketch using the appropriate notation got agreement on a ton of detail just taking 30 minutes or so. That would have been very difficult using a conversation or word processor. Sketching without some common lightweight understanding could have led to rookie errors.
There is nothing in this world better for getting quick technical agreement on complex problems than group sketching using lightweight UML. The trick is sticking to the absolute minimum.
I tend to agree. I studied UML and earned certs. At the time, I really liked it and I had fun drawing diagrams for both new and existing projects. Now, I only use it for sketching and I think the vocabulary is the key: it is more important for you and your team to know the important terms than the nuances of the actual spec. For example, the transitions on activity diagrams have sophisticated semantics for describing events, functions, etc. that may be more cumbersome to document and explain than, for example, a label just saying "convert to binary".
In my career, I have only seen a few things that caused religious fervor on any significant scale: XML and UML. But I would like to know if the level of specification around these languages is the off-putting aspect or if it is something else. Did mathematicians of the Newton-Leibniz era exchange letters denouncing proofs or trolling with irreducible polynomials?
I agree. Ten to fifteen years ago I saw a stream of UML horror stories. The tools were expensive, complex, and you never needed or wanted 90% of what they could do. But PMs loved to show a stack of paperwork to the client.
Now something like Lucidchart is a great way to knock out some swim lanes or ER diagrams without the nonsense of automatically generated code. You can use the diagrams to get your team and the client on the same page without UML becoming a religion.
Plot twist: this article doesn't have all the UML you need to know, it's just the class diagram.
Particularly, for my personal projects, I use the use-case diagram to map the requirements and the features my application will have, associated with my prototypes. Other diagrams, like the class diagram, I usually use just to map the domain before developing the persistence. This is how all my projects start, even if I am working alone. It works for me and it's part of my creative process.
All the "Software Engineering" classes at university were just UML. Many years I thought SE was just that, UML.
When I did my first job interview I was rather relieved that the hiring manager wanted me to draw UML. He was very pleased with my UML skills, but he always said, I shouldn't draw it with all the details, just the basics are enough :D
That was 2011... I never had to use UML in an interview again. But I often used it as a documentation tool for a big PHP codebase, I had to manage.
UML and ER diagrams are often overkill when designing an app. But they are nice for visualizing what is going on in legacy stuff.
The complete overapplication of UML for many years gave UML an undeservedly bad name. The top comments in this thread are testament to that: many programmers are simply all too happy to go "haha! UML! that's for enterprise losers in suits who prefer paper over working code!"
Thing is, the core elements of UML are very useful in communicating a design or an idea. Class diagrams are a great way to discuss an OO-ish codebase in front of a whiteboard (or any data model, really). When you do that, it really helps when everybody knows that an arrow in static UML diagram types means "dependency" and not "the data flows from here to there".
Similarly, I still haven't seen a better way to visualise state than with a UML state chart.
It's also very nice if you can draw a UML object diagram that people understand (looks like a class diagram, except you basically draw a hypothetical runtime situation of instantiated classes and you underline the object identifier names). This works best when people understand that the picture on the left is a class diagram (design time) and the one on the right is an object diagram (runtime example) of the same classes. This is not complicated stuff, but it doesn't really work as well when half the team thinks UML is for losers.
Now, bear with me, I'll be the first to agree, UML is a bloated piece of shit. Package diagrams, wtf, who needs that? Use case diagrams that show which use cases are specified, instead of how the use cases go - seriously? Activity diagrams so you can draw a 5-line method on an entire sheet of paper, big fucking what the hell were you guys thinking?? Why do I even know what this stuff is? What a waste of time - even the decent diagram types have 60% bullshit syntax and only 40% useful stuff. And message sequence charts are nice enough for protocols but impossible to draw right.
But to dismiss UML just because some enterprise architects went a little overboard in 2002 is a bit like dismissing all of OOP because 15-level inheritance hierarchies used to be hip.
I wish we could agree on a tiny subset of UML that actually makes sense, and all learn that. This post makes a good start for class diagrams, although IMO even the ball-and-socket notation is overblown nonsense from a time long gone. Maybe we should do this, and give it a separate name.
On a mildly related note, one thing I like about OOP is that you can draw pictures of it easily. Does anyone here know of a good way to visualize functional code structure? You can draw a dependency chart of modules of functions but that only gets you so far.
After many, many years not touching UML, I recently had to design some state charts - because it's something really helpful in an app we're working on at the moment - and found that using a UML tool for that is very useful. So I agree that it's something good, if taken with a big grain of salt.
Too bad he skipped sequence diagrams, saying that he had already covered them in class.
Years ago I co-authored a book on UML, but the only UML diagrams that I still use are sequence diagrams which I think are great for explaining interactions between objects or separate services.
Yup, even if you never write OOP code, this is a somewhat common language you will encounter between colleagues, so it is worth knowing exactly what is on this page.
Personally I use sketch-like UML quite a lot and find it very helpful for clarifying complex environments or ideas that are not clear yet. Most of the time sketches are just boxes, circles, and lines, but those communicate and clarify the problem for others very well. I don't spend a lot of time when sketching; a system can be described graphically very quickly, just to get the idea out, or it usually gets too detailed. We have also built a tool that helps sketch systems with remote teammates (https://sketchboard.me).
Graphical representations are useful for representing spatial relationships because then you can take advantage of the inverse-GPU in the human visual cortex to do a lot of computation for you. But software doesn't have spatial relationships because it doesn't exist in space. So trying to represent software concepts graphically is generally doomed to fail. There are a few exceptions, like code indentation, but there's a reason that flowcharts aren't used much any more.
I've never figured out how to use UML abstractions to fix client and/or user production bugs, or met anyone else who could, so who the hell has the time?
I'm just curious, and I'm not trying to be snide, but is there any spec that OMG has produced that people actually like and still use? (They are also the makers of CORBA.)
I live in the Boston Rt. 128 area and I pass OMG's building all the time, and I just have no idea how they are still in business (they are near TripAdvisor's new, completely awesome building).
I wonder how many massive companies continuously donate to OMG and do not realize it.
The general feeling about UML is that it's overkill for most projects and actively harmful when used to generate code. On the other hand, most agree that parts of UML are useful as communication tools.
However, UML was designed as a standard; near-UML is not UML. Ergo, UML is useless.