
You've hit the nail perfectly with most of those points.

For all its faults, the older style of UI and icons made better use of screen space and conveyed much more information to the user. That should always take priority over a "designer" or "artist" imposing a "style" on the system. Any UI is meant to facilitate work, not create work in trying to understand whatever the latest trend is.

It's quite an uphill struggle, but I've managed to salvage some better UI work from the past on my present GNU/Linux desktop (my loathing for flat and padded knows no bounds). But I'd prefer it if the user at least had the option of a coloured, 3d-textured UI in modern applications. Life itself is 3D, we should once again embrace that on the screen.




Designer rant. I've been a computer user for 35 years, developer for 25 years (10 full-time professionally), *nix user for 20 years, and am currently in a degree program as a designer, and I've got to say that I disagree entirely. I think you're conflating your personal taste with functional design, and then again conflating functional design with styling and decoration. Interface design principles— as a subset of industrial design— have been theorized about, extensively tested, and refined for decades. Stylistic trends in visual representations of ideas in icons, etc. are much less clear-cut.

At least in good quality commercial software, I can assure you that a designer's primary concern is usability. Visual cohesion and hierarchy, the flow of the eye around the screen, determining what a user actually needs to see and interact with during a task, and moving unnecessary elements to different views are all valuable tools a designer has to make a program more usable. They aren't merely stylists imposing trendy visuals on otherwise entirely usable software, and are certainly not "artists."

However, open-source interfaces are sometimes created by inexperienced enthusiasts or developers trying to "make it look nice" rather than experienced designers. Projects often don't impose the same quality control for interface design decisions as they do for code design decisions. That might lead some "artists" or "designers" to do things that designers (sans-quotes) should be doing. Designers should be more involved in open-source projects and open-source projects should court design contributions and stop downplaying the importance of good interface design.

Most users simply don't share your tolerance (preference?) for visually crowded interfaces that aim to cram as many elements onto a screen as you can possibly fit, with little editing or regard for what users actually 'need' on a screen for a given task. Citing old-school Linux UIs as a high point in UI design portrays a perspective that doesn't have much in common with a typical user's.


It wouldn't surprise me if the periodic stylistic changes are analogous to fashion industry periodic makeovers, apparently a tool to keep users engaged and purchasing gadgets (plus a lot of sociological reasons) -- not necessarily for superiority to what came before in any absolute sense.


I bet every serious design change aims to alleviate some of the pains that the previous design was producing. I bet it's always an honest "let's make it right this time" attempt.

But every new design also contains certain compromises, and is also found to have pain points that were not anticipated.

So the cycle continues.


On the level of the designers I agree -- each designer is without doubt doing what he thinks is best, but on the larger scale that prompts redesign in the first place, I'm skeptical.

For example, would it be acceptable/comfortable for a designer to conclude, "Everything is fine as is, we did a pretty good job last time around, let's ship it like it is"? If not, that reflects a systemic bias toward introducing change for the sake of change.


Absolutely. As is the case with architectural styles, features and forms in footwear, clothing materials, popular branding color palettes and all sorts of other things like that, graphical interface norms certainly follow trends. Skeuomorphism, for example.


>Most users simply don't share your tolerance (preference?) for visually crowded interfaces that aim to cram as many elements on a screen as you can possibly fit with little editing or regard for what users actually 'need' on a screen for a given task.

I don't think anyone has a preference for needlessly crammed UIs. But the question is what you prioritise when a task really does benefit from seeing a lot of information at a glance.

There's an easy choice. There's a hard choice. And there's an ugly choice.

The easy choice is to "clean up" the interface and just remove stuff even if it makes the interface less useful. Apparently, that's a popular choice with designers. It can often be justified by pointing to "most users", because that creates a statistical bias toward less demanding requirements.

The hard (and expensive) choice is to think deeply about data visualisation, gain a detailed understanding of the task at hand to the point where the designer has to become a user themselves, and communicate with the most demanding users until you get something functional and not crammed or until you learn what customisation options are needed.

When the hard choice fails or isn't an option for economic reasons, that's when only ugly choices remain, and that's when I would rather have a developer who is intimately familiar with the task come up with a UI that is at least fit for purpose.


I'm not trying to be a jerk here, but this is an incredibly glib oversimplification of design. It's pretty pervasive too, especially in FOSS, which is why FOSS interfaces often suck so bad.

Not all coding is software design, and not every arrangement of elements in an interface is interface design. If someone is taking an interface and making it "look nice" by removing beneficial components, then what they are doing is not design. Design is a process and a mentality, not a singular activity.

What you describe as the hard choice is merely a point (not even an extreme one) on the spectrum of what constitutes interface design. Broadly speaking, the first step, always, is to figure out what the user's goals are, to whatever extent possible. You might be able to do in-depth user research with focus groups and eye-tracking and put together user stories with A/B testing etc. etc. etc. You might only be able to do some personal research and play around with the software a bit and draw some diagrams. The second step is to figure out how you can help your users accomplish those goals most effectively through the arrangement and functionality of the on-screen elements. Without your primary concern being what the user actually needs, you're just decorating, or maybe organizing.

Reducing the amount of data on a screen isn't an end in itself. If it contributes to the user performing their task better, then great. If it inhibits it, it's the wrong choice. Any person with formal design training should be perfectly comfortable arranging a large number of complex elements and a lot of information on a screen in a comprehensible way. See magazines, train timetables, newspapers, etc.



