I don't really care about "fun", but I just can't find the words to explain how happy I am that there's a chance we might put all this flat madness behind us. I'm looking forward to:
* Not having this conversation with my mom (who, at 60+, is remarkably adept with computers but hey, we all run into trouble sometimes) over the phone anymore:
Me: Okay mom, now press the "Edit" button
Mom: Which one's the edit button?
Me: Uh, the... um, it's the one that kinda looks like... a bunch of lines in a rectangle, I guess?
Mom: Alex they all look like lines in a rectangle
* Not hovering everything for five minutes until I can figure out what I can click and what I can't click (for bonus points: not trying to click something for five minutes like an idiot, only to find out it's a label, not a disabled button)
* Buttons, tree views, tabs and all that having relief borders again, not necessarily because I like that, but because without it, the only way to "isolate" the information in them is using whitespace, and seven years of flat design hell later I can fit only slightly more content on my 1920x1080 screen than I could fit on my Amiga's 1024x768 screen 20+ years ago.
* Being able to tell file types apart from each other when I'm browsing at minimum zoom level -- which is how you end up browsing any collection of more than a few dozen items or so
* Being able to tell application icons apart based on what's in the icons, not based on colours. People keep parroting this idea that "symbolic icons are easy to tell apart from each other because they're so simple", but when all icons are a letter or some anonymous symbol on a blob of colour, they all look the same when you put a few dozen of them next to each other. Maybe they're easy to tell apart when you have like four 128x128px icons, but when you have 40 of them in a tiny dock at the bottom of your screen, the only useful information they retain is what colour they are.
Honestly, I was actually excited by flat design when I first saw it, but that was on Windows Phone 7. And WP7 had an exceptionally rigid and consistent approach to flat design. If something was colored, it was a button. If you wanted to color something that wasn't a button? Too bad for you. Basically, on WP7 you didn't get to "design" at all - the UI framework said "this is what your widgets look like; if you don't like it, you can make software for a different operating system."
Ironically, I think "bringing back the fun" is what destroyed flat design, because people who wanted to make their designs "fun" destroyed the design-language.
Overall, I don't care if you go flat or otherwise - I want the WP7 attitude in my software. The platform provides a rigid design-language, and the individual applications do not deviate from it. Unfortunately, the web is fundamentally a platform designed for styled documents, not GUIs, so we don't have that kind of standardization available - and the web is the center of modern design.
Windows Phone had amazing human interface guidelines for hand-held touch devices. I still believe that Live Tiles are unmatched as a home screen pattern.
> The platform provides a rigid design-language, and the individual applications do not deviate from it
The problem here is twofold:
1. Apps actually deviate a lot from the guidelines for their platform. A lot of posts from grumpy.website et al. concern apps (even first-party ones!), not just websites. For instance, title bars that are undraggable/hidden/used as toolbars are a worrying trend.
2. The design language itself may be flawed, so following it blindly would result in a worse UX. (See: Material design and its weird, obnoxious floating button)
Can’t wait for the flat design trend to return in 4-5 years, after this light nudge towards flat-plus-a-little-depth leads us right back to yellow legal pads with faux leather binding, except all the elements look flatter.
There’s precious little “new frontier” left to chase in 2D screen design. Small touches like expanded drag, swipe, etc. would be nice. Design towards doing, not towards fostering emotional attachment to the design. The epistemology has come unhinged from its ontology for the sake of theoretical wankery, and here we are looking to escape it. Design no longer serves the true purpose of the gadget: to do utilitarian work. We’re bored with flat, and they need to keep us attached emotionally, so here’s our new season of designs!
I’d prefer we return to utilitarian UI of Win2k, OS9, so I can get a job done and not want to be “in love” with a screen
If you look deeper it’s the exact opposite. “Non-flat” design is utilitarian. Win2k, NextStep, Amiga, OS9 all used skeuomorphism and visual affordances in a very straightforward fashion.
You've hit the nail perfectly with most of those points.
For all its faults, the older styles of UI and icons made better use of screen space and conveyed much more information to the user. That should always have priority over a "designer" or "artist" imposing a "style" on the system. Any UI is meant to facilitate work, not create work in trying to understand whatever new trend.
It's quite an uphill struggle, but I've managed to salvage some better UI work from the past on my present GNU/Linux desktop (my loathing for flat and padded knows no bounds). But I'd prefer it if the user at least had the option of a coloured, 3d-textured UI in modern applications. Life itself is 3D, we should once again embrace that on the screen.
Designer rant. I've been a computer user for 35 years, developer for 25 years (10 full-time professionally), *nix user for 20 years, and am currently in a degree program as a designer, and I've got to say that I disagree entirely. I think you're conflating your personal taste with functional design, and then again conflating functional design with styling and decoration. Interface design principles— as a subset of industrial design— have been theorized about, extensively tested, and refined for decades. Stylistic trends in visual representations of ideas in icons, etc. are much less clearcut.
At least in good quality commercial software, I can assure you that a designer's primary concern is usability. Visual cohesion and hierarchy, the flow of the eye around the screen, determining what a user actually needs to see and interact with during a task, and moving unnecessary elements to different views are all valuable tools a designer has to make a program more usable. They aren't merely stylists imposing trendy visuals on otherwise entirely usable software, and are certainly not "artists."
However, open-source interfaces are sometimes created by inexperienced enthusiasts or developers trying to "make it look nice" rather than experienced designers. Projects often don't impose the same quality control for interface design decisions as they do for code design decisions. That might lead some "artists" or "designers" to do things that designers (sans-quotes) should be doing. Designers should be more involved in open-source projects and open-source projects should court design contributions and stop downplaying the importance of good interface design.
Most users simply don't share your tolerance (preference?) for visually crowded interfaces that aim to cram as many elements on a screen as you can possibly fit, with little editing or regard for what users actually 'need' on a screen for a given task. Your citing old school Linux UIs as a high point in UI design portrays a perspective that doesn't have much in common with that of a typical user.
It wouldn't surprise me if the periodic stylistic changes are analogous to fashion industry periodic makeovers, apparently a tool to keep users engaged and purchasing gadgets (plus a lot of sociological reasons) -- not necessarily for superiority to what came before in any absolute sense.
I bet every serious design change aims to alleviate some of the pains that the previous design was producing. I bet it's always an honest "let's make it right this time" attempt.
But every new design also contains certain compromises, and is also found to have pain points that were not anticipated.
On the level of the designers I agree -- each designer is without doubt doing what he thinks is best, but on the larger scale that prompts redesign in the first place, I'm skeptical.
For example, would it be acceptable/comfortable for the designer to conclude "Everything is fine as is, we've done a pretty good job last time around, let's ship it like it is." ? If not that reflects a systemic bias to introduce changes in the name of changing.
Absolutely. As is the case with architectural styles, features and forms in footwear, clothing materials, popular branding color palettes and all sorts of other things like that, graphical interface norms certainly follow trends. Skeuomorphism, for example.
>Most users simply don't share your tolerance (preference?) for visually crowded interfaces that aim to cram as many elements on a screen as you can possibly fit with little editing or regard for what users actually 'need' on a screen for a given task.
I don't think anyone has a preference for needlessly crammed UIs. But the question is what you prioritise when a task really does benefit from seeing a lot of information at a glance.
There's an easy choice. There's a hard choice. And there's an ugly choice.
The easy choice is to "clean up" the interface and just remove stuff even if it makes the interface less useful. Apparently, that's a popular choice with designers. It can often be justified by pointing to "most users", because that creates a statistical bias toward less demanding requirements.
The hard (and expensive) choice is to think deeply about data visualisation, gain a detailed understanding of the task at hand to the point where the designer has to become a user themselves, and communicate with the most demanding users until you get something functional and not crammed or until you learn what customisation options are needed.
When the hard choice fails or isn't an option for economic reasons, that's when only ugly choices remain, and that's when I would rather have a developer who is intimately familiar with the task come up with a UI that is at least fit for purpose.
I'm not trying to be a jerk here, but this is an incredibly glib oversimplification of design. It's pretty pervasive too, especially in FOSS, which is why FOSS interfaces often suck so bad.
Not all coding is software design, and not all arrangement of elements in an interface is interface design. If someone is taking an interface and making it "look nice" by removing beneficial components, then what they are doing is not design. Design is a process and a mentality, not a singular activity.
What you describe as the hard choice is merely a point (not even an extreme one) on the spectrum of what constitutes interface design. Broadly speaking, the first step, always, is to figure out what the user's goals are, to whatever extent possible. You might be able to do in-depth user research with focus groups and eye-tracking and put together user stories with A/B testing etc. etc. etc. You might only be able to do some personal research and play around with the software a bit and draw some diagrams. The second step is to figure out how you can help your users accomplish those goals most effectively through the arrangement and functionality of the on-screen elements. Without your primary concern being what the user actually needs, you're just decorating, or maybe organizing.
Reducing the amount of data on a screen isn't an end in itself. If it contributes to the user performing their task better, then great. If it inhibits it, it's the wrong choice. Any person with formal design training should be perfectly comfortable arranging a large number of complex elements and pieces of information on a screen in a comprehensible way. See magazines, train timetables, newspapers, etc.
100% with you. I absolutely hate Material design; it's the least intuitive way to use a computer. Paired with touch-first interfaces, it's one of the worst things to happen to computers.
I don't hate material design itself, because there's plenty of guidelines for condensed content. It's just that web designers and app artists don't seem to understand that not everything needs whitespace for miles.
A condensed table in material design strikes a nice balance between whitespace and information density in my opinion. It's just sad that nobody uses it.
I'm also that person that has their DPI set to "small" on my phone, though. I bought a phone with nearly 6 inches of screen diagonal and I could barely see any more information on it than on my old 3.5" phone from almost a decade ago. Cranked it down in the settings to just above the point where apps start displaying their tablet UI on my phone, and I've been very happy with it.
We need to take away design from artists and hand it back to specialised UI designers in my opinion. It'd make phones a lot easier to use.
The problem I see with abolishing material design is that a whole range of elderly people have now gotten used to it. Changing all the layouts for apps and websites would quickly lead to a disaster on the scale of Windows 8 for those who can barely grasp technology as it is. I'd much rather keep the waste of whitespace if I couldn't chat with my grandma otherwise.
Personally I went through the opposite process. I was a huge fan of Android up to and including KitKat, and I thought Holo was great. My enthusiasm for the entire platform disappeared as soon as Lollipop came out with the dreadful new Material UI. Even with themes I just hated using it on every level and eventually switched to an iPhone.
It was a lot of things around that release. The look, the feel of the "cards" everywhere, the deprecation of power-user features in the name of security (like full SD card read/write access for apps). Totally the right trade for a dumbed-down-but-safe mass-market OS, but that's not why I liked Android. When presented with a wannabe iPhone, I prefer the real thing :p
I agree with all your points. However, Big Sur is going to be very, very, very bad accessibility-wise.
The entire interface is washed out. There's little to no distinction between selected items. Translucency breaks readability of text. And so on and so forth.
Oof, that was painful. Not only is the low contrast bad for accessibility, the unbalanced x-paddings are just wrong. Something must be amiss in their QA if this was allowed to reach public release.
I'm sure Apple's response is that you can turn on high contrast mode in accessibility settings.
...and my problem with that argument is that high contrast mode looks aesthetically terrible. So basically, only people with great eyesight get to appreciate aesthetically-pleasing UIs.
Obviously, some people just have very bad eyesight, and there comes a point when you need to say "screw aesthetics, here's something you can actually see." But the fewer people who need to resort to that, the better.
It's really hard explaining to my grandma what's tappable on her iPad when not only is everything flat, but buttons aren't even marked in a way that suggests you can interact with them.
> it's the representations that are not really representative to everyone.
That's true of virtually every graphical representation in a computer application. Save icon? How about folders? Where I'm from, folders (the physical item) never took off. Even today, we use a sort of slimmed-down binder for small collections of documents. For millions of people, myself included, the folders in Windows 95 and Windows 98 were the first ones we ever saw. That never prevented anyone from learning how to use them, or figuring out what the "Open..." button does.
So yeah, "Click the folder button to open" never worked, for the same reason -- no one had ever seen a folder. But the button was visually distinctive enough that you could say "click the yellow button in the toolbar, it's way up there on the left".
Very few icons are really globally unambiguous (and I still think designers in the early/mid-'90s really had the right idea when they just put the frickin' text next to, or below the damn icon). But making all icons look basically the same makes an already difficult situation even worse.
The folder metaphor helped me grasp what was going on. The first computers I spent significant time with were BBCs. They had two filesystems: DFS and ADFS (I think). The Advanced Disk Filing System had folders, but all the documentation I found called them "files". Files could point to files and make a tree of files. 11-year-old me just gave up on this. Why would you want a tree of files? Why should a file link to a file...? Nonsense.
Folders (in Gem and Windows) made MUCH more sense. Aaah... it's for organising ideas. Bingo!
Yeah, things were very bad for 10-year-old me: in English, the slimmed-down binder we use around here for, well, filing, is called a file. It made absolutely zero sense that "a folder is a collection of files" and "a file is a document". We didn't use folders, and a real-life file usually had several documents.
It was pretty stupid but it made sense on its own after a while.
I'm actually genuinely curious if young people have an issue with the floppy disk icon. I feel like at this point it's meant "save" longer than it was actually physically in use.
Icons are always abstractions. Their purpose is to succinctly distinguish one thing from another. In isolation they are sometimes disconnected from the original meaning, particularly if the original meaning was an abstract idea rather than a concrete object.
Are you bothered that “A” no longer means “cattle”, because that is where it started?
In the original GUI interfaces you had a combination of icons where space was tight and menus with functions represented by words. That made a lot of sense when very few people had experience with how computers work. Now that more people are familiar with how they work it is not surprising that menus are de-emphasized.
This. Most critics here seem to be thinking of "how it used to be" in the good old days, without considering it from the eyes of new users currently being introduced to computers and how they might perceive things.
The "flat design" we've had for the past decade really is flat design, but the dubbing of what preceded it as "skeuomorphism" was a misnomer, in most cases.
In theory, an old "save" icon would look like a floppy disk (or perhaps a picture of a red tape-recorder "button") whereas a modern, flat "save" icon would be the word "SAVE".
In practice, if a button is labelled "SAVE", but in beautifully shaded and embossed typography, it too is "skeuomorphic", while some ugly disk-drive symbol, in the flat style of highway sign, is acceptably "flat".
A decade ago, most popular software did not abuse skeuomorphism. It was primarily great masses of iPhone crap apps, and four or five widely talked-about Scott Forstall iOS projects.
Like others in this thread are saying, I don't care much whether the software I use is "fun", but I do want it to be beautiful and intuitive. "Flat design" has been bad in that regard, and primarily as a reaction to a minority of apps that looked like toasters or radios.
> Not hovering everything for five minutes until I can figure out what I can click
I really like the way Material addresses this: any clickable element should use the primary color of your app.
This way, there's no guesswork. Buttons, switches, icons, links - they all use the same action color, so if something uses it, you know it has to be clickable.
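As a rough sketch of that convention (my own CSS and naming, not anything from the official Material spec), the idea is to route all interactive elements through a single "primary" custom property, while non-interactive text stays neutral:

```css
/* Hypothetical brand colour carried by one custom property. */
:root {
  --primary: #6200ee;
}

/* Every interactive element draws on the same action colour... */
button,
a,
input[type="checkbox"],
[role="switch"] {
  color: var(--primary);
  accent-color: var(--primary);
}

/* ...while plain text stays neutral, so the colour alone
   signals "you can click this". */
body {
  color: #1c1c1c;
}
```

The point is less the specific selectors than the discipline: if the primary colour is reserved for interactive elements and never used decoratively, colour itself becomes the affordance.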
Except that every app can have its own "primary" colour, so you have to learn as many rules as you have apps, instead of one if a button would just look like a button.
I admit to the same feeling. I like skeuomorphism... but as with most things, we probably got carried away a bit. What I liked about the flat design era (not flat design itself) was the typography - you could suddenly use any typeface you wanted because displays were all retina! What I want is some kind of wabi-sabi design ethic that has the richness of texture and flavour (which is part of what the OP is referring to as "fun", I suppose). The digital world has been getting incredibly dull and monotonic these past several years.
> you could suddenly use any typeface you wanted because displays were all retina
You're thinking only of Apple land, right? Most displays in the world are still 1366x768 and 1920x1080, and huge fonts and widgets everywhere is exactly what I'm not going to miss about this era.
On mobile devices. My recent two (relatively) cheap ($200 range) phones have both been retina .. but yeah apple drove the retina evolution and typography benefited from that greatly I think .. on mobile devices.
You could technically get 1024x768 (with really bad flickering) on anything with an AGA. Check out e.g. https://aminet.net/package/driver/moni/WBHacksAGA or https://aminet.net/package/driver/moni/HighGFX40_6 . The only high-res Amiga I saw at the time was an Amiga 4000 (it wasn't mine, but I had pretty much unlimited access to one at $parental_unit's workplace, where they had some fancy multimedia lab that generally went unused), but I expect the Amiga 1200 could do it, too. The flickering was pretty terrible but I was a kid and I had good eyes :).
HighGFX was actually popular (and useful) enough that it caused a bunch of "flickerfixer" hardware to sprout in the mid/late 90s but I've never used/seen one.
Well, not really in my back pocket, but I did post something similar a while back on my Twitter account. It's not like my list of grievances has changed since then :).
The original comment was just the first sentence; I felt like it spoke for itself, but apparently the hive was of a different opinion. I utterly hate this place, it's like a Stalinist gulag.
It's possible your first sentence was not sarcastic or insulting, but it kind of came off that way. Then your edit made it clear you were throwing shade at the person for having their rant in their head.
Dude, several people gave you feedback on why you got downvoted. I have now spent... at least four minutes of my life dedicated to helping an internet stranger understand why their comment seemed rash. And you're complaining about "Stalinist Gulag"? Get over yourself, stop being melodramatic, or go read about the torture and starvation of millions of people while you sit drinking tea and typing out armchair philosophy on your mechanical keyboard.
I'm on my iPhone, but you have a point: we've become so meta-sarcastic that, in all honesty, I meant 'kudos for itemizing all your grievances so clearly and succinctly' and it came off as condescending.
I always wonder, do people read messages in their own voice or the voice of the writer? It helps to have at least a measure of compassion.
Good of you to invest a few minutes of your day on clarifying that. I mean it with no hint of sarcasm but I can see if someone were struggling with a tough turd while reading it, it may come across much differently.
Edit: and by the way, all Stalinist gulags start with a petty quasi meritocratic bureaucracy and people just following orders and doing their assigned tasks.
Edit 2: the OP understood my meaning, I was referring to the barrage of downvotes. This past few days I've hit a great streak.