IIRC this is one of the reasons the UI on the Mac opted for a fixed context-dependent menu bar at the top of the screen instead of the per-window one used by Windows (and Java).
It's basically 'fling your pointing device at the top' and 'go left or right to get the button you want'. Due to the lack of borders/stops, this would be harder if it was sandwiched between a titlebar and window content.
The reasoning is that the mouse stops at the border of the screen no matter how far it is moved, making the effective target huge (it extends infinitely off-screen), so it's easy and quick to hit. This holds true even with large screens, unless you dial down the acceleration of the mouse for fine control, as others have pointed out.
But the issue is that Apple broke the whole mechanism with hot corners. Now if I move fast anywhere near a hot corner, it gets activated. And the menu bar near the corners is tiny and hard to hit with a "huge" hot corner right nearby (the hot corner gets the benefit of the infinite off-screen target). I find the same problem with full-screen browsers (with tabs along the top): I'm always hitting the hot corners instead of the top-left and top-right tabs. I guess I can always change my corner settings.
Additional gripe about the top menu in macOS: the biggest fault I've found is that it can be active for an app whose windows are hidden or that currently has no windows at all, thus creating a mismatch between what you see (other apps' windows) and what is active (responding to keyboard shortcuts, for example).
I'm super happy you brought up the infinite border aspect of the top menu, because that is the best aspect of it, and whenever I go back to Windows, that required fine touch control drives me nuts.
But WRT hot corners, the best part of hot corners is being able to assign a modifier key to them.
Without using the modifier, it's crazy annoying, but having a modal aspect that requires active engagement seems to me the best of both.
Then again, I haven't actually engaged a hot corner in years. I find Keyboard Maestro the best option of all.
(P.S. for anyone who doesn't know how to add a modifier to the hot corners: just hold down Ctrl, Shift, Option, or Cmd when selecting the setting.)
OTOH, that behavior is absolutely necessary for me!
You can use Cmd-Tab to activate an app that has no windows, in order to activate hotkeys that let you create new windows! Especially when you’re using multiple desktops, this ability is invaluable.
You can however also argue against having such a global menubar with Fitts's Law, as it means that other UI elements can't be placed at the screen edge, such as the min/max/close-buttons (though there are also concepts where those are global, too) and nowadays also browser tabs.
While partially true, this could be solved by simply assigning priority to UI elements. Launchers/window managers often list their stuff at the bottom or sides (like docks and window lists) which allow for the same ease of navigation. There is of course only a 4-side box that you can use, and cluttering an environment with bars at all sides doesn't help a user.
The close/resize/etc. buttons are indeed a different issue, though the controls are often duplicated (the File or window menu on most operating systems and window managers has close/resize/maximize options too). At some point Ubuntu had a default desktop environment where the close/max/min buttons were added to the top menu bar. I thought it was quite a nice idea, but the implementation never spread to other systems and, sadly, support wasn't very universal across applications.
Thinking along those lines, what if you moved the title bar to the top too? You'd end up without a border/bar at the top of the window, making it harder to find something to 'grab' with your pointing device. Some operating systems use a modifier key plus the mouse to make all windows draggable (disabling inputs while dragging), but that hasn't had much success (aside from being the default on certain window managers).
"People for years have been explaining to me very patiently that in this era of giant screen monitors, we just have to do something about those menu bars way up there at the top of the screen" -- Tog, column 15, May 1990
He then describes an experiment where he used a 21-inch monitor and a 13-inch monitor attached to a Mac, and had subjects change the color of folders on one screen by selecting menu items on the other. Even compared to a pop-up menu right under the mouse, the far-off edge menu bar was still faster.
Objects on a 2018 MacBook Pro can be 70% further apart (D) than on a 1984 Macintosh, but the edges of the screen are still infinitely big (W).
If you have a "large" screen, I would contend that having a dedicated menu button on your mouse/trackpad/trackball/pen is the best possible thing for any user who isn't a complete novice. Don't wave the pointer somewhere else, make the menu come to you.
If you can't do that for whatever reason, then having a high-acceleration pointer that can be flung all the way against an edge is next best. Going back to where you were, though, will be relatively hard.
The 'make the menu come to you' approach would make sense for large and/or unusually controlled screens. Take game consoles and their directional controllers or old style mobile phones with numeric keypad for example, they made most menus pop up in context-menu style.
I imagine something like holding a menu button which acts like a modifier key that also pops up a menu on-screen. Then, using a physical layout on the screen that matches the layout of the buttons on your controller or pointing device, you navigate/select items of choice without using an x,y pointing system such as the mouse arrow. This does however create a new problem: how do you decide where the menu overlaps the stuff you were working on?
I remember this explanation too, but think it's one of those "good in theory, bad in practice" things --- going all the way to the edges all the time (and back!) is very tiring if you have your mouse set for high accuracy and have (a) huge screen(s).
This is something I remember very clearly since it was the explanation I was given when I first tried out the original Macintosh decades ago. Even with its tiny screen I had to lift and reposition the mouse, or otherwise move my whole arm, in order to go from one edge to the other. The other annoyance that stood out was menus that didn't stay open unless you held down the button, and a complete lack of keyboard navigation (I know about the keyboard shortcuts, but it doesn't compare to being able to browse through the menus with only the arrow keys, which could be done even with early Windows.)
- move the pointer diagonally across two screens, then pick something in a submenu (carefully; I think the submenu disappears if you don't hit the end of the first menu and instead try to move directly to the item in the submenu)
macOS treats click-hold and click-release differently for menus. Click-hold is for when you want the menu to disappear by itself, click-release for when you want it to stay open until you click on something else.
The engineer's way of thinking about Fitts's Law is as a human control system. We control our hand motion using feedback (visual, tactile, proprioceptive). The servo time response of that feedback loop to a step function (a new location to click) depends on the required accuracy and allowed overshoot. The larger the target, the higher the velocity/acceleration you can use to hit it without missing. You learn very quickly that large objects (like the edge of the screen) allow much grosser movements than a single-pixel target... and the farther you have to go, the longer it takes at a given tracking velocity.
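That intuition is exactly what the standard (Shannon) formulation of Fitts's Law captures. Here's a minimal sketch; the constants a and b are device- and user-dependent, and the values below are purely illustrative, not measured:

```python
import math

def movement_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1).

    distance: how far the pointer must travel to the target (D)
    width: target size along the axis of motion (W)
    a, b: empirical constants (illustrative values, not measured)
    """
    return a + b * math.log2(distance / width + 1)

# A small 10 px target 800 px away is slow to acquire:
small_far = movement_time(800, 10)

# A screen-edge target at the same distance: the cursor pins at the
# edge, so the effective width in the direction of travel is huge.
edge_target = movement_time(800, 2000)

assert small_far > edge_target
```

Note how the edge target is faster despite being equally far away: widening W collapses the log term, which is the whole argument for the infinitely deep menu bar.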
What is at least as interesting is the cognitive load of tracking/pointing and clicking/chording. Mental load and apparent time appear to be the reason why typing can be slower than a menu system yet feel faster. Similarly, people will report feeling like a trackpoint (the IBM keyboard nipple) takes longer than a mouse even when they're actually faster at hitting targets with it. Presumably this is because they have to track the cursor to know its velocity and position, while a mouse or touchpad uses your body's knowledge of hand position/velocity, which is missing from a force-based input.
On some early graphical computer user interface, I can't remember which one, you could specify that the mouse cursor would "wrap" to the opposite edge. It was like the ultimate anti-Fitts's-Law configuration. I hated it when I tried it; I would lose the cursor and not be able to find it.
You’re right. What I discovered was that I would lose the mouse cursor because I couldn’t quickly move it to an edge without tracking it visually all the way to begin with and once the cursor crosses the edge it breaks visual continuity by jumping to the opposite side. Today’s multimonitor configurations have the same problem to some extent because they have so much area with small discontinuities at the edge where the cursor jumps to a different monitor.
The Windows 8 Start UI was designed to take advantage of this. Theoretically it was great: when you open the Start screen the mouse pointer is in the bottom-left corner; tiles close to the pointer are wide and tall, tiles far from it are smaller (wider at the bottom, narrower at the top). Hot corners were supposed to be easily accessible (infinite depth). Yet it was a failure, because users were not familiar with it; it broke their habits.
The whole problem with the tiles is that they were much too big. Bigger targets are easier to click, but targets farther away are again harder to click. The targets in the Start menu were plenty big to begin with anyway.
Pie menus benefit from Fitts' Law by minimizing the target distance to a small constant (the radius of the inactive region in the menu center where the cursor starts) and maximizing the target area of each item (a wedge shaped slice that extends to the edge of the screen).
They also have the advantage that you don't need to focus your visual attention on hitting the target (which linear menus require), because you can move in any direction into a big slice without looking at the screen (while parking the cursor in a little rectangle requires visual feedback), and you can learn to use them with muscle memory, with quick "mouse ahead" gestures.
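As a concrete sketch of that geometry, here's a hypothetical hit-test for a pie menu (not code from any of the implementations discussed; the dead-zone radius is an assumed value). The only distance that matters is the small constant radius of the inactive center, and beyond it every point on the screen falls into some wedge:

```python
import math

def pie_hit_test(dx, dy, n_items, dead_radius=10):
    """Map a cursor offset (dx, dy) from the menu center to an item index.

    Returns None while the cursor is inside the inactive center region;
    otherwise the wedge index, with item 0 centered on 'up' and indices
    increasing clockwise. dead_radius (in pixels) is an assumption.
    """
    if math.hypot(dx, dy) < dead_radius:
        return None  # still inside the small constant-distance dead zone
    # Screen y grows downward, so negate dy; atan2(dx, -dy) makes 0 = 'up'.
    angle = math.atan2(dx, -dy) % (2 * math.pi)
    wedge = 2 * math.pi / n_items
    # Shift by half a wedge so item 0 is centered on 'up' rather than
    # starting there; every direction maps to exactly one item.
    return int(((angle + wedge / 2) % (2 * math.pi)) // wedge)
```

For an 8-item menu, `pie_hit_test(0, -50, 8)` selects item 0 (straight up) and `pie_hit_test(50, 0, 8)` selects item 2 (right); moving further out only increases angular leverage, never changes the item.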
Jack Callahan, Don Hopkins, Mark Weiser (+) and Ben Shneiderman.
Computer Science Department University of Maryland College Park, Maryland 20742
(+) Computer Science Laboratory, Xerox PARC, Palo Alto, Calif. 94303.
Presented at ACM CHI'88 Conference, Washington DC, 1988.
Abstract
Menus are largely formatted in a linear fashion listing items from the top to bottom of the screen or window. Pull down menus are a common example of this format. Bitmapped computer displays, however, allow greater freedom in the placement, font, and general presentation of menus. A pie menu is a format where the items are placed along the circumference of a circle at equal radial distances from the center. Pie menus gain over traditional linear menus by reducing target seek time, lowering error rates by fixing the distance factor and increasing the target size in Fitts's Law, minimizing the drift distance after target selection, and are, in general, subjectively equivalent to the linear style.
The Design and Implementation of Pie Menus -- Dr. Dobb's Journal, Dec. 1991
They're Fast, Easy, and Self-Revealing.
Copyright (C) 1991 by Don Hopkins.
Originally published in Dr. Dobb's Journal, Dec. 1991, lead cover story, user interface issue.
Introduction
Although the computer screen is two-dimensional, today most users of windowing environments control their systems with a one-dimensional list of choices -- the standard pull-down or drop-down menus such as those found on Microsoft Windows, Presentation Manager, or the Macintosh.
This article describes an alternative user-interface technique I call "pie" menus, which is two-dimensional, circular, and in many ways easier to use and faster than conventional linear menus. Pie menus also work well with alternative pointing devices such as those found in stylus or pen-based systems. I developed pie menus at the University of Maryland in 1986 and have been studying and improving them over the last five years.
During that time, pie menus have been implemented by myself and my colleagues on four different platforms: X10 with the uwm window manager, SunView, NeWS with the Lite Toolkit, and OpenWindows with the NeWS Toolkit. Fellow researchers have conducted both comparison tests between pie menus and linear menus, and also tests with different kinds of pointing devices, including mice, pens, and trackballs.
Included with this article are relevant code excerpts from the most recent NeWS implementation, written in Sun's object-oriented PostScript dialect.
MediaGraph Music Navigation with Pie Menus Prototype developed for Will Wright's Stupid Fun Club: This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata.
The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo:
This is a demonstration of the pie menus, architectural editing tools, and Edith visual programming tools that I developed for The Sims with Will Wright at Maxis and Electronic Arts.
Any idea why these are not often used with touchscreen mobile interfaces, e.g. press for contextual pie menu? Even without OS support, they could be implemented within apps.
There have been various implementations of pie menus for Android [1] and iOS [2]. And of course there was the Momenta pen computer in 1991 [3], and I developed a Palm app called ConnectedTV [4] in 2001 with "Finger Pies" (cf Penny Lane ;). But Apple has lost their way when it comes to user interface design, and iOS isn't open enough that a third party could add pie menus to the system the way they've done with Android. But you could still implement them in individual apps, just not system wide.
Also see my comment above about the problem of non-transparent fingers.
Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being "Self Revealing" [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
They also provide the ability of "Reselection" [6], which means that as you're making a gesture you can change it in flight and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
Compare this with typical gesture recognition systems, like Palm's Graffiti: think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing. Most gestures are invalid syntax errors, and the recognizer only accepts well-formed gestures.
There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so "2" and "Z" are easily confused, while many other possible gestures are unused and wasted).
But with pie menus, only the direction between the touch and the release matter, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There's a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, move around to correct and change the selection.
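A hypothetical sketch of that property (illustrative code, not from any of the implementations above; the cancel radius is an assumed value): the selection function takes only the press and release points, so any path between them, however meandering, yields the same result, and returning to the center cancels:

```python
import math

def gesture_select(press, release, n_items, cancel_radius=15):
    """Select a pie item from a touch gesture.

    Only the direction from press to release matters, never the path
    taken in between, so every gesture is valid: there are no syntax
    errors. Releasing back inside the center region (cancel_radius,
    an assumed value) returns None, i.e. cancels the menu.
    """
    dx = release[0] - press[0]
    dy = release[1] - press[1]
    if math.hypot(dx, dy) < cancel_radius:
        return None  # released near the center: cancel
    angle = math.atan2(dx, -dy) % (2 * math.pi)  # 0 = up, clockwise
    wedge = 2 * math.pi / n_items
    return int(((angle + wedge / 2) % (2 * math.pi)) // wedge)

# Two gestures with the same endpoints select the same item regardless
# of how much the user browsed around in between:
assert gesture_select((100, 100), (100, 40), 8) == 0   # flick up
assert gesture_select((100, 100), (105, 102), 8) is None  # cancel
```

The design choice worth noting is that the whole gesture space is partitioned: every press/release pair maps to exactly one item or to cancel, which is what makes the mapping learnable as muscle memory rather than a fuzzy recognizer.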
Pie menus also support "Rehearsal" [7] -- the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it's not rehearsal.
Pie menu users tend to learn them in three stages: 1) novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) intermediate remembers the direction of the item they want, pop up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) expert knows which direction the item they want is, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
I wrote some more stuff about pie menus in the previous discussion of Fitts' Law. [8]
Self-revealing gestures are a philosophy for design of gestural interfaces that posits that the only way to see a behavior in your users is to induce it (afford it, for the Gibsonians among us). Users are presented with an interface to which their response is gestural input. This approach contradicts some designers' apparent assumption that a gesture is some kind of "shortcut" that is performed in some ephemeral layer hovering above the user interface. In reality, a successful development of a gestural system requires the development of a gestural user interface. Objects are shown on the screen to which the user reacts, instead of somehow intuiting their performance. The trick, of course, is to not overload the user with UI "chrome" that overly complicates the UI, but rather to afford as many suitable gestures as possible with a minimum of extra on-screen graphics. To the user, she is simply operating your UI, when in reality, she is learning a gesture language.
In general, subjects used approximately straight strokes. No alternate strategies such as always starting at the top item and then moving to the correct item were observed. However, there was evidence of reselection from time to time, where subjects would begin a straight stroke and then change stroke direction in order to select something different.
Surprisingly, we observed reselection even in the hidden menu groups. This was especially unexpected in the Marking group since we felt the affordances of marking do not naturally suggest the possibility of reselection. It was clear though, that training the subjects in the hidden groups on exposed menus first made the option of reselection apparent. Clearly many of the subjects in the Marking group were not thinking of the task as making marks per se, but of making selections from menus that they had to imagine. This brings into question our a priori assumption that the Marking group was using a marking metaphor, while the Hidden group was using a menu selection metaphor. This may explain why very few behavioral differences were found between the two groups.
Reselection in the hidden groups most likely occurred when subjects began a selection in error but detected and corrected the error before confirming the selection. This was even observed in the "easy" 4-slice menu, which supports the assumption that many of these reselections are due to detected mental slips as opposed to problems in articulation. There was also evidence of fine tuning in the hidden cases, where subjects first moved directly to an approximate area of the screen, and then appeared to adjust between two adjacent sectors.
Requirement: Novices need to find out what commands are available and how to invoke the commands. Design feature: pop-up menu.
Requirement: Experts desire fast invocation. Once the user is aware of the available commands, speed of invocation becomes a priority. Design feature: easy to draw marks.
Requirement: A user's expertise varies over time and therefore a user must be able to seamlessly switch between novice and expert behavior. Design feature: menuing and marking are not mutually exclusive modes. Switching between the two can be accomplished in the same interaction by pressing-and-waiting or not waiting.
Our model of user behavior with marking menus is that users start off using menus but with practice gravitate towards using marks and using a mark is significantly faster than using a menu. Furthermore, even users that are expert (i.e., primarily use marks) will occasionally return to using the menu to remind themselves of the available commands or menu item/mark associations.
Well, radial menus typically are displayed around your mouse cursor, so the proximity aspect is there. They also fill out the space, well, radially, so you can really just fling your cursor into a direction and will have the total width of the menu item to hit all the way.
With touch screens, there are two major differences compared to the desktop:
1) You don't have screen edges that you can fling your cursor against, so placing UI elements at the edge does not make them easier to hit.
2) Users are generally quicker to traverse the screen and hit something, but are much worse at hitting something that's small, so you often want to make UI elements bigger (which does result in them being more spaced out) and then put the UI elements on several screens instead.
The other problem with touch screens is that your finger isn't transparent, so you can't see what you're pointing at the way you can with a mouse cursor. So you have to come up with different strategies for displaying menu items and feedback, like showing the selected item's title at the top of the screen where your hand isn't covering it.