
Oh no, not again! (Just kidding :) )

Immediate mode UIs come up every now and then, especially in games, and for relatively simple things they're a neat idea (imagine not retaining any extra view state!). But - and there's a big "but" coming up - they do not scale to complex and stateful UIs.

The problem is that, for games that have complex needs - think simulation and strategy games, not racing or arcade - complex UIs are needed to express complex relationships between underlying data and the relevant UX flows. But how do you do that in IMGUI? This information about UI state, layout, etc. is not in the UI anymore, but it has to live somewhere - so you end up having to build your own layout and UI management stuff, and basically duplicate what a RMGUI library would offer, but now with less generality and potentially more bugs.
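
To make the trade-off concrete, here's a minimal sketch of the immediate mode style in TypeScript (a hypothetical API, not any real library): every widget is a plain function call per frame, and all state - even the value a button mutates - has to live somewhere outside the UI.

```typescript
// Minimal immediate-mode sketch (invented API, not any real library).
// The caller redeclares the whole UI every frame; no widget objects
// are retained between frames.

interface UIContext {
  mouseX: number;
  mouseY: number;
  mouseDown: boolean;
}

// A button is just a function call per frame: it would draw itself and
// returns whether it was clicked this frame.
function button(ctx: UIContext, x: number, y: number,
                w: number, h: number, label: string): boolean {
  const hovered =
    ctx.mouseX >= x && ctx.mouseX < x + w &&
    ctx.mouseY >= y && ctx.mouseY < y + h;
  // drawRect(x, y, w, h, hovered); drawText(label);  // imagined draw calls
  return hovered && ctx.mouseDown;
}

// Application state the UI reads and writes directly each frame -
// this is the state that "has to live somewhere".
let volume = 5;

function frame(ctx: UIContext): void {
  if (button(ctx, 0, 0, 100, 20, "louder")) volume++;
  if (button(ctx, 0, 30, 100, 20, "quieter")) volume--;
}
```

The sketch shows the appeal (no view hierarchy at all) and the cost: anything beyond `volume` - focus, scroll positions, layout - is the caller's problem too.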

For probably the best-known example, Unity used to have an IMGUI [1] but they've actively moved away from it. It's still there in vestigial form - the API hasn't been removed - but game developers don't really use it.

[1] https://docs.unity3d.com/Manual/GUIScriptingGuide.html

PS. And there's also the lack of visual layout tools, which is directly related to IMGUI's reliance on code to produce layout.



I mean, React is essentially a hack to get an immediate mode API on top of a retained mode platform (the DOM), because the immediate mode API scales better to handle complex state.

I'd push that yes, you do have to maintain your state externally in an immediate mode UI, but when you have a lot of "state" that's really just a function of other state, that's exactly what you want.
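
As a framework-free illustration of that point (TypeScript, names invented): when derived values are recomputed on every render instead of being stored, they can never fall out of sync with the state they depend on.

```typescript
// "State that's really just a function of other state": only `items`
// and `filter` are stored; the visible list and the summary line are
// recomputed on every render rather than kept in sync by hand.

interface AppState {
  items: string[];
  filter: string;
}

function render(state: AppState): string[] {
  // Derived "state": never stored, so it can never go stale.
  const visible = state.items.filter(i => i.includes(state.filter));
  return [
    `showing ${visible.length} of ${state.items.length}`,
    ...visible.map(i => `- ${i}`),
  ];
}
```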


The interesting thing about React is actually that it encapsulates a lot of state retention inside an immediate-style API. state/setState (and now state hooks) are the unsung heroes of the ecosystem, because they let you focus on only the business-relevant global state, making it natural for a dropdown menu to handle its own open/closed state and insulate its user from that complexity. And many immediate mode libraries gloss over this need, so you have to track much more state than you’d like.

It’s a fascinating hybrid - rather than “BYO State” where the library does nothing but layout and draw, or “BYO hierarchy” where you need to retain instantiated components, React is “hierarchy tracking included” - all the benefits of persistent components, but tracked by the renderer rather than the caller. It’s an evolutionary step that I think desktop programming would do well to adopt and continue to iterate on.
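
A rough sketch of that "hierarchy tracking included" idea (TypeScript, loosely in the spirit of React's useState - the names and the keying scheme here are made up, real React keys state by position in the tree): the renderer owns per-widget state in a map, so the caller stays "immediate" and never sees the dropdown's open/closed flag.

```typescript
// Per-widget state owned by the renderer, keyed by widget id.
// The caller supplies an id, options, and this frame's click;
// it never stores or sees the open/closed flag itself.
const widgetState = new Map<string, boolean>();

// Returns the options to display: all of them when open, none when closed.
function dropdown(id: string, options: string[], clicked: boolean): string[] {
  let open = widgetState.get(id) ?? false;
  if (clicked) {
    open = !open;
    widgetState.set(id, open);
  }
  return open ? options : [];
}
```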


React is half-and-half. It is immediate mode in usage, but the data structure (vdom) is what you'd expect from retained.
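
A deliberately tiny toy (not React's actual reconciliation algorithm) makes the half-and-half shape visible: the caller describes the UI afresh each render, while the library keeps the previous tree and diffs against it.

```typescript
// Immediate in usage, retained underneath: callers pass a fresh vnode
// list every render; the library retains the last one and emits the
// minimal operations a retained backend (e.g. the DOM) would apply.

interface VNode { tag: string; text: string; }

let previous: VNode[] = [];

function reconcile(next: VNode[]): string[] {
  const ops: string[] = [];
  for (let i = 0; i < next.length; i++) {
    const old = previous[i];
    if (!old) ops.push(`create <${next[i].tag}> "${next[i].text}"`);
    else if (old.text !== next[i].text) ops.push(`update <${next[i].tag}> "${next[i].text}"`);
  }
  for (let i = next.length; i < previous.length; i++) {
    ops.push(`remove <${previous[i].tag}>`);
  }
  previous = next;  // the retained part
  return ops;
}
```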


> For the probably best known example, Unity used to have an IMGUI [1] but they've actively moved away from it. It's still there in vestigial form, the API is not removed, but game developers don't really use it.

It's true that the newer retained mode GUI has supplanted the older IMGUI for "game UI" (e.g. what users interact with when playing an actual game built with Unity). On the other hand, the Unity editor itself is still built entirely with their immediate-mode GUI library, and I haven't heard of any plan to move away from that. Building "editor-like" tools (including custom user tools) is still significantly easier to accomplish using immediate-mode libraries.


Oh, they're switching to RMGUI in the editor as well, though more slowly. More info here:

https://blogs.unity3d.com/2019/04/23/whats-new-with-uielemen...


An interesting (even surprising) choice Unity is making. What are the performance implications of this change?


In Unity you're still forced to use the IMGUI for making editor tools. In fact their new XML layout prototype still uses the IMGUI to draw an instantiated layout! What a mess...


Honestly -- given the amount of time I've spent adapting my own data model classes and structures to the straightjacket of various GUI or web toolkits -- not having the toolkit enforce its normative lifestyle assumptions on data structure feels like a feature to me.

I'd rather manage the state and relationships myself.


>given the amount of time I've spent adapting my own data model classes and structures to the straightjacket of various GUI

This is why everyone tells you to use MVC or some variant that separates the UI from the data. IMGUI lets you bend the UI to the data instead, but the real lesson is to decouple the view and the model.


I'm very familiar with MVC -- I've been in the industry for 20+ years -- but so far disgusted with the way pretty much every GUI framework expects you to handle this. In reality I may already have a data model, but often have to wrap _another_ model around my model, just to satisfy the GUI framework's constraints on what it thinks a model needs to inherit from, etc. Recently ran into this with Qt. Very frustrating.


The trick is to just accept that the UI model, the business logic model and the denormalized db model all have different goals and often times should be decoupled and adapted.


Which is why it seems almost every project with a complicated UI turns into a bloated pile of semi-duplicated boilerplate 'model' classes with little business logic. Clearly, MVC as the industry understands it has a problem.

Which is why the original Smalltalk people -- who invented MVC -- mostly switched to Self's Morphic framework in their later projects (Squeak, etc.)


I wouldn’t use one for a production UI, but they’re super handy for building lightweight debugging tools.


> But how do you do that in IMGUI? This information about UI state, layout, etc. is not in the UI anymore, but it has to live somewhere - so you end up having to build your own layout and UI management stuff

Immediate mode is about the interface to rendering the UI, not necessarily about the implementation (although some people conflate these). So your UI elements can be stateful and retain their own state, but they can be released once they are no longer rendered in the current frame.
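
That split between interface and implementation can be sketched like so (TypeScript, hypothetical API, not a particular library): widgets retain state internally behind an immediate-style call, and any widget not touched during a frame is released afterwards.

```typescript
// Immediate-mode *interface* over a stateful *implementation*:
// per-widget state is retained between frames, but evicted once the
// widget stops being rendered.

interface WidgetState { scrollOffset: number; touchedThisFrame: boolean; }

const widgets = new Map<string, WidgetState>();

// Called per frame, per widget: creates state on first use, marks it live.
function textArea(id: string): WidgetState {
  let w = widgets.get(id);
  if (!w) {
    w = { scrollOffset: 0, touchedThisFrame: false };
    widgets.set(id, w);
  }
  w.touchedThisFrame = true;
  return w;
}

// At end of frame, drop state for widgets that were not rendered.
function endFrame(): void {
  for (const [id, w] of widgets) {
    if (!w.touchedThisFrame) widgets.delete(id);
    w.touchedThisFrame = false;
  }
}
```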


Well, with retained mode one still has to handle a bunch of application state, and keep that state synchronised with the view. Immediate mode typically forces you to manage the view state too, but it’s typically easier to keep it synchronised with the application state that way.

I think the following is an interesting example: your application state is some type, Config, which would correspond to some boring components, e.g. text boxes and radio buttons, but the interface you want to present looks like:

  [X] default settings
Or

  [ ] default settings
  ...
Now the type of data you want by the end of the form looks like (e.g. in Haskell)

  Maybe Config
Where Nothing means “do the default thing”. But the state of the view has to look more like:

  Bool * Config
Because you want to remember what options the user had changed in case they change their mind a few times about using the default state. And now maybe you want a state like

  (mutable) Bool * Config * (mutable) Config
Because the default state is not static and you want to make fields bold if they are not default. And so now you still have to do the complicated conversion from view to application state (but at least you don’t have to go back).

I suppose this is like the very early web dev style of having no direct state and directly reading values from controls, but at least in this world one has control over how the view state is managed and represented so it can be easier to avoid getting into invalid states.
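
Transliterating the commenter's Haskell types into TypeScript (my own rendering; the names are invented) makes the view/application split explicit: the form ultimately produces `Config | null` (null meaning "use the defaults"), but the view must keep both the checkbox flag and the last edited config so edits survive toggling.

```typescript
// Application state the form produces: Maybe Config.
interface Config { fontSize: number; theme: string; }

// View state: Bool * Config - the edits are retained even while
// "default settings" is checked, in case the user changes their mind.
interface FormViewState { useDefaults: boolean; edited: Config; }

// The conversion from view state to application state.
function submit(view: FormViewState): Config | null {
  return view.useDefaults ? null : view.edited;
}
```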


Immediate mode GUIs don't let you apply useful object oriented programming techniques like subclassing and prefabs (persistence) to implement and configure custom components.

Properly object oriented retained mode GUIs let you subclass components to implement custom views and event handling, and develop and publish reusable prefabs and libraries of pre-configured, stylized, specialized components.


> don't let you apply useful object oriented programming techniques [to get] reusable libraries of specialized component.

But immediate mode GUI lets you compose very well. It's a bottom-up technique of programming, where instead of following a framework established by the library author, you are given low level building blocks. You will find it hard to mix and match two retained mode GUI libraries unless they all follow a common set of standard APIs (e.g., Java Swing). But with immediate mode GUI, it's actually easy to mix and match (as an immediate mode GUI has no need to know what framework you're working under, nor does it need to make any assumptions about threading or API).
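
The composition claim, sketched in TypeScript (an invented toy, rendering widgets as strings just to show the shape): since immediate-mode widgets are plain functions, a new widget is built from existing ones by ordinary function composition, with no framework hooks involved.

```typescript
// Two primitive "widgets" as plain functions (toy rendering as strings).
function label(text: string): string { return text; }
function slider(value: number): string { return `[----|----] ${value}`; }

// A composed widget: a labeled slider, made by plain function
// composition rather than subclassing or registration.
function labeledSlider(text: string, value: number): string {
  return `${label(text)} ${slider(value)}`;
}
```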


I don't know what you mean about easily composing immediate mode APIs, nor do I agree that different immediate mode APIs don't need to know about each other. They depend on a lot of behind-the-scenes implicit hidden state (practically a hidden shadow-DOM-like retained mode), so I suspect in many cases they'd walk all over each other (especially text editors dealing with input focus and keyboard navigation). Unity3D has two immediate mode APIs, one for the editor and one for runtime, and they sure don't mix.

I've implemented various adaptor wrappers for composing different retained mode API's myself, including wrapping The NeWS Toolkit OPEN LOOK widgets in HyperLook widgets, which I used for the first Unix port of SimCity to HyperLook/NeWS. (The OPEN LOOK pin-up menus and pie menus and buttons and sliders are implemented in TNT, but I wrapped them for HyperLook (which has its own persistence system and class hierarchy), so you can dynamically create and edit them at runtime with property sheets and script editors, ala HyperCard.)

https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-g...

For X11/NeWS at Sun, we even made an ICCCM X11 window manager that wrapped X11 windows in NeWS tabbed window frames with pie menus! (The X11 window manager was written in object oriented "retained mode" PostScript code, subclassing standard and custom NeWS window and menu widgets, like tabbed window frames and pie menus. While PostScript has an immediate mode drawing API, it was driven by extensible "retained mode" widget instances and classes represented by PostScript dictionaries, pushed on the dict stack.)

Open Window Manager PostScript source: https://www.donhopkins.com/home/archive/NeWS/owm.ps.txt

Tabbed window extension: https://donhopkins.com/home/archive/NeWS/win/tab.ps.txt

Discussion of problems with NeWS: https://news.ycombinator.com/item?id=15327339

Political discussion from the Window System Wars, about Sun's problems merging X11 and NeWS: https://donhopkins.com/home/archive/NeWS/sevans.txt

More recently (and more successfully), I've wrapped HTML components and floated them over OpenLaszlo/Flash (not so recently) and Unity3D (quite recently) retained mode user interface layouts in the web browser.

For example, I'm embedding an ACE code editor in a Unity3D retained mode user interface running in WebGL, that falls back to a regular TextMeshPro input field on other builds. There's no way you could do that practically with an immediate mode API.

WebGLAce_TMP_InputField.cs (C# retained mode GUI TMP_InputField subclass): https://github.com/SimHacker/UnityJS/blob/master/Libraries/W...

WebGLAce.cs (C# P/Invoke Unity extension wrapper): https://github.com/SimHacker/UnityJS/blob/master/Libraries/W...

WebGLAce.jslib (JavaScript library Unity extension, compiled by Unity to WebAssembly, converts between C#/WebAssembly and JavaScript data types and thunks messages back and forth between the browser and the Unity3D app): https://github.com/SimHacker/UnityJS/blob/master/Libraries/W...

WebGLAce.jss (ACE code editor adaptor, pure JS source loaded by the browser, as well as ACE JS libraries themselves): https://github.com/SimHacker/UnityJS/blob/master/Libraries/W...

Notice how WebGLAce_TMP_InputField simply subclasses the standard TextMeshPro TMP_InputField, and adds some more hidden state like the editorID and editorConfigScript of the corresponding ACE code editor running in the web browser.

All that is hidden from the code using it, so it doesn't need to come up with another place to store that editorID and editorConfigScript, or even know if it's getting an Ace editor under WebGL or falling back to TextMeshPro on other builds. That's what I mean by leveraging OOP to subclass existing components and hide internal state, which you can't do in immediate mode.

(My plan for non-WebGL builds that aren't already running inside a web browser, is to create an embedded web browser just to run the Ace code editor, and the C# WebGLAce_TMP_InputField API will remain the same, and it will just work transparently on different platforms. I want to support live coding Unity apps in JavaScript, but there just isn't anything for Unity that approaches ACE for editing code, so it's worth going through all the trouble to make an adaptor.)

How would you write a functional extension for Unity3D's old immediate mode GUI that let you embed an ACE code editor in a Unity WebGL app? What would the API and state management even look like? How could you make it platform independent?

And even if you solved all of those problems with an immediate mode API, by its very nature it still wouldn't enable you to build GUIs in the interface editor or store them in prefabs, and you'd still have to recompile your app (which can take half an hour with Unity) every time you wanted to tweak the user interface (which is why I like programming Unity apps in JavaScript as much as possible).

UnityJS Unity3D / JavaScript bridge architecture: https://github.com/SimHacker/UnityJS/blob/master/doc/Anatomy...

UnityJS pie menu component for Unity3D, written in JavaScript, drawn with canvas, using TextMesh Pro labels: https://github.com/SimHacker/UnityJS/blob/master/Libraries/U...

Uses a Unity3D pie menu tracker to translate low level input to high level events, written in C#: https://github.com/SimHacker/UnityJS/blob/master/Libraries/U...

Of course you can take it too far, trying to please everyone by combining the worst of all possible systems and then giving it a ridiculous sounding name (see "MoOLIT"), but we're discussing whether it's possible and practical, not whether it's desirable. And even if it's not desirable, it ends up being necessary.

https://en.wikipedia.org/wiki/MoOLIT

http://nova.polymtl.ca/~coyote/open-look/01-general/faq-doc-...

Embedding object oriented "retained mode" widgets with different APIs inside of each other is old hat and common for backwards compatibility. Whenever you write a new GUI toolkit, embedding widgets from the last toolkit is one of the first things you do (including recursively embedding widgets from the toolkit-before-last).

Concrete example: Microsoft lets you embed old fashioned OLE controls in Windows Forms / Presentation Foundation applications, which might be implemented in MFC themselves. And MFC is all about embedding old Win32 widgets implemented in C, and newer OLE components implemented in Visual Basic or whatever, in venerable C++ MFC user interfaces. Say what you want about how much Win32/MFC/OLE/WF/WPF sucks, and I'll wholeheartedly agree, but if you're running Windows, you probably have widgets on your screen using several different retained mode APIs embedded more than two levels deep right now.

https://en.wikipedia.org/wiki/Windows_Forms#Architecture


You can always encapsulate your custom IMGUI component in a function (or even a class, if you really want!) You should look at how custom components are made in IMGUI (for example, plotting variables: https://github.com/ocornut/imgui/wiki/plot_var_example)
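
The idea behind the linked plot_var example (which is C++; this is a TypeScript transliteration of the pattern, with invented names) is that the custom component hides its retained sample buffer behind a single function call keyed by label, so call sites stay one-liners:

```typescript
// Custom immediate-mode component with hidden retained state:
// a rolling sample buffer per plot, keyed by label.
const plotBuffers = new Map<string, number[]>();

function plotVar(label: string, value: number, maxSamples = 120): number[] {
  let buf = plotBuffers.get(label);
  if (!buf) {
    buf = [];
    plotBuffers.set(label, buf);
  }
  buf.push(value);
  if (buf.length > maxSamples) buf.shift();  // keep a fixed-size window
  // drawPlotLines(label, buf);  // imagined draw call
  return buf;
}
```

The call site is just `plotVar("fps", currentFps)` once per frame; the buffer management never leaks out.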


It's a catch-22 of a question. Once you add OO to your immediate mode system, you've simply begun making a retained mode system.


I've used Unity3D's immediate mode a lot, both the editor API and the runtime API (and I've also implemented and used a lot of different retained mode GUIs), and I'm much happier with the newer "retained mode" GUI in Unity3D.

The problem with immediate mode is that you have to come up with somewhere to store and retrieve any values or state required on a case-by-case basis (including the edited value itself, and other view state like scrollbar state for text editors, etc.), and that tightly couples your component function with whatever's using it, so components aren't very reusable or simple. With OOP, the object has its own place to store that stuff, which is cleanly decoupled from whatever's using the component.

Then there's the issue of event handlers. Some user interface components just aren't so simple that they only have one true/false return value like buttons. Text editors can notify on value change, end edit, select, deselect, etc. And Unity's retained mode GUI supports persistent event handlers that let designers hook up events with methods of objects with parameters, without writing or recompiling any code.

And there's also a lot more to layout than you can easily express with an immediate mode GUI. Sometimes you have to measure a bunch of things and align them in various ways, like grids or flex layouts, and adapt to the screen size and other constraints (i.e. responsive design), and that's really hard to do with immediate mode, while retained mode has a rich set of layout components you can use and extend. And there's nothing like Xcode's layout constraints for immediate mode.
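
The measurement point can be sketched as a classic two-pass layout over a retained tree (simplified TypeScript, invented structure): pass 1 must finish before anything can be positioned, which is exactly what's awkward when you draw as you declare.

```typescript
// Two-pass measure/arrange over a retained layout tree.
// Immediate mode struggles here: nothing can be placed until every
// descendant has been measured, so a single declare-and-draw pass
// doesn't have the information it needs.

interface LayoutNode {
  desired: number;         // desired width (1-D for simplicity)
  children: LayoutNode[];
  x?: number;              // assigned position, filled in by arrange()
}

// Pass 1: bottom-up measurement.
function measure(n: LayoutNode): number {
  const childSum = n.children.reduce((s, c) => s + measure(c), 0);
  n.desired = Math.max(n.desired, childSum);
  return n.desired;
}

// Pass 2: top-down arrangement within the granted space.
function arrange(n: LayoutNode, x: number): void {
  n.x = x;
  let cursor = x;
  for (const c of n.children) {
    arrange(c, cursor);
    cursor += c.desired;
  }
}
```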

A perfect example is TextMesh Pro's text editor. It has so many different (and useful) features and parameters and callbacks and ways to configure it, and it keeps so much state in order to redisplay efficiently with the fewest number of draw calls and reformatting, that it would be extremely impractical and inefficient and unwieldy for it to use an immediate mode API. Especially with C# positional arguments instead of Pythonic or Lisp-like optional unordered keyword arguments.

When you encapsulate your immediate mode layout in a class, or write an engine that interprets (i.e. JSON) data to drive the immediate mode API, or use reflection in your wrapper functions to dispatch event callbacks and bind your widget data to your model data, you're just rolling your own informally-specified, bug-ridden, slow implementation of half of retained mode. (See Greenspun's tenth rule.)

https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule


Here's a demo of Unity3D's new retained mode UIElements user interface for the editor, that lets you wrap old immediate mode IMGUI code in IMGUIContainer wrappers!

https://youtu.be/MNNURw0LeoQ?t=17m37s



