Hacker News
DearPyGui (github.com/hoffstadt)
201 points by fractalb on Aug 29, 2020 | 71 comments



Oh, I love Dear Imgui -- it's very simple to use and has a nice... "engineering"/scientific aesthetic. Good to hear it's been ported to python.

If you're looking for an end-user product, this may not give you the control you're looking for. But if you're looking for a dead-simple way to create a quick GUI for a side project, this is perfect.


The underlying "imgui" https://github.com/ocornut/imgui project:

From the GitHub page. EDIT: read the bindings / frameworks page too (https://github.com/ocornut/imgui/wiki/Bindings); they're still C++-like.

"""

Officially maintained bindings (in repository):

Renderers: DirectX9, DirectX10, DirectX11, DirectX12, OpenGL (legacy), OpenGL3/ES/ES2 (modern), Vulkan, Metal.

Platforms: GLFW, SDL2, Win32, Glut, OSX.

Frameworks: Emscripten, Allegro5, Marmalade.

Third-party bindings (see Bindings page):

Languages: C, C# and: Beef, ChaiScript, D, Go, Haskell, Haxe/hxcpp, Java, JavaScript, Julia, Kotlin, Lua, Odin, Pascal, PureBasic, Python, Ruby, Rust, Swift...

Frameworks: AGS/Adventure Game Studio, Amethyst, bsf, Cinder, Cocos2d-x, Diligent Engine, Flexium, GML/Game Maker Studio2, Godot, GTK3+OpenGL3, Irrlicht Engine, LÖVE+LUA, Magnum, NanoRT, Nim Game Lib, Ogre, openFrameworks, OSG/OpenSceneGraph, Orx, Photoshop, px_render, Qt/QtDirect3D, SFML, Sokol, Unity, Unreal Engine 4, vtk, Win32 GDI, WxWidgets.

Note that C bindings (cimgui) are auto-generated, you can use its json/lua output to generate bindings for other languages.

"""


> Note that C bindings (cimgui) are auto-generated, you can use its json/lua output to generate bindings for other languages.

note about this. the C++ imgui library uses stuff like out-arguments (pass a pointer that gets filled out with a result) almost everywhere, which makes wrapping it with a pointer-less language non-trivial – in python, you want an interface that just returns a tuple and you'll have to code that part up manually anyway (or come up with a really clever generator)

[source: i've contributed to pyimgui, another imgui wrapper, and we looked at auto-generating wrappers at some point, but decided against it because of how much work you'd have to do on top to make it "pythonic". i think they're looking at that again now though.]


SWIG can sometimes help with this.


What's your recommendation for a Python-engineered end-user product?


Native; if cross-platform, then Qt or wx (only "old" widgets).


A semi-related question: what is the current 'correct way' to package up Python projects and all their dependencies as native executables for Windows/OSX/Linux? I haven't done any desktop Python since 2.5, and then it was a fiddly process of 'freezing' exes. Is that still the recommended way with Python 3?


It sounds cynical, but I've found that porting Python code to C++ and using Qt is much less headache than trying to make a clean Python "app" that can be run by non-technical users. However a simple solution is to distribute a local copy of Python with all the packages pre-installed. That's not as clean as a single exe, but generally you're going to need DLLs or whatever anyway, and to the end user, running a bat file that calls Python hides most of the mess. The bigger problem with that is hiding code, if you need to do that, but you can always distribute pyc files on their own.

This may have changed, but the last time I tried freezing it took a lot of massaging to get it to work reliably. So I've always opted to just ship a complete python env. I've seen big commercial products do the same when they need to provide python support - you may have experienced this when some new install accidentally adds a new python interpreter to your PATH.


I use pyinstaller to build windows executables directly from Linux using this docker image: https://github.com/cdrx/docker-pyinstaller

Between Qt and Python there is a huge amount of cross platform support. For example, I found out that Qt supports bluetooth, which currently has no other cross platform support in python (the unmaintained pybluez has all sorts of issues).


This is a fairly comprehensive build system for gui apps in python, I think: https://build-system.fman.io/


I've had good luck with https://www.pyinstaller.org/ and Qt in python.


That's still the way to go, and it's still fiddly and always requires manual tweaking until it works. Especially on Linux. Deploying binaries on Linux is just always a hassle, and never works "everywhere"...


Pretty much the same, though it was never that difficult. There is also PyOxidizer, and zipapp is an option for some.


I can't find a clear explanation of "immediate mode" GUI creation (which this library enables), but it appears to be the sort of paradigm that pre-dated the object-oriented, event-driven interaction handling that became the norm in the mid-90s (especially after the popularising of dev tools from Borland and Microsoft, e.g. Visual Basic). So I guess it is a formalisation of the procedural (?) methods from the era preceding that.

https://wiki.c2.com/?ImmediateModeGui


Immediate Mode GUIs are just GUIs that are instanced and drawn immediately during a single frame. There's no need to persist any object or structure because the whole screen will be cleaned and redrawn in the next frame. Immediate mode is popular in video games because in most games the screen is re-rendered on each frame.

This is in opposition to retained mode, where things are re-rendered only when necessary, like in Win32/Cocoa or the HTML DOM. With those, you need to keep an object in memory.

The nature of the job is what allows for very simple procedure calls that look almost declarative. Here's an example (it's Unity3D, btw):

    GUI.Label (new Rect (25, 25, 100, 30), "Label");
    if (GUI.Button (new Rect (25, 65, 100, 30), "Button")) {
      // This code is executed when the Button is clicked
    }


> There's no need to persist any object or structure because

This isn't quite correct, at least for the library internals. Dear ImGui does persist UI state between frames; the "immediate mode" term only describes how the API looks to the user, not how the library behind the API is implemented. The user code doesn't need to keep "widget handles" around, and there is no "event handler code" that is called asynchronously. From the user's point of view, control flow is entirely sequential: the UI is described and input events are processed in the same linear control flow context.

But this is just what the library user sees; what happens under the hood is very different. For instance, the internal UI state is not rebuilt from scratch every frame. Instead, the API calls cause changes to the internal UI representation, and those API calls just happen to also contain all the information needed to create the required internal UI state if it doesn't exist yet.

There's also nothing preventing the library from only redrawing what has changed, it just turned out that redrawing everything is usually so fast that only drawing what has changed would just add complexity to the implementation for very little gain (even complex ImGui UIs are usually a few dozen drawcalls at most and don't take up any significant chunk of the per-frame budget).

This description may be simplified and in places slightly incorrect, but it's important to point out that "immediate mode" only applies to the API, not to the implementation. This argument is often brought up by critics (who often don't quite understand what the "immediate" in "immediate mode UIs" is actually about) as a reason why immediate mode UIs can never be as efficient as traditional "retained mode" UIs. In reality they are usually more efficient than traditional UIs, and the situations where traditional UIs are ahead can be optimized in immediate mode UIs just as well.


Correct. In order to tell if a button is clicked, you need to know whether it was clicked last frame, which means keeping around state. One difficulty with immediate-style GUIs is that it's difficult to tell whether two widgets are "the same". Dear ImGUI mostly uses a widget's label as its core identifier, which can cause issues if the button label changes.

There's actually a lot more in common between React and ImGUI than you might think, in terms of "state reconciliation". The difference is that React is diffing to apply itself to a retained model, while ImGUI retains the state behind your back.


This isn't true.

All state you need to keep track of is purely the input state: e.g. whether a button is pressed, keys are pressed, etc. And very importantly, you also need to store the edges of the input signal. That is, you need to store whether left mouse has changed from not pressed to pressed this frame, and whether it has changed from pressed to not pressed.

Then, the Button(...) call computes the hitbox of the button, checks whether the left mouse button has changed from 'not pressed' to 'pressed' this frame, and if yes, whether the mouse coordinates are inside the hitbox. If yes, it returns True, else it returns False.

No widget state needs to be kept.
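To make that concrete, here's a minimal sketch of an immediate-mode button in Python. All names are illustrative (this is not any real library's API): the only persistent state is the input state plus its edges, captured once per frame before any widget calls run.

```python
class Input:
    """Per-frame input state, including the press 'edge'. This is the
    only thing that persists between frames in this sketch."""

    def __init__(self):
        self.mouse_pos = (0, 0)
        self.mouse_down = False
        self.mouse_pressed_this_frame = False  # down edge of the signal

    def new_frame(self, pos, down):
        # An edge is: down now, but not down last frame.
        self.mouse_pressed_this_frame = down and not self.mouse_down
        self.mouse_down = down
        self.mouse_pos = pos


def button(inp, x, y, w, h):
    """Returns True on the frame the button is clicked. No widget state."""
    mx, my = inp.mouse_pos
    inside = x <= mx < x + w and y <= my < y + h
    return inp.mouse_pressed_this_frame and inside


inp = Input()
inp.new_frame(pos=(30, 30), down=True)   # frame 1: press inside the hitbox
print(button(inp, 25, 25, 100, 30))      # True
inp.new_frame(pos=(30, 30), down=True)   # frame 2: still held, no new edge
print(button(inp, 25, 25, 100, 30))      # False
```

Note that `button` returns True only on the edge frame, which is exactly why the edge (and not just "is the mouse down") has to be stored.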


At least in the case of Dear ImGui, things like the current window position and size are widget state that's owned and persisted between frames by the library; otherwise windows wouldn't be moveable or resizeable by the UI user.

Other immediate mode UIs may decide to delegate this "state housekeeping" to the user (that's how rxi's microui works), but IMHO this isn't quite as convenient for the library user, though it does keep the library implementation very simple.


There's a bit more subtlety than that. If I press down on a button, then move my mouse off of it, it will be deselected. This is normal and not too hard to implement in your model. But if I keep my mouse button held down and drag back onto it, it will become re-highlighted, and the click will go through. Note that only the button I originally clicked on has this behavior. I just tested this right now in the ImGui demo, so clearly per-widget state is tracked (or at least which "widget" is the one that has the mouse's active state): https://github.com/ocornut/imgui/#demo

If you think this is obscure, try implementing a slider widget any other way. You need a way to keep track of which slider you were dragging, even when the mouse cursor leaves it for another.


This is solved by the exact same state I mentioned earlier.

The only thing I need to make explicit is that you store the mouse coordinates of each event type with that event, for later retrieval. E.g. `events['mouse_down_edge']` stores a tuple `(frame_nr, coords)`.

Then you know that we are dragging this slider if the mouse is held down, and its initial down edge in the signal was generated while hovering over this slider.

---

For what it's worth, I know that (some parts of) dear ImGUI are not implemented in the above way, and instead do keep track of some widget state with labels and unique ids. People however heavily overestimate to what extent this is necessary.
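A sketch of that scheme in Python (purely illustrative names, not any real library's API): store the coordinates of the mouse-down edge alongside the event, and a slider counts as "being dragged" iff the mouse is still held and that edge happened inside its hitbox.

```python
class InputState:
    """Persists only input state and stored event edges, as described
    above; no per-widget state."""

    def __init__(self):
        self.frame = 0
        self.mouse_pos = (0, 0)
        self.mouse_down = False
        self.events = {}  # 'mouse_down_edge' -> (frame_nr, coords)

    def new_frame(self, pos, down):
        self.frame += 1
        if down and not self.mouse_down:
            # Record where the press started, for later retrieval.
            self.events['mouse_down_edge'] = (self.frame, pos)
        self.mouse_down = down
        self.mouse_pos = pos


def slider_dragging(inp, x, y, w, h):
    """True while this slider is being dragged, even after the cursor
    has left its hitbox. Still no widget state."""
    if not inp.mouse_down or 'mouse_down_edge' not in inp.events:
        return False
    _, (ex, ey) = inp.events['mouse_down_edge']
    return x <= ex < x + w and y <= ey < y + h


inp = InputState()
inp.new_frame(pos=(50, 110), down=True)        # press inside a slider at (25, 100)
inp.new_frame(pos=(300, 400), down=True)       # drag far away, button still held
print(slider_dragging(inp, 25, 100, 200, 20))  # True: the drag started inside
```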


> No widget state needs to be kept.

This is correct. Windows (which includes things like popups and dialogs) have "state behind your back", but widgets generally don't - at least I've never seen it in the code. 99% of the examples given by other commenters are not handled by hidden widget state, but by g.activeID. As someone pointed out, changing IDs in the middle of an interaction is problematic for that reason (not because imgui fails to find the widget in its hidden tree of widgets, which doesn't exist).


> Immediate mode is popular in video games because in most games the screen is re-rendered on each frame.

That doesn't quite follow, though. While you still need to re-render each frame, you don't necessarily have to spend rendering time on the GUI every frame when GUIs are mostly static. Why not render the interface to a framebuffer, simply draw that when composing a frame, and only re-draw the framebuffer when the GUI changed or played an animation frame?


I have my hands on a moderately complex ImGUI project right now and the entire thing is basically a few thousand polys in a handful of drawcalls. For a GUI that doesn't cover the entire viewport I'd wager that your approach would actually be slower, because (1) change detection is CPU-side and non-trivial (2) rasterizing and shading this little each frame is probably less expensive than blending with a viewport-sized buffer.

In games ImGUI is usually used for debugging purposes and I don't think in that usage you have many frames where nothing would change (e.g. if you are looking at scene-graph nodes their properties will probably constantly change due to things like idle animations, camera movements, scripting, ...)


> why not just render the interface to a framebuffer and then simply draw that when composing a frame and only re-draw the framebuffer when the GUI changed/played an animation frame

Some games did that in the past, and some might still do it!

But keep in mind that today the performance gains are so negligible when compared to the rest of the things you have to render on a single frame that the added complexity and the amount of things that could go wrong (glitches) are normally not worth it.

Also, copying that temporary framebuffer to the screen on each frame is not free and involves copying between two memory locations, whereas procedurally drawing things only involves CPU (or GPU) + the main framebuffer, which is super fast in comparison.


In games it is far more important that performance be consistent than that it be better on average. Spikes in performance mean hitches in the presentation. Hitches are terrible user experience. Better to run a solid 30 fps 100.0% of the time than to run 60 99% of the time and hitch every 2 seconds.


Hitches are still bad, but consistency is less needed with variable refresh rate monitors.

Also, battery life/power usage is an important factor that can push you away from focusing only on consistency.


Something that people don’t realize is that standard deviation in framerate trumps frame rate in perception of smoothness of animation. It’s something that Naughty Dog has blogged about, and something really well known in the Amiga demo scene.

I’ve seen 8 fps marquees that looked smooth as silk because the frame rate SD was so low. It’s amazing what the brain and eye tracking will do to make things look right.


Variable refresh rate monitors don't solve hitching due to variable computation. In order to render an animation smoothly, you have to know precisely when a frame will be displayed ahead of time, so that you can render the simulation precisely at that point.


I suppose if you know the UI cache of some element will be invalidated this frame, you can inject a delay up to the expected upper bound of the frame-time increase it will cause, and artificially hold back the next frame flip by a variable amount if it takes less than the maximum expected delay. Getting that timing from the GPU could be tricky, though, with all the queuing it can potentially do.


But if it's fast enough for games, why make GUIs unnecessarily complex? Redrawing every frame from scratch is just much simpler than keeping track of differences, invalidating areas (which can go wrong), etc.

Perhaps in low-power situations it's a different story, though.


In many games it does make sense to update the GUI on every frame. E.g. if you have a minimap, you will need to update it on every frame during which the player is moving.

Updating every frame also solves the issue of the GUI updates not being synchronized with screen refreshes (v-sync). You could do something like use event driven programming to draw the GUI to a buffer off screen, and layer that on top of the main render. But that's probably about as intensive as drawing the UI, and more memory intensive.


You can do that if you want. You can also just sleep the render thread until inputs are received, if you are sure that only user input can cause changes in the UI. See glfwWaitEvents for example:

https://www.glfw.org/docs/latest/group__window.html#ga554e37...
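A rough sketch of that "sleep until input arrives" loop in Python, with a queue standing in for the OS event source (glfwWaitEvents plays this blocking role in a real GLFW app; everything here is illustrative):

```python
import queue

# The render loop blocks on an event source instead of spinning:
# no events means no redraws, which is the point of wait-events style loops.
events = queue.Queue()


def run(max_frames):
    """Redraw once per received event; stop on 'quit'. Returns the
    number of frames that were actually rendered."""
    frames = 0
    while frames < max_frames:
        ev = events.get()  # blocks, like glfwWaitEvents
        if ev == 'quit':
            break
        frames += 1        # rebuild + redraw the UI here
    return frames


events.put('mouse_move')
events.put('key_press')
events.put('quit')
print(run(10))  # 2: two events caused redraws, then quit
```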


As far as I can piece together, Casey Muratori first described the approach in an early YouTube video in 2005: https://youtu.be/Z1qyvQsjK5Y

The basic approach certainly existed far earlier, though. Early "graphical" command line applications, for example, most likely all took an immediate mode approach. It's just that no one thought it was worth making the distinction until after living through the hell that is retained mode GUI programming.

In a way, React (along with other vdom-based GUI frameworks) is another rediscovery of immediate mode GUI techniques, but with more of a functional (reactive) programming influence.


from the docs [1]:

"A common misunderstanding is to mistake immediate mode gui for immediate mode rendering, which usually implies hammering your driver/GPU with a bunch of inefficient draw calls and state changes as the gui functions are called. This is NOT what Dear ImGui does. Dear ImGui outputs vertex buffers and a small list of draw calls batches. It never touches your GPU directly. The draw call batches are decently optimal and you can render them later, in your app or even remotely."

This is cool, I had that misunderstanding myself (immediate mode vs immediate rendering)

1: https://github.com/ocornut/imgui#how-it-works
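The batching the docs describe can be sketched in a few lines of Python. This is purely illustrative and not Dear ImGui's actual data structures: widgets append vertices to one shared buffer, and consecutive quads sharing a texture get merged into a single draw call, so the renderer later replays a short command list.

```python
class DrawList:
    """Toy version of 'vertex buffers plus a small list of draw batches'."""

    def __init__(self):
        self.vertices = []
        self.batches = []  # (texture, first_vertex, vertex_count)

    def add_quad(self, x, y, w, h, texture):
        first = len(self.vertices)
        self.vertices += [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
        if self.batches and self.batches[-1][0] == texture:
            # Same texture as the previous quad: extend the last batch
            # instead of issuing a new draw call.
            tex, start, count = self.batches[-1]
            self.batches[-1] = (tex, start, count + 4)
        else:
            self.batches.append((texture, first, 4))


dl = DrawList()
dl.add_quad(0, 0, 10, 10, 'font')    # a text glyph
dl.add_quad(10, 0, 10, 10, 'font')   # another glyph, same texture
dl.add_quad(0, 20, 50, 20, 'white')  # a button background
print(len(dl.vertices), len(dl.batches))  # 12 2: three quads, two draw calls
```

This is why the overhead stays low even when everything is rebuilt every frame: the GPU sees a handful of batched calls, not one call per widget.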


This library has functions such as add_button() that can be used to create a hierarchy of widgets. From this, I suspect that the library is not fully operating in immediate mode, but perhaps it uses some kind of mixed mode?


The example only goes through the code once and then enters a GUI loop and also uses callbacks, so ... yeah. Doesn't look like immediate mode GUI to me.


> From this, I suspect that the library is not fully operating in immediate mode, but perhaps it uses some kind of mixed mode?

From the linked readme: “DearPyGui provides a wrapping of DearImGui that provides a hybrid of a traditional retained mode GUI and Dear ImGui's immediate mode paradigm.”


The hybrid aspect of it comes from using its data storage system, opening the event loop, clearing widgets every frame, and adding them back every frame. We haven't completed the full documentation on using it this way, but it's there. And because of the ImGui underneath, using it this way doesn't take any real performance hit.


I haven't watched it in a long time, but I recall this as a useful video about it: https://www.youtube.com/watch?v=Z1qyvQsjK5Y


A 3 minute explanation for the visually inclined:

https://www.youtube.com/watch?v=LSRJ1jZq90k


Also adding this which I discovered last week, https://github.com/chriskiehl/Gooey


It'd be interesting to extend this to handle pipelines, maybe with an interface where you drag from the output field(s) of one utility to an input field of another, causing them to be fused somehow.


I'm curious if this is comparable to some of the other Python-based GUI frameworks out there: Tkinter, PyQT, Kivy, Toga.


Those libraries use retained-mode, which is the complete opposite of immediate-mode that Dear ImGui uses.

So they're geared towards different things. I posted an explanation here about the difference between retained and immediate mode: https://news.ycombinator.com/item?id=24318437


Seems to be more comparable to PySimpleGUI, though on the surface it looks a little less simple and a little more featureful.



I had no luck with these... lol


Streamlit and Gradio are in the same category, I think, so what are the differences?

https://www.streamlit.io/

https://www.gradio.app/


Wow, that is a lot less code than I was expecting to produce the results in their examples.


If you want to play with imgui there are some webassembly demos also:

https://jnmaloney.github.io/WebGui/imgui.html


Pretty crazy how this library blew up in the last 48 hours. Lol


Is this due to being posted here, or am I missing something? As an aside, it would be nice if the title was something like:

'DearPyGui: A GPU Accelerated Python GUI Framework'


Idk! Could be that, or because it's trending on GitHub and Reddit's Python subreddit.


Looks neat. How easy is this to setup and use? To distribute?


What's the difference between this and pyimgui ?


pyimgui tries to be a 1-to-1 wrapping of Dear ImGui, including the immediate mode paradigm. DearPyGui wraps Dear ImGui, provides a simulated traditional retained mode API, includes additional widgets and add-ons (plots, file dialogs, images, a text editing widget, etc.), adds asynchronous support, additional items for the canvas, additional debug tools, etc.

Ultimately it tries to provide a complete package, not a 1-to-1 wrapping.


How is it different from https://flutter.dev/desktop


You can probably do the same things with both, but Flutter is more geared toward applications, whereas Dear ImGui is more about putting GUIs in places where the GUI is secondary, such as games, creative tools and other graphics-heavy apps.

Here's what Dear ImGui's readme [1] says:

> Dear ImGui is designed to enable fast iterations and to empower programmers to create content creation tools and visualization / debug tools (as opposed to UI for the average end-user). It favors simplicity and productivity toward this goal, and lacks certain features normally found in more high-level libraries.

> Dear ImGui is particularly suited to integration in games engine (for tooling), real-time 3D applications, fullscreen applications, embedded applications, or any applications on consoles platforms where operating system features are non-standard.

Basically: I wouldn't put Flutter inside a graphics heavy video game. I also wouldn't use ImGui for a social mobile app, or something like a business app that requires accessibility.

[1] https://github.com/ocornut/imgui


> How is it different from https://flutter.dev/desktop

Well, it advertises itself as a simple way to provide a GUI for Python scripts.

I can think of lots of good uses for Flutter, but “a simple way to add a GUI to a Python script” isn't one of them.

So, I'm going to say they are entirely unrelated products with basically non-overlapping domains.


The main difference is that you can put Dear ImGui in your rendering pipeline and manually trigger it each frame. This fine-grained control lets you use it on top of your graphical application as a HUD. Example:

    build_game_frame()
    render_game()
    build_hud_frame()
    render_dearpygui_frame()
    gl_flip()


This may be helpful/relevant: https://github.com/hoffstadt/DearPyGui/issues/83. Edit: (the link discusses making the API more like flutter).


For starters, this library may not be suddenly killed off.


https://github.com/ocornut/imgui

""" Ongoing Dear ImGui development is financially supported by users and private sponsors, recently:

Platinum-chocolate sponsors

Blizzard, Google, Nvidia, Ubisoft """

Joke aside, google did a lot of good for imgui's development. I guess they all love side projects.


I'm more concerned about Flutter going to the grave sooner.


How is it not?

Flutter is Flutter; DearPyGui is a Python binding for ImGui.


I read the question as asking about the differences (advantages and disadvantages) of the two


This is correct


Isn’t that a little weird? Do you mean Flutter and Dart vs. ImGui and Python, ImGui vs. Flutter, or Python vs. Dart?

Anyway, I don’t think ImGui can be used for app development on either Android or iOS, and Flutter doesn’t seem to allow you to use languages other than Dart (at least that’s my impression).


Difference to pyimgui?


pyimgui tries to be a 1-to-1 wrapping of Dear ImGui, including the immediate mode paradigm.

DearPyGui wraps Dear ImGui, provides a simulated traditional retained mode API, includes additional widgets and add-ons (plots, file dialogs, images, a text editing widget, etc.), adds asynchronous support, additional items for the canvas, additional debug tools, etc.

Ultimately it tries to provide a complete package, not a 1-to-1 wrapping.



