This apparently uses the back-propagation engine from https://alpha.trycarbide.com/ (which the author of g9 also works on). When I saw it I thought I remembered seeing it in the Carbide demo, so I Googled and I was right. Super neat that it's packaged up into a library though. Really cool system.
Looks like graphics.isManipulating is a getter property, not a function. That breaks the kevin clock example and the commented-out part of the piston example.
On Firefox, the graph isn't "real time": it only updates when you stop dragging. Other examples in the gallery seem to work in real time though, so I suppose it's just a problem with that page?
The Carbide demos show the original source data values updating in response to manipulating the output. I'm assuming that's still possible with g9, just not hooked up to the editor component?
That is a neat idea! It reminds me of another cool constraint-based animation system called "Embedded Constraint Graphics" that Tom Ngo developed at Interval Research Corporation, for which he filed a patent on Aug 6, 1996.
So hey, didn't that patent just expire a month ago?
Golan Levin used the ECG graphical editor to create the vector-based face cartoons for his "Mouther" project, by simply dragging the eyes and mouth and other features around like you'd naturally want to be able to do. [2]
It's a really brilliant way to automatically create directly manipulable interactive graphics from target examples, which you can interpolate between along multiple dimensions at once (zones of a simplicial complex [3]) by dragging different parts of the graphics appropriately. It would figure out how to map the direction and amount you're dragging at a particular location to appropriate movement in the n-dimensional target interpolation space. It was great for making cartoony direct-manipulation user interface widgets!
"A constraint-based graphics system employs different examples of an image to define the constraints of the system. The examples are grouped into subsets which can be interpolated with one another, according to a user-specified input that determines the relative proportion of each example image. An animation can be created by defining a sequence of such interpolated images. Alternatively, a user can directly manipulate observable components of an image to define a particular state for the image. Automatic transformations are defined to provide registration of an image within an overall scene by shifting the frame of reference for an image so that unnatural movements do not occur as the animation proceeds through a sequence of states. The structure of the system permits an animation to be divided into distinct components that can be combined with complementary components of other animations, to provide new results. These components include a clip motion which defines a sequence of states, a clip character which applies the clip motion to a particular image configuration, and clip art."
"The various image examples can be associated with one another in a manner that defines a topological data structure which identifies their relationships. An example of such a data structure for the images of FIGS. 2A-2D is shown in FIG. 3. Referring thereto, each of the four image examples is associated with a vertex of a geometric shape. Specifically, the three image examples of FIGS. 2A, 2B, and 2C, which form one subset, define a triangular shape. The second subset, comprising the examples of FIGS. 2A, 2B, and 2D, defines a second triangle. Since the examples of FIGS. 2A and 2B are shared between the two subsets, the two triangles are joined along a common interface. Each subset of examples constitutes a simplex, or zone, and all of the zones together form a combinatorial structure, or state space, known as a simplicial complex. In the case illustrated in FIG. 5, the state space is composed of two triangular zones, ABC and ABD. While both zones in this case are two-dimensional structures, it is also possible for a state space to include one-dimensional zones, i.e. a line whose end points are defined by two examples, as well as multi-dimensional zones such as a tetrahedron defined by four examples."
"The combinatorial structure defines a state space for the graphics system. Any given point within this space defines a particular image, and movement within the state space causes the appearance of the image to change. More particularly, each of the vertices corresponds to one of the image examples shown in FIGS. 2A-2D. A point located between two vertices results in an interpolated image comprised of a representative portion of each of the two images associated with the respective vertices. Thus, as one moves from the lowest vertex A in the structure of FIG. 3 up the vertical line 10 to the highest vertex B, the figure's arms smoothly move from the position shown in FIG. 2A to that shown in FIG. 2B. Movement from the lowest vertex A to the left vertex C causes a transition in the image from that of FIG. 2A to that of FIG. 2C. A point located somewhere within the triangle defined by the three vertices A, B and C corresponds to an image in which the arms are partially raised and the right leg is partially lifted. For example, the point 12 represents a position in which the image is a weighted composite consisting of 60% of FIG. 2B, 30% of FIG. 2A and 10% of FIG. 2C. The percentages sum up to unity, and the weight values which correspond to these percentages, i.e. 0.6, 0.3 and 0.1, constitute a vector in barycentric coordinates. In a relatively simple embodiment of the invention, the examples are interpolated linearly; but in a different embodiment, the interpolation could employ non-linear functions, such as cubic polynomials. Any state s within the state space can be specified by a zone, in this case the zone containing the examples A, B and C, together with the vector, i.e. 0.3, 0.6, 0.1!."
That's definitely a really interesting piece of prior art! It looks like the patent expired last month. I'm sure there are a lot of other interesting approaches in this space, so I (and, I'm assuming, other people in the HN crowd) would really appreciate pointers to similar work in the comments!
A lot of the ideas for constraint-based graphics date back even earlier (1963!) to Ivan Sutherland's Sketchpad (YCR's HARC revisited some of the under-explored ideas of Sketchpad in 2014 https://github.com/cdglabs/sketchpad14).
Also by HARC, there's Apparatus (http://aprt.us/) which, like g9, uses numerical gradient descent to handle constraints (we actually both use the insanely awesome uncmin implementation from NumericJS).
And of course there's the work by Ravi Chugh, et al. on prodirect manipulation with Sketch-N-Sketch (https://ravichugh.github.io/sketch-n-sketch/), which handles constraints symbolically with careful provenance tracking.
From CMU's HCII, Stephen Oney's ConstraintJS (http://cjs.from.so/) also explores the idea of using constraints for building interactive web applications, though, like Sketch-N-Sketch, it's primarily a symbolic approach.
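For anyone curious what the numerical approach looks like in practice, here's a minimal sketch of the general idea (my own illustration, not g9's or Apparatus's actual internals): pose "make the rendered point land where the user dragged it" as unconstrained minimization and hand it to NumericJS's uncmin. The render function and parameters below are made up for the example.

    // Requires numeric.js; numeric.uncmin(f, x0) returns { solution, ... }.
    var numeric = require('numeric');

    // A made-up render function: parameters -> position of one draggable point.
    function render(params) {
      var angle = params[0], radius = params[1];
      return [radius * Math.cos(angle), radius * Math.sin(angle)];
    }

    // When the user drags the point to (tx, ty), minimize the squared distance
    // between where the parameters put the point and where the mouse is.
    function solveDrag(params, tx, ty) {
      var loss = function (p) {
        var pos = render(p);
        var dx = pos[0] - tx, dy = pos[1] - ty;
        return dx * dx + dy * dy;
      };
      return numeric.uncmin(loss, params).solution;
    }

    console.log(solveDrag([0, 50], 30, 40));
    // -> approximately [0.927, 50]: the angle rotates toward the drag target.

The symbolic systems (Sketch-N-Sketch, ConstraintJS) instead reason about the program or constraint graph directly rather than running an optimizer on every drag.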
Disclaimer: I'm one of the developers of Carbide, and contributed a little bit to G9.
Yes, there's a lot of interesting prior art about graphical constraints, and Ivan Sutherland's Sketchpad inspired so much work on so many different levels. Thanks for all those references, I'll check them out.
Ivan Sutherland was on the thesis committee of James Gosling, who wrote his doctoral thesis at CMU, entitled "The Algebraic Manipulation of Constraints" [1]:
Abstract: "Constraints are a way of expressing relationships among objects; satisfying a set of constraints involves finding an assignment of values to variables that is consistent with the constraints. In its full generality, constructing a constraint satisfaction algorithm is a hopeless task. This dissertation focuses on the problem of performing constraint satisfaction in an interactive graphical layout system. It takes a pragmatic approach and restricts itself to a narrow but very useful domain. The algorithms used by MAGRITTE, an editor for simple line drawings, are presented. A major portion of the work concerns the algebraic transformation of sets of constraints. It describes algorithms for identifying difficult subregions of a constraint graph and replacing them with a transformed and simplified new constraint."
I guess the code is supposed to update when I drag the graphics?
But nothing happens for me on Firefox [Ubuntu, 64-bit].
This does look great, and I've been wanting to build something like this into a livecoding app for ages, but had no idea how it would work - now, thanks to the comment about backpropagation on this page, I do :)
First, you make up some names and give each name a number.
Then you draw shapes based on the names, not the numbers.
For example, you might make a number called my_cool_number, set it to 10, and then say "draw a circle my_cool_number pixels from the top, and 300 pixels from the left".
Then your screen shows a circle 10 pixels from the top and 300 pixels from the left.
If I go and drag the circle, g9 will figure out how to change your numbers. For example, if I drag it down, g9 might say "now my_cool_number is set to 100". If my_cool_number is 100, then "draw a circle my_cool_number pixels from the top, and 300 pixels from the left" will draw a circle that's lower down.
That way it's like I'm actually moving the circle.
The cool part is that if you had other parts of your drawing that also depended on my_cool_number, they'll also move when it changes.
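For anyone who wants to see roughly what that looks like in code, here's a sketch of the circle example above using g9. I'm going from memory of the g9 README, so treat the exact method names (ctx.point, insertInto) and the coordinate conventions as approximate rather than authoritative.

    // Rough sketch of the explanation above; assumes the g9 library is loaded.
    // Method names are recalled from the g9 README and may not be exact.
    var initialData = {
      my_cool_number: 10   // "a number called my_cool_number, set to 10"
    };

    // Draw shapes based on the names, not the numbers: a point 300 pixels
    // from the left and my_cool_number pixels from the top.
    function render(data, ctx) {
      ctx.point(300, data.my_cool_number);
    }

    // g9 makes the point draggable; drag it down and it solves for a new
    // my_cool_number that puts the point under the mouse. Anything else drawn
    // from my_cool_number moves along with it.
    g9(initialData, render).insertInto('#demo');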
Does anyone know of a library that somehow black-boxes / abstracts away the manual calculus involved in such visualizations? I imagine that'd greatly lower the barrier to entry for creating stunning graphics and animations, and with the help of g9.js even more so.
See https://twitter.com/antimatter15/status/779776900042555393