> Well, for 40 bouncing circles, on a 700x500 grid, that would be on the order of 14 million operations. If we want to have a nice smooth 60fps animation, that would be 840 million operations per second. JavaScript engines may be fast nowadays, but not that fast.
The math is super-cool, and efficiency is important for finding isosurfaces in higher dimensions, but those aren't really scary numbers for normal programs. Just tinting the screen at 2880x1800 is ~5 million operations per frame. GPUs can handle it.
A simple way to render is to draw a quad for the metaball, using the metaball kernel function in the fragment shader. Use additive blending while rendering to a texture for the first pass, then render the texture to screen with thresholding for the second pass. The end result is per-pixel sampling of the isosurface.
Admittedly, it's kind of a brute-force solution, but even the integrated GPU on my laptop can render thousands of metaballs like that at HiDPI resolutions.
(Specifically, I use a Gaussian kernel for my metaballs. It requires exp, which is more expensive computationally than a few multiplies. I render 1500 of them at 2880x1671 at 5ms per frame on an Intel Iris Pro [Haswell].)
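For illustration, here's a CPU sketch in JavaScript of the same two-pass idea (the ball list, sigma, and threshold values are made up for the demo): pass one accumulates a Gaussian kernel per pixel, which is what additive blending does on the GPU, and pass two thresholds the accumulated field:

```javascript
// Two-pass metaball rendering, sketched on the CPU.
// Pass 1 accumulates kernel contributions per pixel (what additive
// blending into a texture does on the GPU); pass 2 thresholds the
// accumulated field to get the isosurface.

function gaussianKernel(dx, dy, sigma) {
  return Math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
}

function renderMetaballs(width, height, balls, threshold) {
  const field = new Float32Array(width * height);
  // Pass 1: additive accumulation of each ball's kernel.
  for (const b of balls) {
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        field[y * width + x] += gaussianKernel(x - b.x, y - b.y, b.sigma);
      }
    }
  }
  // Pass 2: threshold the field (1 = inside the isosurface).
  return field.map(v => (v >= threshold ? 1 : 0));
}
```

On the GPU the inner loops disappear: each ball is a quad, the kernel runs per fragment, and blending does the summation.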
Though, the work scales with fragment count, so a few large metaballs may be as costly as many smaller ones. For large numbers of metaballs, you probably also want to use instancing, which requires OpenGL ES 3.0 / WebGL 2.0; both are fairly recent.
But 40 metaballs with a simple kernel at 700x500? That's easy for a GPU.
The important bit is getting the metaball function into the fragment shader. I'm not really a web guy, but I know you can do that with WebGL.
For a canvas with a more limited API, you can still do it, provided image drawing is GPU-accelerated and a composite mode like "lighter" is available. In that case, you can do basically the same thing by first rendering the metaball function to an image once, and then drawing that image for each metaball. Going through an image introduces extra aliasing artifacts, but it might get around the API limitations.
Edit: I suppose you would still want to find a GPU-accelerated threshold function for the step after that.
I remember what I did to optimize my own metaballs: after rendering each distinct ball size once into its own offscreen buffer at (0, 0), I would just add up the values at the relative positions (signs removed) from the appropriate precomputed map.
It does take some memory, but n operations per pixel per frame for n balls, plus the overhead of transforming into actual color, was still pretty great. Instead of saying "the GPU can do this", I'd rather ask: hey, can we do even better than that?
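In case it helps, here's roughly what that precomputed-map trick looks like in JavaScript (the falloff kernel below is my own stand-in, not necessarily the one you used): build one table per distinct ball radius, then sample it by absolute offsets, since the kernel is radially symmetric:

```javascript
// Precompute kernel values for one ball size into a table covering
// offsets 0..radius on each axis; symmetry means we only need the
// positive quadrant, so lookups drop the signs of (dx, dy).

function buildKernelMap(radius) {
  const size = radius + 1;
  const map = new Float32Array(size * size);
  for (let dy = 0; dy <= radius; dy++) {
    for (let dx = 0; dx <= radius; dx++) {
      const d2 = dx * dx + dy * dy;
      // Simple falloff kernel: r^2 / (d^2 + r^2); any kernel works here.
      map[dy * size + dx] = (radius * radius) / (d2 + radius * radius);
    }
  }
  return map;
}

function sampleKernel(map, radius, dx, dy) {
  const ax = Math.abs(dx), ay = Math.abs(dy);
  if (ax > radius || ay > radius) return 0; // outside the map's support
  return map[ay * (radius + 1) + ax];
}
```

Per frame you then only do table lookups and additions per pixel, trading memory for math, exactly the tradeoff described above.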
I used to do that, but sampling from a texture seems to be about as expensive as directly calculating the metaball function per fragment.
It is cool to see what we can do. That is one of the things I really like about winkerVSbecks' approach. It's interesting and different. Better for some uses, too, which is always nice to see.
It feels more organic to me if the original metaball gets smaller as the other one moves out (like it's stealing material). I haven't worked out the correct math, but a quick PoC is here:
Keeping a 100% accurate constant area would require a pretty insane closed-form equation, even ignoring the amount of area added by the stretched portions.
Yes, in this case a numeric approach would probably be the way to go:
- Assume that R1, R2 are the radii of the discs and A_ORIG is the original area (e.g. R1^2 * PI)
- Calculate the area A for a given R1, R2
- Multiply R1 and R2 with SQRT(A_ORIG / A)
- Repeat
If this doesn't converge after a few iterations, you can use Newton's method, or even a simple binary search, to find the correct radii very quickly. A(k R1, k R2) should be monotonic in k, so solving it numerically for a given value should be trivial.
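Here's a quick JavaScript sketch of that iteration, using the standard circle-circle lens formula for the union area (the starting radii and distance below are arbitrary, and the area added by the stretched connecting portions is ignored, as noted above):

```javascript
// Exact area of the union of two overlapping discs of radii r1, r2
// whose centers are distance d apart (union = sum of discs - lens).
function unionArea(r1, r2, d) {
  if (d >= r1 + r2) return Math.PI * (r1 * r1 + r2 * r2); // disjoint
  if (d <= Math.abs(r1 - r2)) {
    return Math.PI * Math.max(r1, r2) ** 2; // one disc inside the other
  }
  const a1 = r1 * r1 * Math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1));
  const a2 = r2 * r2 * Math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2));
  const t = (-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2);
  const lens = a1 + a2 - 0.5 * Math.sqrt(t);
  return Math.PI * (r1 * r1 + r2 * r2) - lens;
}

// The iteration from the steps above: rescale both radii by
// sqrt(A_ORIG / A) until the union area matches the target.
function normalizeRadii(r1, r2, d, targetArea, iterations = 30) {
  for (let i = 0; i < iterations; i++) {
    const k = Math.sqrt(targetArea / unionArea(r1, r2, d));
    r1 *= k;
    r2 *= k;
  }
  return [r1, r2];
}
```

In practice this converges in a handful of iterations, because the overlap fraction changes slowly compared to the k^2 scaling of the area.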
Oh wow, it's a Gaussian blur and basically a threshold function (boosting the contrast) using SVG filters! I didn't know this was possible. Very clever!
Interesting approach! Coincidentally, I published an article [0] on this very topic last month. It uses sampling, so it's close to the approach mentioned in the Jamie Wong article you (and I) linked to, but with a path-tracing step capable of producing an SVG path definition. I'd be interested to see how the performance of these two methods stack up to each other for a given quality level.
Much better. There are definitely many improvements that could be made, but that was the main one. The other big one is that the bigger disc doesn't change size.
The approximation the OP uses is a good start, but it's still far from real metaballs.
I'm extremely impressed with the implementation. I'm not sure what I would say if presented with a design like this slider for web. Wouldn't have imagined it would work this beautifully as well.
It definitely moved the opposite way I expected it to, but this way grew on me, hah.
It's not a perfect UI element: you can't actually see the options without scrolling through them all. But I could imagine something similar being a pretty cool little thing in the right context.
It's possible to do this somewhat efficiently beyond two balls with GLSL and lots of uniforms (or a UBO), since metaballs from the graphics perspective are really just distance fields.
If you want more than a few balls, you can do it in two passes: one to produce the distance field, and one to threshold it.
As an added benefit, it's straightforward to generalize these approaches to any two-dimensional continuous function.
I did something fairly similar to this here: https://codepen.io/thomcc/pen/vLzyPY (I need to look into why this isn't running at 60fps anymore on my laptop, it certainly used to...)
The big difference is that it prerenders a gradient for each ball (it uses html5 canvas for that, but doing it with webgl is completely doable, although a bit more work), which is used as a distance field.
> I need to look into why this isn't running at 60fps anymore on my laptop, it certainly used to...
Runs at 60fps for me on a Chromebook from 2014. I suspect you're looking at it on macOS, which has had very poor (arguably the poorest of any x86 platform) OpenGL drivers for the last four or five years.
Far from the worst: if the update rate is reasonably low, this approach makes it a lot simpler to handle high-resolution displays and event registration. It should also run on a few more browsers and devices.
The math can be optimized by at least an order of magnitude.
Trigonometry functions are expensive, especially the inverse ones (arcsine, arccosine, arctangent).
If v = 0.5, see [1] for how to compute the sine/cosine of maxSpread * v from those of maxSpread (the half-angle formulas). For angleBetweenCenters + maxSpread * v, see [2] for the sine/cosine of a sum of angles.
If you do all that math in symbolic form (you can use Maple, Mathematica, or something similar), you'll get equivalent formulae for p1-p4 that won't use any trigonometry, only simple math and probably a square root.
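A small JavaScript sketch of the identities being suggested (the variable names follow the comment above; `sinCosOfSpreadAngle` is a hypothetical helper, not from the original code): compute sin/cos of the base angles once, then derive the combined angle's sin/cos with the half-angle and angle-sum formulas instead of calling the trig functions again:

```javascript
// sin/cos of angleBetweenCenters + maxSpread * v, using identities
// to avoid extra trig calls when v = 0.5.
function sinCosOfSpreadAngle(angleBetweenCenters, maxSpread, v) {
  const sinA = Math.sin(angleBetweenCenters);
  const cosA = Math.cos(angleBetweenCenters);

  let sinB, cosB;
  if (v === 0.5) {
    // Half-angle formulas (valid for maxSpread in [0, pi]): only one
    // cosine call plus two square roots, instead of two trig calls.
    const cosS = Math.cos(maxSpread);
    cosB = Math.sqrt((1 + cosS) / 2);
    sinB = Math.sqrt((1 - cosS) / 2);
  } else {
    sinB = Math.sin(maxSpread * v); // general case still needs trig
    cosB = Math.cos(maxSpread * v);
  }

  // Angle-sum formulas: sin(a+b), cos(a+b) from the parts above.
  return {
    sin: sinA * cosB + cosA * sinB,
    cos: cosA * cosB - sinA * sinB,
  };
}
```

Done fully symbolically, as suggested, even the remaining calls fold into the closed-form expressions for p1-p4.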
I implemented a 2D angle class for cases like this, where you would otherwise use these slow trigonometry functions. It's C++, but it should be easy to convert to JavaScript or any other OO language.
After years of living in the US, I still have trouble remembering that the word is "cockpit", not "cocktip". That became hilariously obvious when, at work, we had to use a library that uses the term "cockpit" for one of its main components.
I just started learning GLSL shaders. As practice, I wrote a pseudo-metaball joystick. I didn't know about metaballs, but now that I do, I can do some more research and improve my next iteration.
About 2 minutes in, there's an excellent realtime metaballs implementation that ran smoothly on a 66 MHz 486. Metaballs were an extremely popular effect in the early '90s.
Paper.js is truly a great source of vector drawing tricks.
Curious how difficult it would be to extend this technique beyond two circles. Might have to dust off some old experiments ... :)
Metaballs are always nice, but I find this page (linked in the article), which shows compass-and-straightedge constructions, to be especially nifty:
During or just before WW2, Roy Liming developed analytic techniques for calculating a similar class of blend or fillet. They were taken up in aircraft design, a field in which I would never have imagined implicit surfaces being used! I think it was Edgar Schmued's design for the P-51 Mustang that famously used Liming's work.
We've used the blur+contrast approach successfully in EventDrops [1], a time-series visualisation based on d3.js. It all happens client-side, with OK performance. I'm not sure the SVG approach brings more in this case.
I wonder how much would need to be adjusted to apply a scaling factor to the first metaball so that the total area stays constant (thus ending up with two equally sized metaballs), or even to use the speed of the pull to determine the second ball's size.
In order to render a surface, you have to either use a contouring algorithm like marching cubes to generate a mesh, as in the three.js demo above, or raytrace or raymarch them. Because metaballs describe a distance function, it's really easy to use SDF raymarching, and there's a whole category dedicated to metaball shaders on Shadertoy (https://www.shadertoy.com/results?query=tag%3Dmetaballs).
> SPH ... with 500 000 particles ... about 2.5 fps on my GTX 1070
Still slower than what CNCD & Fairlight demonstrated in 2011 with "Numb Res", at 120fps (stereo 3D) on a GeForce 280:
> The demo features up to 500,000 particles running under 3D SPH in realtime on the GPU, with surface tension and viscosity terms; this is in combination with collisions, meshing, high end effects like MLAA and depth of field, and plenty of lighting effects
Metaballs would not be for simulating the fluid but for creating the simulated fluid's surface. In your YouTube link, it would be a step between "simulating particles" and "meshed result".
"Fluid simulation" was a bit of a nonsense response to that question, but there are similarities between metaballs and fluid simulations. The kernel functions used to interpolate Smoothed Particle Hydrodynamics samples are basically the same thing as metaball functions. The main difference is that you probably don't need the isosurface during simulation.
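To illustrate how close the two really are, here's the standard poly6 kernel from the SPH literature (with its 3D normalization constant), sketched in JavaScript. Like a metaball kernel, it's just a smooth radial falloff with compact support:

```javascript
// The poly6 SPH smoothing kernel: a smooth radial falloff that
// reaches zero at the support radius h, much like a metaball kernel.
// The 315 / (64 * pi * h^9) factor normalizes it over 3D space.
function poly6Kernel(r, h) {
  if (r >= h) return 0;
  const c = 315 / (64 * Math.PI * Math.pow(h, 9));
  const t = h * h - r * r;
  return c * t * t * t;
}
```

Summing this over particle positions gives a density field; thresholding that field is exactly the metaball isosurface construction.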
On a related note, one of the annoying things about metaballs for fluid surfacing is that there's some spooky action at a distance. Two drops of water will reach out towards each other as they come closer together, which makes no physical sense at all.
I would love to incorporate this in some of the UI design work we do for startups. Are there more similar libraries available? We could recommend them to our network of clients (mostly developer-driven startups) to help translate some of the design ideas we propose. If you know of other projects like Metaballs, please share below or ping me (details in my bio).