The frame rate is incredible given the number of particles. JavaScript interpreters really have come a long way.
The frame rate of this JS particle simulation beats the frame rate you get when you ask the browser to append DOM content, which is handled by native C/C++ that has been optimized to death.
To the author: great work. It looked very much alive on my screen when I first loaded the page.
WebGL is just a JavaScript API specification that exposes an OpenGL-like interface to scripts running within a webpage. The calls that your JavaScript makes to WebGL functions are translated behind the scenes to actual OpenGL calls (although it doesn't have to be OpenGL; I think IE uses DirectX under the hood). Both OpenGL and DirectX are APIs that let you talk to graphics hardware.
Graphics hardware is incredibly fast and programmable (via shaders), and now that WebGL is part of the standard, JavaScript programs can actually use the hardware's capabilities. What we see in a lot of these demos is not the JavaScript interpreter being fast. Although Chrome's V8 actually is fast (compared to other JS implementations, at least), most of the work is being done by the video card.
The Firefox and Safari JS compilers are actually fast too, even faster than V8 in some cases.
For example, I was surprised to find out recently that Firefox 25 is nowadays smoother and faster than Chrome on Mac OS X when running my interactive geometry generator, which is based on JavaScript/canvas (http://GeoKone.NET).
WebGL (or OpenGL ES) is not written in JS (obviously). JS calls into it, but the heavy lifting is being done by a native library. Calling this a "particle generator in JavaScript" is somewhat disingenuous.
The JavaScript uses the WebGL API to compile and upload a small program called a shader to the graphics card. The graphics card is then instructed to run the shader, which produces an image buffer that is rendered directly to the screen.
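For the curious, the compile-and-upload step looks roughly like this from the JavaScript side (a minimal sketch; the vertex/fragment sources would be GLSL strings supplied by the page):

    // Minimal sketch of compiling a shader program with the WebGL API.
    // "gl" is a WebGLRenderingContext obtained from a <canvas> element.
    function compileProgram(gl, vertexSrc, fragmentSrc) {
      function compile(type, src) {
        var shader = gl.createShader(type);
        gl.shaderSource(shader, src);  // hand the GLSL source to the driver
        gl.compileShader(shader);      // the driver compiles it for the GPU
        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
          throw new Error(gl.getShaderInfoLog(shader));
        }
        return shader;
      }
      var program = gl.createProgram();
      gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
      gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
      gl.linkProgram(program);         // link into a runnable GPU program
      if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        throw new Error(gl.getProgramInfoLog(program));
      }
      return program;
    }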
Does anyone have a theoretical description of what is happening? I'm curious about the fractal patterns being generated, and the increase in entropy that results in a catastrophic failure of the stable system. Is this some kind of chaotic system, or is it just a force field being applied to the particles?
Half a million particles are rendered to a texture, which is then blurred, and the gradients are used to update the velocity of the particles in a feedback loop. There's no direct interaction between individual particles. There are two terms that describe the seemingly haphazard behavior: use Google or Wikipedia to learn about "dissipative systems" and "stigmergy".
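For intuition, the per-frame flow is something like this (the names are mine, not the demo's; each helper would be one fullscreen draw into a framebuffer-backed texture):

    // Hypothetical sketch of the feedback loop described above.
    function frame() {
      drawParticlesTo(densityTex);   // splat all particles as 1px points
      blur(densityTex, blurredTex);  // diffuse the density field
      updateVelocities(blurredTex);  // steer particles along its gradients
      integratePositions();          // advance each particle
      renderToScreen();
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);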
I moved the mouse very slightly at the edge, which caused all the particles to eventually settle into two separate spinning groups. Then I started making fast circles around one of the groups, and once the group started following the pattern, I slowly moved the center of rotation towards the other, undisturbed group. This caused it to drift towards the stationary one until they collided, each ripping particles from the other, until they merged.
Galaxies would require coupling of the particles, i.e. interactions through gravity... and this would kill the efficiency of the solver. All-pairs N-particle simulations are O(N^2) (or more).
It's not cutting off. It's replacing many long-range effects with a single, more heavily weighted one (and doing it carefully enough that the floating-point error is literally smaller than you could tell).
There are even better methods that take O(n) time, like the fast multipole ones.
Btw, this gets into numerical analysis, where the O(-) figure is important but so is error convergence. I imagine these big-O figures are for a constant error tolerance, and the asymptotics may change as that tolerance changes.
One could also use FFTs and the particle-mesh method or a TreePM method. Basically, the particle masses are deposited on a fine mesh. Poisson's equation gets solved using FFTs and then the particle accelerations are computed by interpolating on the gradient of the potential field.
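To make that concrete, here's a rough sketch of the Fourier-space step (fft2/ifft2 stand in for any FFT library's 2D transforms, and physical constants are glossed over):

    // Particle-mesh gravity, 2D for brevity: solve Poisson's equation
    // div(grad(phi)) = 4*pi*G*rho on an N x N mesh via the FFT.
    // fft2/ifft2 are hypothetical helpers returning {re, im} arrays.
    function potentialFromDensity(rho, N, G) {
      var spec = fft2(rho);  // rho-hat(k)
      for (var ky = 0; ky < N; ky++) {
        for (var kx = 0; kx < N; kx++) {
          // signed grid frequencies (FFT ordering); 2*pi/L factors omitted
          var fx = kx <= N / 2 ? kx : kx - N;
          var fy = ky <= N / 2 ? ky : ky - N;
          var k2 = fx * fx + fy * fy;
          var i = ky * N + kx;
          if (k2 === 0) { spec.re[i] = 0; spec.im[i] = 0; continue; }
          // Poisson in Fourier space: -|k|^2 phi-hat = 4*pi*G*rho-hat
          var s = -4 * Math.PI * G / k2;
          spec.re[i] *= s;
          spec.im[i] *= s;
        }
      }
      return ifft2(spec);  // the potential on the mesh; accelerations then
                           // come from interpolating its gradient
    }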
You could potentially do better than N^2 (ish) by approximating. Quantize the world into a grid, add up mass at each grid point. Then each particle is affected by this weighted grid, which is made up of a relatively small number of pseudo-particles.
Of course, you'd still need to fall back to N^2 for the nearest particles, or the accuracy would be unusably bad. But it's a much smaller N.
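A sketch of that scheme (illustrative only; the near-field N^2 pairs are left out):

    // O(n) binning plus an O(n * cells) far-field sum instead of O(n^2).
    function gridForces(particles, cells, cellSize, G) {
      // 1) deposit each particle's mass onto the grid
      var mass = new Float32Array(cells * cells);
      particles.forEach(function (p) {
        var cx = Math.min(cells - 1, Math.floor(p.x / cellSize));
        var cy = Math.min(cells - 1, Math.floor(p.y / cellSize));
        mass[cy * cells + cx] += p.m;
      });
      // 2) each particle is pulled by every cell's pseudo-particle
      particles.forEach(function (p) {
        var ax = 0, ay = 0;
        for (var cy = 0; cy < cells; cy++) {
          for (var cx = 0; cx < cells; cx++) {
            var m = mass[cy * cells + cx];
            if (m === 0) continue;
            var dx = (cx + 0.5) * cellSize - p.x;
            var dy = (cy + 0.5) * cellSize - p.y;
            var r2 = dx * dx + dy * dy;
            if (r2 < cellSize * cellSize) continue; // near field: fall back
                                                    // to exact pairs instead
            var r = Math.sqrt(r2);
            ax += G * m * dx / (r2 * r);            // G*m*dx / r^3
            ay += G * m * dy / (r2 * r);
          }
        }
        p.ax = ax; p.ay = ay;
      });
    }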
I'm the original author of this little WebGL experiment and I want to try to answer some questions that came up here.
1) implementation
There's quite a bit of boilerplate in JS to set up all the textures and the main animation loop, but if you look closely, the CPU is mostly idle and the "heavy lifting" is all done on the GPU by several shader programs. There are no libraries used, so you can take a copy of the HTML file and simply start breaking things apart.
For the massive speed, I'm updating the particle data with a clever fragment shader trick that I learned from https://twitter.com/BlurSpline/status/161806273602519040
And in a DIY fashion, I've mashed this up with my own texture feedback loop.
The main idea is that the particle positions (and the velocity vectors too, each 2D only) are stored in a texture's RGBA values (float32). Updating the particle data is thus just a matter of rendering a quad in one draw call. The particles are then also rendered to another texture to sum up the "density" of the primitive 1-pixel point projections.
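In GLSL, that update pass boils down to something like this (a simplified sketch, not the exact shader from the demo; the float32 texture needs the OES_texture_float extension):

    // One texel = one particle: rg = position, ba = velocity.
    // Drawn over a fullscreen quad, this fragment shader updates
    // every particle in a single draw call.
    precision highp float;
    uniform sampler2D uState;      // previous frame's particle data
    uniform vec2 uResolution;      // e.g. 1024 x 512
    uniform float uDt;
    void main() {
      vec2 uv = gl_FragCoord.xy / uResolution;
      vec4 s = texture2D(uState, uv);
      vec2 pos = s.rg + s.ba * uDt;    // advance position by velocity
      gl_FragColor = vec4(pos, s.ba);  // write the new state texel
    }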
2) complexity
When it comes to the mere particle count, the complexity really is O(n), but there's a wee bit more to it. The projection of the particles to the framebuffer object or the screen is the most costly step in this setup, and it's fillrate-limited by the graphics card. There's a noticeable difference between the particles being evenly distributed and overlapping, but it must stay in the O(n) realm, I suppose. Then there's another texture feedback loop system that is also directly dependent on the pixel count.
The particles are stored in a 1024x512-pixel texture, and the hidden texture feedback layer is also of that size, but it could differ too.
There is absolutely no direct interaction between any two particles here. I project the particles to a density texture that is then diffused with an optimized two-pass Gaussian blur calculation and several resolution reduction steps.
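The two-pass trick works because a 2D Gaussian is separable: blur horizontally into a temporary texture, then vertically, turning k*k texture taps per pixel into k+k. One such pass looks roughly like this in GLSL (a sketch with a small 5-tap kernel, not the actual kernel from the demo):

    // Hypothetical separable blur pass; run once with uDirection=(1,0)
    // and once with uDirection=(0,1). Weights are binomial [1,4,6,4,1]/16.
    precision highp float;
    uniform sampler2D uTexture;
    uniform vec2 uResolution;
    uniform vec2 uDirection;
    void main() {
      vec2 uv = gl_FragCoord.xy / uResolution;
      vec2 off = uDirection / uResolution;
      vec4 sum = texture2D(uTexture, uv) * 0.375;
      sum += texture2D(uTexture, uv + off) * 0.25;
      sum += texture2D(uTexture, uv - off) * 0.25;
      sum += texture2D(uTexture, uv + 2.0 * off) * 0.0625;
      sum += texture2D(uTexture, uv - 2.0 * off) * 0.0625;
      gl_FragColor = sum;
    }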
All the textures from the different steps are available as input samplers to the shader programs, in particular "fs-advance" for the Turing patterns and the density projection (hey there, the blue channel is unused ^^) and "fs-move-particles", where I simply grab the gradient from the diffused density to update each particle's velocity vector and do the Verlet integration.
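The gist of that move pass, heavily simplified (a sketch of the idea, not the actual fs-move-particles code; positions are assumed to be in normalized [0,1] coordinates):

    // Sample the diffused density around each particle, steer the
    // particle along the gradient, then integrate.
    precision highp float;
    uniform sampler2D uState;      // rg = position, ba = velocity
    uniform sampler2D uDensity;    // the diffused density field
    uniform vec2 uResolution;
    uniform float uDt, uGain;
    void main() {
      vec4 s = texture2D(uState, gl_FragCoord.xy / uResolution);
      vec2 texel = 1.0 / uResolution;
      // central differences approximate the density gradient
      float dx = texture2D(uDensity, s.rg + vec2(texel.x, 0.0)).r
               - texture2D(uDensity, s.rg - vec2(texel.x, 0.0)).r;
      float dy = texture2D(uDensity, s.rg + vec2(0.0, texel.y)).r
               - texture2D(uDensity, s.rg - vec2(0.0, texel.y)).r;
      vec2 vel = s.ba + uGain * vec2(dx, dy) * uDt;
      gl_FragColor = vec4(s.rg + vel * uDt, vel);  // new position + velocity
    }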
The concepts used here also have names; just ask Google or Wikipedia for "dissipative systems/structures" and "stigmergy".
3) the fluid simulation code is not by me!
Evgeny Demidov is the original author of the WebGL shaders for that: http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
I'm only adding to the current advection matrix.
4) code size
This could possibly fit into a 4k demo, but I have no interest in that kind of challenge. I'd rather share something that is easily readable by others.
This is really very impressive and thanks for sharing!
How many individual particles are visible at any moment?
Would it be possible to control the movement of the particles by the fluid field to form predefined shapes? To make them cluster into predefined (even moving) areas?
For some time now, I've wondered whether it would be possible to visualize population statistics (think of the percentile wealth distribution, for example ;) by using something like this and making every single person "visible" within the statistic.
I believe it would make many "distributions" more intuitively understandable. Imagine several thousand single "very wealthy" entities contrasted with thousands or millions of "average" entities ;)
Obviously such interactive diagrams easily hit (hardware/software) limits, but even with a mapping like one point representing several hundreds or thousands of real-world entities, I think such a display of real-world magnitudes would be very engaging.
1024x512 = 524288, so that's over half a million, and they're all visible. So far they don't carry more individual properties than a position and a motion vector, but I could easily imagine adding another texture for that, and at some point I will also go for more life-like entities with adjustable distributions of status and needs values too, maybe to model behavior in a dynamic system of hunters and gatherers, you know. Give them very basic intelligence, intents, and planning. I had also thought of using a physics simulation for a skeleton model and neural nets for sensors and actuators. But I'm drifting away, and I'm helpless in god-mode. Statistics is computationally heavy and I won't add anything like that too soon. It is and will be an abstract cat toy.
It's nice to see this up here; I've been a fan of your work since the Milkdrop days. Is that when you started making shaders? I gotta say MD2 was one of the best shader playgrounds I've ever used, and it's how I was introduced to shader programming. You should do cinematic special effects :)
Thanks man, and yes. I already had 2 or 3 years of experience with the different scriptable elements and partial differential equations in Milkdrop when they introduced the shader editors in the 2007 Winamp 5.5 release. There were several original shader presets by Ryan Geiss, and I started editing them; that's how I learned it in the beginning. A good bunch of my presets have also made it into the official download since then, but I have literally thousands more on my hard drives. I still look into the Milkdrop forum every once in a while, but I haven't been very active there since I graduated from university. Have you published something too? What's your handle?

In 2010 I started with my first OpenGL clone of the shader pipeline in a Processing sketch, and then came WebGL... and I must have gained enough attention with that to get invited to talk about the stuff I do at FMX2013 this year. The thing is, my current position as a database backend programmer has pretty much nothing to do with my hobby. Thanks for the heads up, it's much appreciated! That's surely something I need to fix asap, but soon I will also be the father of _two_ daughters.
It looks like there's some sort of gravity involved. So how do they get around the O(n^2) problem of comparing the contribution of each particle to every other particle? Did they implement a quadtree of some sort, or is it something else?
It looks like it's using a fluid solver. So, it's not that every particle is interacting with every other particle, but that there's a flow vector being established at each point on the screen and that's being added to each particle's position.
Disclaimer: I haven't actually analyzed the source.
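If that reading is right, the per-particle update is embarrassingly parallel, conceptually just:

    // Sketch: advect particles through a precomputed flow field.
    // flowField.sample() is a made-up stand-in for looking up the
    // solver's velocity at a point; no particle-particle terms at all.
    function advect(particles, flowField, dt) {
      particles.forEach(function (p) {
        var v = flowField.sample(p.x, p.y);
        p.x += v.x * dt;
        p.y += v.y * dt;
      });
    }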
I'm wondering what optimization you can do with a quadtree. The only way I can see is to sacrifice accuracy on distant particles by quantizing and counting. You could use geometric hashing to do the same thing.
Anyway, I don't think this demo is exhibiting inter-particle attraction, but I'm not positive.
Yeah, I assumed this when I noticed that even the Retina laptops with i7s weren't doing as well. I have an Nvidia GTX 560 Ti, which must be doing most of the work.
Took me a long time to notice that my cursor movements were injecting disturbances into the fluid. What physical laws govern these points, and how is the cursor perturbing them?
The much beloved Plasma Pong for older versions of Windows can still be found on the 'net. It did something similar, and I spent quite some time playing in its sandbox.
I agree. Does anyone know of a website that is just people posting really cool, short code snippets? Things like "Here is a checkers AI in 50 lines of Python" or "Here is a Neural Network in 30 lines of Perl".
Three.js also has some pretty easy-to-follow particle samples:
http://threejs.org/examples/#webgl_particles_random
http://threejs.org/examples/#webgl_particles_sprites
http://threejs.org/examples/#webgl_buffergeometry_particles