Using Kolmogorov Complexity and then using PCA is not valid, since you are approximating a solution and Kolmogorov Complexity is about exact solutions. Anyway, perhaps there is, or should be defined, a signal/noise Kolmogorov Complexity measure, that is, the shortest length of a program that computes an approximate solution within an epsilon distance of the true solution. Also, since PCA is discussed, why not use SVD?
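For concreteness, one way to write such a measure down (my own notation, not anything from the article; it is essentially the quantity studied in algorithmic rate-distortion theory) would be

    K_\epsilon(x) \;=\; \min\{\, |p| \;:\; d(U(p),\, x) \le \epsilon \,\},

the length of the shortest program p whose output lands within distortion epsilon of x, for a fixed universal machine U and distortion measure d (Euclidean distance would be the natural choice in the PCA setting).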
Edited: See (1) for some related ideas: A Safe Approximation for Kolmogorov Complexity

(1) https://link.springer.com/chapter/10.1007/978-3-319-11662-4_...
> perhaps there is, or should be defined, a signal/noise Kolmogorov Complexity measure
This is studied in so-called algorithmic rate-distortion theory:
de Rooij, S., & Vitányi, P. (2012). Approximating Rate-Distortion Graphs of Individual Data: Experiments in Lossy Compression and Denoising. IEEE Transactions on Computers, 61(3), 395–407. https://doi.org/10.1109/TC.2011.25
Vereshchagin, N., & Vitányi, P. (2006). On Algorithmic Rate-Distortion Function. In Proceedings of the 2006 IEEE International Symposium on Information Theory (pp. 798–802).
Well stated. The author also misses two other critical points: (1) accuracy is a poor measure of quality for non-numeric, classification-type problems, and (2) model complexity can only be increased so far before overfitting becomes a problem. You can’t arbitrarily increase the number of weights and expect that the NN will continue to improve.
The thing about NNs is that increasing the number of weights does improve performance. The standard way to get good performance and to see whether your architecture works is to make the network huge (wide) first. Once you see that it works, you make it small.
The "common wisdom" of "too many parameters will make you overfit" is most definitely not that important for the way modern NN training works.
Overfitting shouldn't be an issue for approximation of a known function, where you can generate an arbitrary amount of "training data". Of course you may not have the resources to do so, but that's a whole different tradeoff.
Key quote from the conclusion: "Neural Networks are not just good for things we don't know how to solve, they can provide massive performance gains on problems we already know how to solve."
That quote is in reference to tasks such as physics simulation. There is an incredible GIF in the OP which shows a digital mannequin being manipulated, with its dress flowing in a hyper-realistic manner due to ML physics simulation. It would be uncanny to see that type of simulation combined with AR.
I'm curious to what extent ML physics simulation may be beneficial for self-driving cars. Generally, we as drivers know the physical properties of objects that we can collide with. Cars don't have that understanding, so they might "think" that colliding with a large paper bag is unacceptable. Stopping suddenly because of that paper bag may be fatal.
I don’t think the cloth simulation is based on ML; it looks like a normal physics-based solver. If I understand the article correctly, the author just uses it as an example of complex behavior that could be learned using ML, but doesn’t indicate that this was actually done using ML.
I know of some papers that try to improve physics simulations with deep learning, and I think it’s definitely possible; I’m not sure, though, whether it can really improve most physics-based simulations.
Physical models are already highly condensed and have the advantage of being interpretable; deep learning has a long way to go before it could be used as a replacement, IMO.
A “300-5000x” gain in simulation speed for low-complexity simulations would be impressive. If a blade of grass is low complexity, then I assume that would free up resources from basic environmental rendering for more complex objects, and for greater complexity in those objects (which themselves might benefit from ML, but with <300x gains). Until everyone has near-infinite computing power, that’s a de facto improvement in simulation quality, right?
How would you reach such a gain, though? Most physics-based simulations are really efficient already. For example, cloth simulations usually work using a grid of points (a finite-element-style approach) that are evolved using a differential equation and a constraint solver (I’m not an expert in this particular area, but I have written e.g. electrodynamic simulations). Each point is usually only connected to its immediate neighbors, so the update algorithm only needs to do some simple arithmetic operations for each point. Often there’s another step afterwards that ensures the constraints are met. All of this can already be executed really efficiently on a GPU. Here’s an example of an electrodynamics simulation I wrote: https://github.com/adewes/fdtd-ml
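To make "simple arithmetic per point" concrete, here is a toy sketch of that kind of nearest-neighbor cloth update (a mass-spring grid with Verlet integration; this is not the article's solver, and the grid size, stiffness, damping and time step are invented for illustration):

    import numpy as np

    # Toy cloth: an N x N grid of particles with springs to the 4 immediate
    # neighbors, stepped with Verlet integration. All constants are made up.
    N, REST, STIFF, DAMP, DT = 32, 1.0, 40.0, 0.98, 1.0 / 60.0
    GRAVITY = np.array([0.0, -9.81, 0.0])

    gx, gz = np.meshgrid(np.arange(N, dtype=float), np.arange(N, dtype=float))
    pos = np.stack([gx * REST, np.zeros_like(gx), gz * REST], axis=-1)  # (N, N, 3)
    prev = pos.copy()

    def spring_force(p, axis, shift):
        """Hooke force from the neighbor reached by shifting along one grid axis."""
        delta = np.roll(p, shift, axis=axis) - p
        dist = np.linalg.norm(delta, axis=-1, keepdims=True) + 1e-9
        f = STIFF * (dist - REST) * delta / dist
        # Zero out the wrap-around edge introduced by np.roll (free cloth borders).
        if axis == 0:
            f[0 if shift == 1 else -1, :, :] = 0.0
        else:
            f[:, 0 if shift == 1 else -1, :] = 0.0
        return f

    def step(pos, prev):
        # Per particle: 4 spring evaluations, gravity, and one Verlet update,
        # i.e. a handful of multiply-adds -- the "simple arithmetic" above.
        force = GRAVITY + sum(spring_force(pos, a, s) for a in (0, 1) for s in (1, -1))
        new = pos + DAMP * (pos - prev) + force * DT * DT
        new[0, :, :] = pos[0, :, :]  # pin one row so the cloth hangs
        return new, pos

    for _ in range(100):
        pos, prev = step(pos, prev)
    print(pos.mean(axis=(0, 1)))  # rough sanity check: the cloth sags along -y

Real solvers add collision handling and constraint projection on top, but the per-point update really is this cheap.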
Achieving a 300-5000x speedup of this seems impossible to me, because we only do a few numerical operations to update a single grid point, so it's hard to see how that could be reduced much further; even an ML-based model will need to update each grid point to maintain the level of detail.
I think there are definitely other areas where ML can speed up things, but IMO cloth simulation is just a really bad example because it's a problem that can be solved using a nearest-neighbors approach with rather simple equations. Problems where you have non-local interactions or more complex dynamics might profit more from ML, but most physics problems can be solved faster with much simpler approaches, I think.
They used low-inertia cloth and solids because these are the kinds of scenarios that this technique can best approximate.
Once you buy into the concept that your simulation can be represented by a very limited number of parameters it's not hard to suggest that this small state vector can be mutated from state to state using ML.
The issue with this technique is PCA, not ML.
I can certainly see this technique being used in place of some existing simulation or animation based secondary motion in video games.
Note that adding up the contributions of 256 basis vectors might be more expensive than a per-vertex cloth simulation.
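Rough numbers behind that point (the 256 is from above; the cost of a simple per-vertex spring update is a ballpark guess on my part, not a measurement):

    # Back-of-the-envelope cost per vertex per frame (rough, invented numbers).
    N_BASIS = 256      # subspace size mentioned above
    DIMS = 3           # x, y, z per vertex

    # Subspace reconstruction: each vertex coordinate is a dot product of the
    # 256 coefficients with that vertex's basis entries (one mul + one add each).
    recon_flops = 2 * N_BASIS * DIMS          # 1536

    # A simple local mass-spring update: ~4 neighbors times a few ops, plus
    # damping/integration -- ballpark only, heavily solver-dependent.
    spring_flops = 4 * 15 + 20                # ~80

    print(recon_flops, spring_flops)

So the win can't come from the reconstruction itself; it has to come from replacing a more expensive solve (collisions, constraints, implicit integration) or from stepping only the small coefficient vector instead of every vertex, which is the "mutate the small state vector" idea above.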
I am experienced in these areas: I did graduate-level computational physics and work on game engines for a living.
When I first learned about Kolmogorov complexity, I understood it as the number of symbols, from some specific vocabulary, that you have to use to represent something.
It goes hand in hand with compression (and pigeonhole arguments). Using Kolmogorov complexity to improve ML in those physical examples means that the solution will be better suited to the specific case, not that a one-size-fits-all solution to any kind of cloth animation will emerge.
In particular, since the number of binary strings of length n exceeds the number of shorter strings, there must be a string of length n that's not described by a shorter program, i.e. that's incompressible.
More generally, the fraction of strings of length n that can be compressed by k or more bits is less than 2^{-k}.
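Spelling the counting out (the standard pigeonhole bound, nothing specific to the article): the number of programs shorter than n - k bits is

    \sum_{i=0}^{n-k-1} 2^i \;=\; 2^{n-k} - 1 \;<\; 2^{n-k},

so fewer than a 2^{-k} fraction of the 2^n strings of length n can have a description shorter than n - k bits; with k = 0 this already forces at least one string of each length to be incompressible.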
There's an odd jump in this narrative. PCA is indeed a great technique, but the essay goes from PCA to neural nets without explaining why. PCA was around for a long time before NNs and there are fast incremental ways to do it. Why bother with a million-weight NN if PCA will do the job?
My understanding is that the neural network is used to predict the "magnitude" of each eigenvector/axis (extracted using PCA) in order to reconstruct an approximation of the original behavior.
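A minimal numpy sketch of that pipeline, with made-up snapshot data and a hypothetical placeholder (predict_coeffs) standing in for whatever network the article actually uses:

    import numpy as np

    rng = np.random.default_rng(0)

    # Fake snapshot matrix: F frames of a mesh with V vertices (x, y, z each),
    # flattened into rows. In the real setting these come from offline simulation.
    F, V, K = 500, 4000, 256
    snapshots = rng.normal(size=(F, 3 * V))

    # PCA via SVD of the centered snapshots (which is also why "PCA vs. SVD"
    # is mostly a non-distinction here: the basis falls out of the SVD).
    mean = snapshots.mean(axis=0)
    _, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    basis = Vt[:K]                               # (K, 3V) principal directions
    coeffs = (snapshots - mean) @ basis.T        # (F, K) per-frame coefficients

    def predict_coeffs(state):
        """Hypothetical stand-in for the learned model: it would map pose/state
        parameters (or the previous frame's coefficients) to the K subspace
        coefficients. Here it just returns a stored frame's coefficients."""
        return coeffs[0]

    # Reconstruction: mean shape plus predicted coefficients times the basis.
    frame = mean + predict_coeffs(None) @ basis
    print(frame.reshape(V, 3).shape)             # (4000, 3) vertex positions

PCA alone only gives you the subspace; the learned part is producing the right coefficients for an unseen state each frame.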
Are there any recommended resources to learn more about the design of the fabric physics simulation? The demo with the mannequin looked incredible, and taking a stab at an ML algo that could learn that sounds like a fun project.