I was looking for exactly this only hours ago and didn't know how to phrase it in my Google search. Such a bizarre coincidence.
From page 8: "The manifold assumption is powerful because it lets us translate many of the things we know how to do in flat Euclidean spaces (e.g., work with vectors, differentiate, integrate, etc.) to more interesting curved spaces."
To be clear, it's not that I have worked with this in the past and merely forgot or something like that. I was thinking about linear transformations and essentially had the idea of taking that power to curved spaces and had no clue what would get me there or if that was even possible. Thank you for posting this. You saved me a lot of trouble.
I find that Google keeps getting worse at these types of queries. I wonder what kind of metrics would incentivize Google to get better at finding these.
Try to find this article without filtering for the hackaday domain and without using the exact title. tl;dr of the article: these are triangle/wedge-shaped stencil holes for SMD (surface-mount) soldering on square/rectangular pads.
This sort of thing intrigues me tremendously and I'm going over the text. At the same time, I always have a certain "what could you really get" feeling about systems that begin with machinery from objects with lots of structure (differentiable manifolds) and generalize and generalize until they are dealing with a structure that seems utterly arbitrary. I mean, locally, "almost everywhere" (and similar caveats), the characteristics of a point of a differentiable manifold determine "nearly everything" about the points in its neighborhood. Conversely, one node of a graph has no necessary relation to the next node.
So what exactly do we get from our complex machinery? Are the theorems ultimately more about "summation processes on graphs" than about graphs themselves?
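For concreteness, the kind of theorem I have in mind is discrete Gauss-Bonnet: summing a purely local quantity (the angle defect) over the vertices of a mesh pins down a global topological invariant. A rough sketch, with a tetrahedron hardcoded purely for illustration:

    import numpy as np

    # Vertices and faces of a regular tetrahedron (illustrative mesh).
    V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
    F = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]

    def angle(p, q, r):
        # Interior angle at vertex p of triangle (p, q, r).
        u, w = q - p, r - p
        return np.arccos(np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w)))

    # Angle defect (discrete Gaussian curvature) at each vertex:
    # 2*pi minus the angles of the incident triangles.
    defect = np.full(len(V), 2 * np.pi)
    for (i, j, k) in F:
        defect[i] -= angle(V[i], V[j], V[k])
        defect[j] -= angle(V[j], V[k], V[i])
        defect[k] -= angle(V[k], V[i], V[j])

    # Discrete Gauss-Bonnet: total defect = 2*pi * Euler characteristic.
    chi = len(V) - 6 + len(F)  # V - E + F, with 6 edges
    print(np.isclose(defect.sum(), 2 * np.pi * chi))  # True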
Can anyone give a quick summary of what you can do with this theory? I know a bit about geometry (up to the basics of de Rham cohomology and the Riemann curvature tensor, say).
What does the discrete version get you? What are its applications to CS?
> What does the discrete version get you? What are its applications to CS?
Everything! When you are programming a computer, everything must be discrete. If you need any differential geometry, it is discrete differential geometry then. You may want to hide this fact and pretend that your stuff is continuous, but at some point you will be computing derivatives by evaluating a function on nearby points. In that case, discrete differential geometry tells you which weights to put in your difference scheme.
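To make that last point concrete, here is a minimal sketch in plain NumPy (the function and step size are just illustrative): the choice of stencil weights is exactly what the discrete theory tells you.

    import numpy as np

    # A function and the point where we want its derivative f'(x).
    f, x, h = np.sin, 1.0, 1e-5

    # Forward difference: weights (-1, 1)/h, first-order accurate.
    fwd = (f(x + h) - f(x)) / h

    # Central difference: weights (-1/2, 0, 1/2)/h, second-order accurate.
    ctr = (f(x + h) - f(x - h)) / (2 * h)

    print(abs(fwd - np.cos(x)))  # error ~ 1e-6
    print(abs(ctr - np.cos(x)))  # error ~ 1e-11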
> If you need any differential geometry, it is discrete differential geometry then
This is a bit oversimplified/exaggerated.
We need to use discrete bits in our representation for a computer, but our numbers can be the coefficients of continuous functions or relations (e.g. polynomials or trigonometric polynomials), and so it is possible to represent continuous functions to whatever precision we have compute resources to handle without “discretizing” per se.
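A small sketch of what I mean, using NumPy's Chebyshev class (the function and degree are just illustrative): the function is stored as a finite list of coefficients, and differentiation acts exactly on those coefficients; there is no grid, no step size, no difference scheme.

    import numpy as np
    from numpy.polynomial import Chebyshev

    # Represent f(x) = exp(x) on [-1, 1] by 21 Chebyshev coefficients.
    f = Chebyshev.interpolate(np.exp, deg=20)

    # Differentiate by operating on the coefficients directly.
    df = f.deriv()

    print(abs(df(0.3) - np.exp(0.3)))  # ~1e-15, near machine precision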
In a past life I learned a little about PDEs and the theory justifying why finite element methods work to approximate solutions.
I never explicitly ran into anything about discrete differential geometry, but that doesn't mean it wasn't there all along, lurking beneath (or perhaps above?) my level of understanding.
Some reading: Evans, Partial Differential Equations; Wendland, Scattered Data Approximation; Wahba, Spline Models for Observational Data.
Applications of discrete curvature have been around a while, for example this PDF [0] from 2003, so maybe this paper provides theoretical context for existing work.
[0] "Anisotropic Polygonal Remeshing"
Pierre Alliez, David Cohen-Steiner, Olivier Devillers, Bruno Lévy, Mathieu Desbrun
It seems useful to distinguish between problems that involve applying transforms to meshes and those that don't. There seems to be a lot of interest in using math to improve meshes for surface analysis, but there are also many problems that are not like that, hence it's worth identifying this up front...
Learning a new topic also entails learning its terminology. As a newbie to the subject, I'm amused by chapter headings like "Abstract Simplicial Complex" that seem like an oxymoron :-)
I’m not convinced “abstract” is a good word here, but what they mean is that it doesn’t have any metrical/geometric relationships beyond the graph structure.
FYI, the word 'complex' in mathematics is often used similarly to its usage as an English noun - "a whole made up of interrelated parts", e.g. a building complex, apartment complex, military-industrial complex. A 'simplicial complex' is then, informally, "a whole made up of interrelated simplexes".
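In code, the "abstract" part is quite literal: an abstract simplicial complex is nothing more than a family of vertex sets closed under taking subsets. A toy sketch (the frozenset representation is just one convenient choice):

    from itertools import combinations

    def simplicial_complex(top_faces):
        """Close a family of simplices (vertex sets) under subsets."""
        K = set()
        for face in top_faces:
            for k in range(1, len(face) + 1):
                K.update(frozenset(s) for s in combinations(face, k))
        return K

    # Two triangles glued along the edge {1, 2}: no coordinates,
    # no lengths, no angles -- only "which vertices span a simplex".
    K = simplicial_complex([(0, 1, 2), (1, 2, 3)])
    print(sorted(map(sorted, K)))
    # [[0], [0, 1], [0, 1, 2], [0, 2], [1], [1, 2], [1, 2, 3], ...]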
Very interesting. I've been doing a lot of programming work in 3D computational geometry lately, and spent a lot of time learning/relearning stuff about differential geometry, particularly when applied to discrete meshes. From its title this sounds like a great book to read for people doing computational geometry work.
If somebody wants to discuss something about it, I am happy to hear about it. (Not that I am involved in any way. But I read it once, understood most of it to a certain depth, and I am always interested in how people use it.)
I took the class these notes were based on at Caltech with Peter Schröder - it was one of the most interesting courses I ever took. Glad to see these notes show up on HN.
This is a very good book. It covers an interesting topic in a readable way with a lot of attention to exposition and intuition (rather than theorem->lemma->proof style). There are tons of good (instructive) diagrams, which is important for developing geometrical intuition, and the sections are in "bite-sized" chunks. recommend++
If you care about going deep under the surface, I would think that the underlying optimization problems have many ties to DG. Not sure about the OP's PDF specifically.
>In some sense DG is the foundation of all modern AI/ML/DL approach.
In what sense? I love it when people say pretentious things like this to sound authoritative.
Just because you take derivatives doesn't mean it's calculus. DG is not calculus - DG is calculus in spaces that aren't globally flat but are locally flat. Very different.
>The bar is just too high for normal people
In most schools that have DG classes, they're junior level. Certainly this class is a junior-level class.
Per the Nash embedding theorem (https://en.m.wikipedia.org/wiki/Nash_embedding_theorem), DG actually is multivariate calculus. However, DG can provide an intrinsic point of view on curved spaces without explicit reference to such an embedding. It would indeed be interesting to see a coherent description of ML in a DG framework.
lol, so backwards. I can put a fish in a fish tank in my living room; it doesn't make it a mammal.
>It would indeed be interesting to see a coherent description of ML in a DG framework.
There is no need for such a thing - you don't need the machinery of connections, bundles, Christoffel symbols, or whatever else in order to take derivatives. You use those tools to be able to take derivatives in places where you can't do freshman calculus, not the other way around (bringing those tools to places where you already can do freshman calculus). It makes no sense.
It's like reasoning that because Wiles used algebraic geometry to resolve Fermat's Last Theorem, solving quadratic equations is really about algebraic geometry.
There are quite a few places where differential geometry is very useful to know. Generally, if you want to know at a fundamental level what it means to learn and what inference truly is, you will find yourself in dire need of learning differential geometry. The easiest example: the deeper your understanding of differential geometry, the better you'll be able to reason about Hamiltonian Monte Carlo algorithms.
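For instance, the workhorse inside HMC is the leapfrog integrator, and the facts that make it work (it is symplectic and volume-preserving, so the Hamiltonian is nearly conserved and acceptance rates stay high) are differential-geometric statements about flows on phase space. A minimal sketch, with a 1D standard Gaussian target and purely illustrative parameters:

    import numpy as np

    def leapfrog(q, p, step, n_steps, grad_U):
        # Integrate Hamilton's equations for H(q, p) = U(q) + p**2 / 2.
        p = p - 0.5 * step * grad_U(q)     # half step in momentum
        for _ in range(n_steps - 1):
            q = q + step * p               # full step in position
            p = p - step * grad_U(q)       # full step in momentum
        q = q + step * p
        p = p - 0.5 * step * grad_U(q)     # closing half step in momentum
        return q, p

    grad_U = lambda q: q                   # toy target: U(q) = q**2/2, i.e. N(0, 1)
    H = lambda q, p: q**2 / 2 + p**2 / 2

    q0, p0 = 1.0, 0.5
    q1, p1 = leapfrog(q0, p0, step=0.1, n_steps=20, grad_U=grad_U)
    print(abs(H(q1, p1) - H(q0, p0)))      # small: energy error is O(step**2)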
Information geometry also applies differential geometry, where you can think of learning as trajectories on a statistical manifold.
K-FAC, mirror descent, and the natural gradient also derive from, or are closely connected to, work in information geometry. There's recent work connecting it to optimal transport. Optimal transport is an important idea that pops up in many surprising places, from GANs to programming language theory by way of modeling concurrency (the Kantorovich metric for bisimulation), for example. Understanding differential geometry allows you to see and navigate such rich connections at a deep level. I heartily recommend it. A good place to start is: https://metacademy.org/roadmaps/rgrosse/dgml
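To make "natural gradient" concrete, a toy sketch (not from the roadmap above; the model and learning rate are illustrative): fit a 1D Gaussian by maximum likelihood, parameterized as (mu, log sigma). For this family the Fisher information matrix is known in closed form, F = diag(1/sigma^2, 2), and preconditioning the gradient by F^-1 is a steepest-descent step measured on the statistical manifold rather than in raw parameter coordinates.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(3.0, 2.0, size=1000)  # illustrative data

    mu, log_sigma, lr = 0.0, 0.0, 0.5
    for _ in range(100):
        sigma2 = np.exp(2 * log_sigma)
        # Gradients of the mean negative log-likelihood.
        g_mu = (mu - data.mean()) / sigma2
        g_ls = 1.0 - ((data - mu) ** 2).mean() / sigma2
        # Natural gradient: precondition by the inverse Fisher matrix
        # F = diag(1 / sigma2, 2).
        mu -= lr * sigma2 * g_mu
        log_sigma -= lr * g_ls / 2.0

    print(mu, np.exp(log_sigma))  # ~3.0, ~2.0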
This is beyond my expertise and involves some words that I don’t understand, but my understanding is that quite a few people in ML (e.g., Michael Jordan) specifically care about things like gradient flows in the space of probability measures, and have research questions involving quite challenging differential geometry. You’d have to browse his papers to get a better understanding, as I’m not qualified to expand much on this subject.