Taking a look at the comparison paper [0] provided on that site, there does seem to be a significant difference. The images generated with MS are more prone to long horizontal lines while WFC produces larger blocks of texture. This is hard to spot with more homogeneous textures like pipes, but it's more apparent with something like the grey cat faces, where WFC generates a variety of shapes and MS mostly makes very long ones. Going back up to section 2 I found the explanation: "Model synthesis sweeps through the grid in scanline order. WFC chooses the lowest entropy cell." The results from WFC look more natural and varied to me so I'd call this a significant advance.
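To make the difference concrete, here is a minimal C sketch (illustrative names only, not the library's actual API) of the one step where the two variants diverge: choosing which undecided cell to collapse next. Real WFC implementations usually weight the entropy by pattern frequency; plain candidate counts are enough to show the idea.

    #include <limits.h>

    /* counts[i] = number of patterns still allowed in cell i
       (1 = already decided, 0 = contradiction). */

    /* Model Synthesis: next undecided cell in scanline order. */
    int select_scanline(const int *counts, int ncells) {
        for (int i = 0; i < ncells; i++)
            if (counts[i] > 1) return i;
        return -1; /* everything is decided */
    }

    /* WFC: undecided cell with the fewest remaining candidates
       ("lowest entropy"). */
    int select_lowest_entropy(const int *counts, int ncells) {
        int best = -1, best_count = INT_MAX;
        for (int i = 0; i < ncells; i++)
            if (counts[i] > 1 && counts[i] < best_count) {
                best = i;
                best_count = counts[i];
            }
        return best;
    }

The scanline sweep is presumably what produces the long horizontal runs in the comparison images, while the lowest-entropy choice jumps around the grid and grows decided regions outward from wherever the constraints are tightest.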
But it does seem appropriate to say they're variations of the same algorithm. As for the name, I don't like either: Wave Function Collapse is based on a silly pop-science interpretation of quantum mechanics, and Model Synthesis is extremely generic and forgettable. And it's not even a synthesis, more of a filtering process. I guess I'd stick with Wave Function Collapse.
I thought the same - though it actually gets its name from wave function collapse in quantum physics: each pixel has multiple potential states it can collapse into, which is what produces the procedurally generated map.
As someone who has actually studied physics (including quantum mechanics) I find this analogy extremely poor. Keeping multiple potential candidates which you gradually reduce into a single solution is a very common approach for solving a variety of problems (e.g. sudoku solver). This is not at all what makes the wave function interesting or useful in quantum mechanics. And the whole concept that you iteratively collapse each "wave element" isn't a thing which is present in quantum mechanics at all.
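For what it's worth, the "keep candidate sets and gradually narrow them" pattern described above looks roughly like this in a sudoku setting (a rough C sketch with illustrative names; `peers` would hold the 20 cells sharing a row, column, or box with each cell):

    #include <stdint.h>

    /* Each cell keeps a 9-bit mask of digits it could still be;
       a fresh cell starts as ALL_DIGITS and is narrowed over time. */
    typedef uint16_t candset;              /* bits 0..8 = digits 1..9 */
    #define ALL_DIGITS ((candset)0x1FF)

    /* Placing digit d collapses the cell to a single candidate and
       eliminates d from the candidate sets of its peers. */
    static void place(candset cells[81], int cell, int d,
                      const int peers[81][20]) {
        cells[cell] = (candset)(1u << (d - 1));
        for (int k = 0; k < 20; k++)
            cells[peers[cell][k]] &= (candset)~(1u << (d - 1));
    }

Nothing quantum about it - just sets shrinking under constraints.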
Quantum does not mean "small and discrete", it only means "discrete", as in "quantisation" - fitting values into a discrete defined set. For instance in mathematics the operation of "rounding" quantises fractions into the set of integers.
I think it is (in this usage) meant as the gain in human understanding of the photoelectric effect (Einstein 1905), specific heat, diamagnetism, and so forth once quantum mechanics began development. The leap from classical mechanical theories being fundamental to quantum mechanical theories being fundamental (with classical physics emerging from that), in other words, was a big shift even though subatomic particles themselves, and their bound states, are generally very small.
Is it though? I've heard that argument, but to me it still makes sense as it's used colloquially. It is a discrete jump forward; if you're not talking about electrons and just go by the Latin root, a quantum can be arbitrarily large if it's defined over an appropriate set. A quantum leap between scientific epochs is quite large, for example.
Also, rereading your comment: I don't agree with the definition "small and random" - a quantum is the smallest unit by which an arbitrary system's magnitude can change.
Here's my interpretation: a classical leap is when you are somewhere, you expend a bit of energy you have, and you end up somewhere else as a result. Basically, you do something and something happens.
A quantum leap, on the other hand, happens when you are somewhere just sitting, and boom, now you're somewhere else. It just happened randomly, without any energy input from you. So to bring this back to the idiom, an example of a quantum leap would be this: "I had a dream yesterday that gave me the insight I needed to finish my proof of XYZ theorem!"
The main example of a quantum leap that comes to mind is between energy levels of an electron of an atom, and that does involve emitting or absorbing a photon, carrying the energy that needs to be accounted for.
It's also not clear what this algorithm has to do with quantum physics. Maybe that's how it was discovered or it has applications to physics, but the algorithm itself doesn't require any advanced math or have anything to do with the wave function. It can be described using elementary combinatorics.
It's reasonable to be surprised when a computer science term uses a metaphor, but I don't think it's fair to call it a poor name unless you also object to almost every other name in programming.
Strings have nothing to do with textiles. And, very confusingly, they aren't even related to fibers or threads. Zippers aren't related to textiles either.
Trees and bloom filters are not related to botany. Heaps and garbage collection are not about municipal waste management. Graph coloring, despite the term, has nothing to do with either pictures or pigments. Red-black trees aren't colored or botanical.
Bubble sort does not involve fluid dynamics. Neither does bucket or cocktail shaker sort for that matter.
Circular buffers are not round. You can't eat a spaghetti stack or poke your finger on a cactus stack.
From the above link you can try a single slit, double slit, triple slit, step, or spike, or load an arbitrary image as a potential, and do experiments in a 2D box.
>This WebGL program simulates the quantum mechanics of a single particle confined in a 2D box, where inside this box the user can create new potential barriers and scatter Gaussian wavepackets off them. The full instructions are found here.
> btw. Wave function collapse in quantum physics is a completely speculative phenomenon. There is only apparent wave function collapse.
To be more precise, the non-linear Born-rule adjustment of the wave function after a measurement is strictly necessary for QM to match experiments. Whether this should be interpreted as a physical wave function collapse, as entanglement with the environment (MWI), as an update of probabilities over hidden variables (pilot wave), or as some other phenomenon is speculative, but the wave function must be "collapsed" onto the measured outcome to correctly predict experimental results.
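For reference, the rule being discussed is usually written like this: a measurement outcome a occurs with probability given by the Born rule, and the post-measurement state is the renormalized projection onto the corresponding eigenspace - this projection is the non-linear "collapse" step (assuming a normalized state):

    P(a) = \| \hat{P}_a |\psi\rangle \|^2,
    \qquad
    |\psi\rangle \;\longrightarrow\; \frac{\hat{P}_a |\psi\rangle}{\| \hat{P}_a |\psi\rangle \|}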
However you put it, [to a classical observer] the wave function still "collapses" after a measurement. This is most famously seen by adding a detector inside one of the slits for the double-slit experiment: the original wave function is not consistent with the experiment, you have to update the wave function after the interaction (or lack of interaction) with the detector.
Sure, in MWI the wave function of the universe never collapses, but something similar still happens for "parts" of the universal wave function.
As far as I can tell, that is just explaining the numeric value of the Born rule, not the wave function collapse/update. I have not found any claims that Gleason's theorem (which I was unaware of, so thank you for pointing it out!) solves the measurement problem.
In quantum computing the consequence of the theorem (as I understand it) is that once you have asked every yes/no question there is to ask about the state, the state has collapsed.
I didn't think you needed the theorem for that; quantum entanglement behaves very similarly to information theory (specifically, information provenance).
If you like that simulation you might also like my own WebGL Gross-Pitaevskii Equation (GPE) solver. It uses RK4 to simulate a 2D box of ultracold atoms undergoing Bose-Einstein condensation.
The GPE models a condensate as a single-particle quantum wavefunction with a non-linear form of the Schrödinger Equation, so you get some interesting behaviour from the non-linearity while the simulation remains computationally feasible.
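For context, the time-dependent Gross-Pitaevskii equation is just the Schrödinger equation with an extra nonlinear term proportional to the local density |\psi|^2, with interaction strength g:

    i\hbar \,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
      = \left( -\frac{\hbar^2}{2m}\,\nabla^2 + V(\mathbf{r},t)
               + g\,|\psi(\mathbf{r},t)|^2 \right) \psi(\mathbf{r},t)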
You can interact with the potential term by clicking and dragging inside the 2D box.
You can simulate quantum dynamics but the number of bits required is exponential with respect to the number of qubits simulated. So basically you can simulate really simple quantum systems but it becomes pretty much intractable when the system you are simulating has many parts.
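A back-of-the-envelope sketch of that exponential cost: a dense n-qubit state vector has 2^n complex amplitudes, so at double precision (16 bytes each) you are already at 16 GiB for 30 qubits and 16 TiB for 40.

    #include <complex.h>
    #include <stdio.h>

    /* Memory needed for a dense n-qubit state vector: 2^n amplitudes. */
    int main(void) {
        for (int n = 10; n <= 40; n += 10) {
            unsigned long long amps = 1ULL << n;
            double gib = (double)amps * sizeof(double complex)
                         / (1024.0 * 1024.0 * 1024.0);
            printf("%2d qubits: %llu amplitudes, ~%.4f GiB\n", n, amps, gib);
        }
        return 0;
    }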
This gets brought up every time it's mentioned, and I think it's interesting. It's only tangentially related to actual physics, but parallelizing WFC for large inputs might be exactly the kind of problem a quantum computer could assist with.
I've never heard this referred to as "Wave Function Collapse" before. Isn't this just constraint satisfaction being solved via backtracking? Like this is the standard way of solving the N-queens problem. Make a random choice, propagate the constraints, and repeat, backtracking if you reach a contradiction.
After reading [1], I'm not really sure backtracking is even an integral part of WFC, as apparently it sometimes fails to find a solution and needs to retry from the start. Backtracking seems to be more of an optional add-on.
I think the statistical aspect is an important part of the algorithm; it's trying to match the statistics of the input data. I guess you can think of it as an approximate sampler for Markov Random Fields.
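A minimal C sketch of those two points (illustrative names, not the linked implementation's API): the collapse step samples among the still-allowed patterns with probability proportional to how often each appears in the input, and a contradiction simply signals the caller to restart rather than backtrack.

    #include <stdlib.h>

    /* allowed[p] != 0 if pattern p is still possible for this cell;
       freq[p] = how often pattern p occurs in the sample.
       Returns the chosen pattern, or -1 on contradiction. */
    static int collapse_weighted(const int *allowed, const int *freq,
                                 int npatterns) {
        int total = 0;
        for (int p = 0; p < npatterns; p++)
            if (allowed[p]) total += freq[p];
        if (total == 0) return -1;       /* contradiction: restart the run */
        int r = rand() % total;
        for (int p = 0; p < npatterns; p++) {
            if (!allowed[p]) continue;
            if (r < freq[p]) return p;
            r -= freq[p];
        }
        return -1; /* not reached */
    }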
I tried to use this to generate Numberlink puzzles.
As input I gave it a solved image from https://github.com/thomasahle/numberlink .
However, I never even got to the "progress screen" / "cells collapsed" count. I guess the input image has to be very small for this to work well.
I also tried generating some large images (like 1000x1000 pixels output, but small 64x64 input), but it only collapses around 100 cells/second, so this still takes about 3 hours assuming no contradictions are encountered.
This is a typical idiom frequently seen in modern C and C++ code. As https://github.com/krychu/wfc#how-to-use-the-library says, you need to define a macro in a C file in order to "expand" the actual code there.
Having the code in the header gives a bit more flexibility in how you lay out your files. IMHO this is an awkward consequence of the lack of a de facto standard build system for C/C++.
I hate this idiom every time I come across it. I like to look at the header file to see the interface and important documentation, and this just obscures it. Depending on your compiler it can make debugging a huge pain as well.
> I like to look at the header file to see the interface and important documentation
This still works nicely: the important stuff (documentation and public interface) is at the top, followed by the 'unimportant' implementation at the bottom of the file.
STB-style headers are a bit different from typical C++ 'single header libraries' in that they put the declaration and implementation into separate sections in the header file (with the implementation inside an #ifdef/#endif pair), while C++ headers are usually a wild mix of class declarations intermixed with implementation code in template and inline functions (which is indeed harder to read).
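A minimal skeleton of that layout (names are made up; this is not the actual contents of wfc.h):

    /* mylib.h -- illustrative STB-style single-file library */

    /* ---- declarations: what every includer sees ------------------ */
    #ifndef MYLIB_H
    #define MYLIB_H

    /* Public API and its documentation live here, at the top. */
    int mylib_do_thing(int x);

    #endif /* MYLIB_H */

    /* ---- implementation: compiled in exactly one translation unit  */
    #ifdef MYLIB_IMPLEMENTATION

    int mylib_do_thing(int x) {
        return 2 * x;
    }

    #endif /* MYLIB_IMPLEMENTATION */

Exactly one .c file in the project defines MYLIB_IMPLEMENTATION before including the header; every other file just includes it and only ever sees the declarations.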
I don't quite understand how debugging is affected? E.g. there are (usually) no inline functions in STB-style headers.
It does seem that 'modern' idioms tend to just be doing the same thing in a more obfuscated manner. Just a .c/.h pair would have been easier to add to your source.
This is a valid point. I was on the fence between doing a .c/.h pair and a single .h, but finally took inspiration from https://github.com/nothings/stb.
This might be an unpopular opinion, but compiling a program into a local directory shouldn't try to install packages globally. `make install`, I could understand doing that.
I had no idea that cmake would do this, even after reading quite a lot about cmake vs. make and how to write various makefiles for them. I posit that the C build tools are fundamentally hard to comprehend.
Given the success of build tools like Gradle, build.rs, or anything else that packages a Turing-complete language to steer a build, apparently many people want such daemons in their home directory.
Just because you can do something, doesn't mean you should. A `build.rs` is supposed to be for compiling non-Rust code and stuff like that, not mucking about with the package manager – and by and large, people don't use it for mucking about with the package manager.
It's a slow-to-compile idiom. The only reason to use it is if you expect people to download the header and plop it in. Basically it's an end-run around build systems.
In my tests with STB-style headers written in C, between a few thousand and a few tens of thousands of lines of code, the compilation speed difference between including the header with the implementation disabled versus including it with the implementation section removed from the file is absolutely negligible. This is because the implementation block is already skipped by the preprocessor and never parsed by the compiler frontend.
Again, this is different from typical C++ single-header-libraries (e.g. pretty much all C++ stdlib headers) where the implementation code is inlined or in template code.
This is a sad idiom followed by C and C++ libraries whose authors cannot be bothered to learn build tools, treating C and C++ as if they were scripting languages.
It's a completely valid and useful idiom. Whatever build system you use, it's trivial to add these kind of libraries to the project.
The fact that one has to learn complex build tools (and often multiple ones) is the sad thing here. Luckily there are also unity builds, which are extremely handy in many contexts, because they are fast and easy to use across different platforms without having to deal with annoying external tools.
Not entirely. If you think about it, a resulting C binary is just data and text - it's all compiled into a single unit of globals and functions, much like a unity build layout. Regarding optimization, it allows for whole-program optimization without the need for a link-time optimizer. Regarding convenience, it spares one from function prototypes and from using Make or CMake. Should one use Make to speed up build times, it's as simple as tossing the unity code you're not working on into a precompiled header; this primes the compiler for a quick compilation of the newly written code. As for design, I find unity builds resemble the design mindset of a mechanical, civil, or electrical engineer: a single assembly (main.c) incorporates sub-assemblies (*.c), the way a CAD designer would assemble SolidWorks parts for a vehicle or spacecraft.
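A minimal example of that layout (file names are illustrative):

    /* main.c -- unity build: the "top-level assembly" includes the other
       .c files directly, so the whole program is one translation unit. */
    #include "physics.c"
    #include "renderer.c"
    #include "audio.c"

    int main(void) {
        /* call into the code defined in the included .c files */
        return 0;
    }

The entire build is then a single compiler invocation (e.g. cc main.c), and the compiler sees the whole program at once.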
And now you need to add a .h wherever you need to call those functions. That .h will need to guard against being included multiple times, so it isn't just a trivial declaration.
All code in the header leads to a lot of duplicated work for the compiler, which means slow build times. You should avoid this unless it is absolutely necessary for performance to inline methods.
An STB-style header with the implementation disabled (which is the default) looks exactly the same to the compiler as a regular C header which only contains public API declarations (e.g. just struct declarations and function prototypes, and most importantly, no inline code).
All code that would otherwise live in .c files is between an #ifdef/#endif block which is only activated in a single compilation unit in the whole project.
Not sure how this approach would lead to redundant "function definitions" which would need to be deduped by the linker. The only overhead is in the preprocessor for skipping the implementation block, but that happens pretty much at IO speed - it's not comparable with the parsing overhead in typical C++ headers with template and inline code.
Things can be locally simple but have complex and chaotic implications. E.g. just by pushing the complexity up a level and dumping it on literally everyone else. I've learned over the years that complexity sometimes has a right place and a wrong place. (Most times complexity ends up everywhere, TBH.) I think resolving references between different parts of code in different compilation units is definitely within the purview of a compiler/build system, and should not be up to every programmer to flail at poorly with #ifdefs.
I've come across a lot of C programs that should be libraries, but they're written as an executable tool. They tend to assume that they're the only program in existence, never free their memory, use non-constant globals, stuff a whole bunch of logic into main(), etc.
By contrast, wfc is a breath of fresh air. The good stuff is in wfc.h, and the tool is in wfctool.c, which serves as an example for people who want to use the library. Consumers of the library have the option of writing a thin wrapper to produce an object file, or directly including it, if they prefer.
If this was a gargantuan library, it would make more sense to dicker about what's going into your build. But it's not a gargantuan library; it's tiny, and has the appropriate guard to prevent multiple compilations.
It is an atrocious fad imported a few years ago by young programmers coming from scripting languages, because they don't grasp modularity à la C, and adding -lthatlib to a Makefile or a build script seems out of their reach :-(
- https://robertheaton.com/2018/12/17/wavefunction-collapse-al..., 44 comments https://news.ycombinator.com/item?id=18744696
- https://www.boristhebrave.com/2020/04/13/wave-function-colla..., 18 comments https://news.ycombinator.com/item?id=18321168
More threads: https://hn.algolia.com/?q=wave+function+collapse